much prior work on resource allocation in cognitive radio networks has focused on the _ dynamic spectrum access _ model in which the secondary users seek transmission opportunities for their packets on vacant primary channels in frequency , time , or space . under this model ,the primary users are assumed to be oblivious of the presence of the secondary users and transmit whenever they have data to send .secondly , a collision model is assumed for the physical layer in which if a secondary user transmits on a busy primary channel , then there is a collision and both packets are lost .we considered a similar model in our prior work where the objective was to design an opportunistic scheduling policy for the secondary users that maximizes their throughput utility while providing tight reliability guarantees on the maximum number of collisions suffered by a primary user over _ any _ given time interval .we note that this formulation does not consider the possibility of any cooperation between the primary and secondary users .further , it assumes that the secondary user activity does not affect the primary user channel occupancy process .there is a growing body of work that investigates alternate models for the interaction between the primary and secondary users in a cognitive radio network . in particular ,the idea of cooperation at the physical layer has been considered from an information - theoretic perspective in many works ( see and the references therein ) .these are motivated by the work on the classical interference and relay channels . the main idea in these worksis that the resources of the secondary user can be utilized to improve the performance of the primary transmissions . in return , the secondary user can obtain more transmission opportunities for its own data when the primary channel is idle .these works mainly treat the problem from a physical layer / information - theoretic perspective and do not consider upper layer issues such as queueing dynamics , higher priority for primary user , etc .recent work that addresses some of these issues includes .specifically , considers the scenario where the secondary user acts as a relay for those packets of the primary user that it receives successfully but which are not received by the primary destination .it derives the stable throughput of the secondary user under this model . use a stackelberg game framework to study spectrum leasing strategies in _ cooperative cognitive radio networks _ where the primary users lease a portion of their licensed spectrum to secondary users in return for cooperative relaying . study and compare different physical layer strategies for relaying in such cognitive cooperative systems .an important consequence of this interaction between the primary and secondary users is that the secondary user activity can now potentially influence the primary user channel occupancy process .however , there has been little work in studying this scenario .exceptions include the work in that considers a two - user setting where collisions caused by the opportunistic transmissions of the secondary user result in retransmissions by the primary user . in this paper , we study the problem of opportunistic cooperation in cognitive networks from a _ network utility maximization _ perspective , specifically taking into account the above mentioned higher - layer aspects . 
to motivate the problem and illustrate the design issues involved , we first consider a simple network consisting of one primary and one secondary user and their respective access points in sec .[ section : femto_basic ] . this can model a practical scenario of recent interest , namely a cognitive femtocell , as discussed in sec .[ section : femto_basic ] .we assume that the secondary user can cooperatively transmit with the primary user to increase its transmission success probability . in return , the secondary user can get more opportunities for transmitting its own data when the primary user is idle .we formulate the problem of maximizing the secondary user throughput subject to time average power constraints in sec .[ section : femto_objective ] .unlike most of the prior work on resource allocation in cognitive radio networks , the evolution of the system state for this problem depends on the control actions taken by the secondary user .here , the system state refers to the channel occupancy state of the primary user . because of this dependence, this problem becomes a constrained markov decision problem ( mdp ) and the greedy `` drift - plus - penalty '' minimization technique of lyapunov optimization that we used in is no longer optimal .such problems are typically tackled using markov decision theory and dynamic programming .for example , uses these tools to derive structural results on optimal channel access strategies in a similar two - user setting where collisions caused by the opportunistic transmissions of the secondary user cause the primary user to retransmit its packets .however , this approach requires either extensive knowledge of the dynamics of the underlying network state ( such as state transition probabilities ) or learning based approaches that suffer from large convergence times . instead , in sec .[ section : solution ] , we use the recently developed framework of maximizing the _ ratio _ of the expected total reward over the expected length of a renewal frame to design a control algorithm .this framework extends the classical lyapunov optimization method to tackle a more general class of mdp problems where the system evolves over renewals and where the length of a renewal frame can be affected by the control decisions during that period .the resulting solution has the following structure : rather than minimizing a `` drift - plus - penalty '' term every slot , it minimizes a `` drift - plus - penalty ratio '' over each renewal frame .this can be achieved by solving a sequence of unconstrained _ stochastic shortest path _ ( ssp )problems and implementing the solution over every renewal frame .while solving such ssp problems can be simpler than the original constrained mdp , it may still require knowledge of the dynamics of the underlying network state .learning based techniques for solving such problems by sampling from the past observations have been considered in .however , these may suffer from large convergence times .remarkably , in sec . [ section : solving ] , we show that for our problem , the `` drift - plus - penalty ratio '' method results in an online control algorithm that _ does not require any knowledge of the network dynamics or explicit learning _ , yet is optimal . in this respect , it is similar to the traditional greedy `` drift - plus - penalty '' minimizing algorithms of .we then extend the basic model to incorporate multiple secondary users as well as time - varying channels in sec .[ section : femto_extensions ] . 
finally , we present simulation results in sec .[ section : femto_sim ] .we consider a network with one primary user ( pu ) , one secondary user ( su ) and their respective base stations ( bs ) .the primary user is the licensed owner of the channel while the secondary user tries to send its own data opportunistically when the channel is not being used by the primary user .this model can capture a femtocell scenario where the primary user is a legacy mobile user that communicates with the macro base station over licensed spectrum ( fig .[ fig : femto ] ) .the secondary user is the femtocell user that does not have any licensed spectrum of its own and tries to send data opportunistically to the femtocell base station over any vacant licensed spectrum .similar models of _ cooperative cognitive radio networks _ have been considered in .this can also model a single server queueing system with two classes of arrivals where one class has a strictly higher priority over the other class .we consider a time - slotted model .we assume that the system operates over a frame - based structure .specifically , the timeline can be divided into successive non - overlapping frames of duration \overset{\vartriangle}{=} ] .for each , the frame length ] such that can take any value between and .suppose the primary user is active in slot and the secondary user allocates power for cooperative transmission .then the random success / failure outcome of the primary transmission is given by an indicator variable and the success probability is given by .the function is known to the network controller and is assumed to be non - decreasing in .however , the value of the random outcome may not be known beforehand .note that setting corresponds to a non - cooperative transmission and the success probability for this case becomes and we denote this by .likewise , we denote by .thus , for all .we assume that is such that it can be supported even when the secondary user never cooperates , i.e. , .this means that the primary user queue is stable even if there is no cooperation .further , for all , the frame length \geq 1 ] , satisfies the following for all , regardless of the policy : \right\ } } \leq d \label{eq : second_moment}\end{aligned}\ ] ] this follows from the assumption that the primary user queue is stable even if there is no cooperation . in appendix c ,we exactly compute such a that satisfies ( [ eq : second_moment ] ) . when the primary user is idle in slot and the secondary user allocates power for its own transmission , it gets a service rate given by .this can represent the success probability of a secondary transmission with a bernoulli service process .this can also be used to model more general service processes .we assume that there exists a finite constant such that for all .given these control decisions , the primary and secondary user queues evolve as follows : + a_{pu}(t ) \label{eq : ch4_queues1 } \\q_{su}(t+1 ) = \max[q_{su}(t ) - \mu_{su}(p(t ) ) , 0 ] + r_{su}(t ) \label{eq : ch4_queues2}\end{aligned}\ ] ] where .consider any control algorithm that makes admission control decision and power allocation every slot subject to the constraints described in sec .[ section : femto_decisions ] .note that if the primary queue backlog , then this power is used for cooperative transmission with the primary user .if , then this power is used for the secondary user s own transmission . 
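As a concrete illustration of the slot dynamics just described, the sketch below advances the two queues by one slot, with the secondary user's power going to cooperation whenever the primary queue is non-empty and to its own transmission otherwise. This is a minimal sketch assuming Bernoulli primary arrivals and Bernoulli service outcomes; the identifiers (phi, mu_su, etc.) are illustrative assumptions, not names from the paper.

```python
import random

def one_slot(q_pu, q_su, lam_pu, r_su, p, phi, mu_su):
    """Advance the primary/secondary queues by one slot (illustrative sketch).

    q_pu, q_su : current primary and secondary backlogs (packets)
    lam_pu     : Bernoulli arrival probability of the primary user
    r_su       : secondary packets admitted in this slot
    p          : secondary power spent in this slot
    phi(p)     : primary success probability under cooperation power p
    mu_su(p)   : secondary service rate when the primary channel is idle
    """
    if q_pu > 0:
        # PU busy: the power p assists the primary transmission.
        served_pu = 1 if random.random() < phi(p) else 0
        served_su = 0
    else:
        # PU idle: the power p serves the secondary user's own queue.
        served_pu = 0
        served_su = 1 if random.random() < mu_su(p) else 0
    a_pu = 1 if random.random() < lam_pu else 0
    q_pu = max(q_pu - served_pu, 0) + a_pu
    q_su = max(q_su - served_su, 0) + r_su
    return q_pu, q_su
```

Iterating this step over slots reproduces the frame structure: a "PU busy" period lasts as long as the primary backlog stays positive, and a new frame begins at the first idle slot that follows it.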
define the following time - averages under this algorithm : where the expectations above are with respect to the potential randomness of the control algorithm .assuming for the time being that these limits exist , our goal is to design a joint admission control and power allocation policy that maximizes the throughput of the secondary user subject to its average and peak power constraints and the scheduling constraints imposed by the basic model .formally , this can be stated as a stochastic optimization problem as follows : it will be useful to define the primary queue backlog as the `` state '' for this control problem .this is because the state of this queue ( being zero or nonzero ) affects the control options as described before .note that the control decisions on cooperation affect the dynamics of this queue .therefore , problem ( [ eq : cmdp ] ) is an instance of a constrained markov decision problem .it is well known that in order to obtain an optimal control policy , it is sufficient to consider only the class of stationary , randomized policies that take control actions only as a function of the current system state ( and independent of past history ) .a general control policy in this class is characterized by a stationary probability distribution over the control action set for each system state .let denote the optimal value of the objective in ( [ eq : cmdp ] ) .then using standard results on constrained markov decision problems , we have the following : ( optimal stationary , randomized policy ) : there exists a stationary , randomized policy _ stat _ that takes control decisions every slot purely as a ( possibly randomized ) function of the current state while satisfying the constraints for all and provides the following guarantees : where denote the time - averages under this policy . [ lem : femto_one ] we note that the conventional techniques to solve ( [ eq : cmdp ] ) that are based on dynamic programming require either extensive knowledge of the system dynamics or learning based approaches that suffer from large convergence times .motivated by the recently developed extension to the technique of lyapunov optimization in , we take an different approach to this problem in the next section .recall that the start of the frame , , is defined as the first slot when the primary user becomes idle after the `` pu busy '' period of the frame .let denote the secondary user queue backlog at time .also let be the power expenditure incurred by the secondary user in slot . for notational convenience , in the following we will denote by noting the dependence on is implicit .then the queueing dynamics of satisfies the following : \nonumber\\ & \qquad + \sum_{t = t_k}^{t_{k+1}-1 } r_{su}(t ) \label{eq : u_su}\end{aligned}\ ] ] where denotes the number of new packets admitted in slot and denotes the start of the frame .the above expression has an inequality because it may be possible to serve the packets admitted in the frame during that frame itself . in order to meet the time average power constraint, we make use of a virtual power queue which evolves over frames as follows : p_{avg } + \sum_{t = t_k}^{t_{k+1}-1 } p(t ) , 0 ] \label{eq : x_su}\end{aligned}\ ] ] where = t_{k+1 } - t_k ] is a ( random ) function of the control decisions taken during the frame . 
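The two frame-level recursions referenced above (labels eq:u_su and eq:x_su) are damaged in this extraction. A reconstruction consistent with the surviving fragments and with the surrounding text (the first relation is an inequality because packets admitted within a frame may already be served in that same frame) reads, as a sketch,

\begin{align}
U_{su}(t_{k+1}) &\le \max\Big[U_{su}(t_k)-\sum_{t=t_k}^{t_{k+1}-1}\mu_{su}\big(P(t)\big),\,0\Big]+\sum_{t=t_k}^{t_{k+1}-1}R_{su}(t), \\
X_{su}(t_{k+1}) &= \max\Big[X_{su}(t_k)-T[k]\,P_{avg}+\sum_{t=t_k}^{t_{k+1}-1}P(t),\,0\Big],
\end{align}

where $T[k]=t_{k+1}-t_k$ is the (random) frame length.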
in order to construct an optimal dynamic control policy, we use the technique of where a ratio of `` drift - plus - penalty '' is maximized over every frame .specifically , let denote the queueing state of the system at the start of the frame . as a measure of the congestion in the system, we use a lyapunov function ] is influenced by the policy through the success probabilities that are determined by secondary user power selections .further recall that these success probabilities are different during the `` pu idle '' and `` pu busy '' periods of the frame .an explicit policy that maximizes this expectation is given in the next section ._ queue update _ : after implementing this policy , update the queues as in ( [ eq : ch4_queues2 ] ) and ( [ eq : x_su ] ) . from the above , it can be seen that the admission control part ( [ eq : femto_adm_ctrl ] ) is a simple threshold - based decision that does not require any knowledge of the arrival rates or . in the next section , we present an explicit solution to the maximizing policy for the resource allocation in ( [ eq : femto_resource_alloc ] ) and show that , remarkably , it also does not require knowledge of or and can be computed easily .we will then analyze the performance of the _ frame - based - drift - plus - penalty - algorithm _ in sec .[ section : femto_analysis ] .the policy that maximizes ( [ eq : femto_resource_alloc ] ) uses only two numbers that we call and , defined as follows . is given by the solution to the following optimization problem : let denote the value of the objective of ( [ eq : p_0_define ] ) under the optimal solution .then , is given by the solution to the following optimization problem : note that both ( [ eq : p_0_define ] ) and ( [ eq : p_1_define ] ) are simple optimization problems in a single variable and can be solved efficiently . given and , on every slot of frame , the policy that maximizes ( [ eq : femto_resource_alloc ] ) chooses power as follows : that is , the secondary user uses the constant power for its own transmission during the `` pu idle''period of the frame , and uses constant power for cooperative transmission during all slots of the `` pu busy''period of the frame . note that and can be computed easily based on the weights associated with frame , and do not require knowledge of the arrival rates . our proof that the above decisions maximize ( [ eq : femto_resource_alloc ] ) has the following parts :first , we show that the decisions that maximize the ratio of expectations in ( [ eq : femto_resource_alloc ] ) are the same as the optimal decisions in an equivalent infinite horizon markov decision problem ( mdp ) .next , we show that the solution to the infinite horizon mdp uses fixed power for each queue state ( for ) .then , we show that are the same for all .finally , we show that the optimal powers and are given as above . 
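Before turning to the proof, the structure of the per-frame algorithm can be summarized in code. The exact objectives in (eq:p_0_define) and (eq:p_1_define) are garbled in this extraction, so the sketch below takes them as user-supplied single-variable functions and only illustrates the control flow: a threshold admission decision, two independent single-variable maximizations giving the constant powers used in the "PU idle" and "PU busy" parts of the frame, and frame-level queue updates applied afterwards. The grid search, the particular threshold form, and all names are assumptions.

```python
def plan_frame(u_su, x_su, idle_objective, busy_objective, power_grid, V):
    """Structural sketch of one frame of the drift-plus-penalty ratio controller.

    u_su, x_su     : secondary data queue and virtual power queue at the frame start
    idle_objective : callable p -> score of using power p for the SU's own data (PU idle)
    busy_objective : callable p -> score of using power p for cooperation (PU busy)
    power_grid     : candidate power levels, encoding the peak-power constraint
    V              : utility/backlog trade-off parameter

    Returns (admit, p0, p1): whether to admit new arrivals during the frame and
    the two constant power levels to apply.  The threshold test and the score
    functions are placeholders, not the paper's exact expressions.
    """
    admit = u_su <= V                              # assumed threshold form
    p0 = max(power_grid, key=idle_objective)       # constant power while PU is idle
    p1 = max(power_grid, key=busy_objective)       # constant power while PU is busy
    return admit, p0, p1
```

After the frame ends, U_su and X_su are updated with the recursions given earlier, and the next frame's decisions are recomputed from the new queue values.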
the detailed proof is given in the next section .recall that the _ frame - based - drift - plus - penalty - algorithm _ chooses a policy that maximizes the following ratio over every frame | \boldsymbol{q}(t_k)\right\ } } } \label{eq : resource_alloc1}\end{aligned}\ ] ] subject to the constraints described in sec .[ section : femto_basic ] .here we examine how to solve ( [ eq : resource_alloc1 ] ) in detail .first , define the state in any slot as the value of the primary user queue backlog in that slot .now let denote the class of stationary , randomized policies where every policy chooses a power allocation in each state according to a stationary distribution .it can be shown that it is sufficient to only consider policies in to maximize ( [ eq : resource_alloc1 ] ) .now suppose a policy is implemented on a _ recurrent _ system with fixed and and with the same state dynamics as our model .note that for all when the state .then , by basic renewal theory , we have that maximizing the ratio in ( [ eq : resource_alloc1 ] ) is equivalent to the following optimization problem : where is the resulting steady - state probability of being in state in the recurrent system under the stationary , randomized policy and where the expectations above are with respect to .note that well - defined steady - state probabilities exist for all because we have assumed that so that even if no cooperation is used , the primary queue is stable and the system is recurrent .thus , solving ( [ eq : resource_alloc1 ] ) is equivalent to solving the _ unconstrained time average maximization problem _ ( [ eq : femto_time_avg ] ) over the class of stationary , randomized policies .note that ( [ eq : femto_time_avg ] ) is an infinite horizon markov decision problem ( mdp ) over the state space .we study this problem in the following .consider the optimal stationary , randomized policy that maximizes the objective in ( [ eq : femto_time_avg ] ) .let denote the probability distribution over that is used by this policy to choose a power allocation in state .let denote the resulting effective probability of successful primary transmission in state .then we have that where denotes the probability of successful transmission in state when the secondary user spends power in cooperative transmission with the primary user .since the system is stable and has a well - defined steady - state distribution , we can write down the detail equations for the markov chain that describes the state transitions of the system as follows ( see fig .[ fig : bd ] ) : where denotes the steady - state probability of being in state under this policy .summing over all yields : the average power incurred in cooperative transmissions under this policy is given by : now consider an alternate stationary policy that uses the following fixed distribution for choosing control action in all states : let denote the resulting effective probability of a successful primary transmission in any state .note that this is same for all states by the definition ( [ eq : chi ] ) .then , we have that : let denote the steady - state probability of being in state under this alternate policy .note that the system is stable under this alternate policy as well .thus , using the detail equations for the markov chain that describes the state transitions of the system under this policy yields where we used ( [ eq : eq1_new ] ) in the last step .this implies that and therefore .also , the average power incurred in cooperative transmissions under this alternate policy is given by : 
where we used ( [ eq : eq2_new ] ) in the second last step and in the last step .thus , if we choose in state and choose as defined in ( [ eq : chi ] ) in all other states , it can be seen that _ the alternate policy achieves the same time average value of the objective ( [ eq : femto_time_avg ] ) as the optimal policy_. this implies that to maximize ( [ eq : femto_time_avg ] ) , it is sufficient to optimize over the class of stationary policies that use the _ same _ distribution for choosing for all states .denote this class by .then for all , we have that for all . using this andthe fact that , ( [ eq : femto_time_avg ] ) can be simplified as follows : \pi_0(r ) \nonumber\\ & - x_{su}(t_k ) { \mathbb{e}\left\{p_1(r)\right\ } } ( 1 - \pi_0(r ) ) \nonumber \\ \textrm{subject to : } \ ; & r \in \mathcal{r } ' \label{eq : time_avg2}\end{aligned}\ ] ] where is the resulting steady - state probability of being in state and where is the average power incurred in cooperative transmission in state ( same for all states ) .next , note that the control decisions taken by the secondary user in state do not affect the length of the frame and therefore .further , the expectations can be removed .therefore the first term in the problem above can be maximized separately as follows : this is the same as ( [ eq : p_0_define ] ) .let denote the optimal solution to ( [ eq : time_avg3 ] ) and let denote the value of the objective of ( [ eq : time_avg3 ] ) under the optimal solution .note that we must have that because the value of the objective when the secondary user chooses ( i.e. , stays idle ) is .then , ( [ eq : time_avg2 ] ) can be written as : the effective probability of a successful primary transmission in any state is given by . using little s theorem, we have . using this and rearranging the objective in ( [ eq : time_avg4 ] ) and ignoring the constant terms, we have the following equivalent problem : it can be shown that it is sufficient to consider only _ deterministic _ power allocations to solve ( [ eq : time_avg5 ] ) ( see , for example , ( * ? ? ? * section 7.3.2 ) ) .this yields the following problem : this is the same as ( [ eq : p_1_define ] ) .note that solving this problem does not require knowledge of or and can be solved easily for general power allocation options .we present an example that admits a particularly simple solution to this problem .suppose so that the secondary user can either cooperate with full power or not cooperate ( with power expenditure ) with the primary user .then , the optimal solution to ( [ eq : time_avg6 ] ) can be calculated by comparing the value of its objective for .this yields the following simple threshold - based rule : we also note that this threshold can be computed without any knowledge of the input rates . to summarize ,the overall solution to ( [ eq : femto_resource_alloc ] ) is given by the pair where denotes the power allocation used by the secondary user for its own transmission when the primary user is idle and denotes the power used by the secondary user for cooperative transmission .note that these values remain fixed for the entire duration of frame .however , these can change from one frame to another depending on the values of the queues .the computation of can be carried out using a two - step process as follows : 1 .first , compute by solving problem ( [ eq : time_avg3 ] ) .let be the value of the objective of ( [ eq : time_avg3 ] ) under the optimal solution .2 . 
then compute by solving problem ( [ eq : time_avg6 ] ) .it is interesting to note that in order to implement this algorithm , the secondary user does not require knowledge of the current queue backlog value of the primary user .rather , it only needs to know the values of its own queues and whether the current slot is in the `` pu idle '' or `` pu busy '' part of the frame .this is quite different from the conventional solution to the mdp ( [ eq : cmdp ] ) which is typically a different randomized policy for each value of the state ( i.e. , the primary queue backlog ) .to analyze the performance of the _ frame - based - drift - plus - penalty - algorithm _ , we compare its lyapunov drift with that of the optimal stationary , randomized policy _ stat _ of lemma [ lem : femto_one ] .first , note that by basic renewal theory , the performance guarantees provided by _ stat _ hold over every frame .specifically , let be the start of the frame .suppose _ stat _ is implemented over this frame .then the following hold : \right\ } } \upsilon^ * \label{eq : iid_1 } \\ & { \mathbb{e}\left\{\sum_{t = t_k}^{\hat{t}_{k+1}-1 } r^{stat}_{su}(t)\right\ } } \leq { \mathbb{e}\left\{\sum_{t = t_k}^{\hat{t}_{k+1}-1 } \mu^{stat}_{su}(t)\right\ } } \label{eq : iid_2 } \\ & { \mathbb{e}\left\ { \sum_{t = t_k}^{\hat{t}_{k+1}-1 } p^{stat}_{su}(t)\right\ } } \leq { \mathbb{e}\left\{\hat{t}[k]\right\ } } p_{avg } \label{eq : iid_3}\end{aligned}\ ] ] where and ] and .[ thm : femto_performance ] theorem [ thm : femto_performance ] shows that the time - average secondary user throughput can be pushed to within of the optimal value with a trade - off in the worst case queue backlog . by little s theorem, this leads to an utility - delay tradeoff .part ( 1 ) : we argue by induction .first , note that ( [ eq : u_max ] ) holds for .next , suppose for some .we will show that .we have two cases .first , suppose .then , by ( [ eq : u_su ] ) , the maximum that can increase is so that . next , suppose .then , the admission control decision ( [ eq : femto_adm_ctrl ] ) chooses .thus , by ( [ eq : u_su ] ) , we have that for this case as well .combining these two cases proves the bound ( [ eq : u_max ] ) . parts ( 2 ) and ( 3 ) : see appendix b.we consider two extensions to the basic model of sec . [ section : femto_basic ] .consider the scenario with one primary user as before , but with secondary users .the primary user channel occupancy process evolves as before where the secondary users can transmit their own data only when the primary user is idle .however , they may cooperatively transmit with the primary user to increase its transmission success probability . in general, multiple secondary users may cooperatively transmit with the primary in one timeslot .however , for simplicity , here we assume that at most one secondary user can take part in a cooperative transmission per slot .further , we also assume that at most one secondary user can transmit its data when the primary user is idle .our formulation can be easily extended to this scenario .let denote the set of power allocation options for secondary user .suppose each secondary user is subject to average and peak power constraints and respectively .also , let denote the success probability of the primary transmission when secondary user spends power in cooperative transmission .now consider the objective of maximizing the sum total throughput of the secondary users subject to each user s average and peak power constraints and the scheduling constraints of the model . 
in order to apply the `` drift - plus - penalty '' ratio method, we use the following queues : + \sum_{t = t_k}^{t_{k+1}-1 } r_{i}(t ) \label{eq : u_su_i } \\ & x_{i}(t_{k+1 } ) = \max[x_{i}(t_k ) - t[k ] p_{avg , i } + \sum_{t = t_k}^{t_{k+1}-1 } p_{i}(t ) , 0 ] \label{eq : x_su_i}\end{aligned}\ ] ] where is the queue backlog of secondary user at the beginning of the frame , is the service rate of secondary user in slot , and denote the number of new packets admitted and the power expenditure incurred by the secondary user in slot .finally , denotes the start of the frame and = t_{k+1 } - t_k ] and following the steps in sec .[ section : solution ] yields the following _ multi - user frame - based - drift - plus - penalty - algorithm_. in each frame , do the following : 1 ._ admission control _ : for all , for each secondary user , choose as follows : where is the number of new arrivals to secondary user in slot .resource allocation _ : choose a policy that maximizes the following ratio : | \boldsymbol{q}(t_k)\right\ } } } \label{eq : resource_alloc_i}\end{aligned}\ ] ] 3 ._ queue update _ : after implementing this policy , update the queues as in ( [ eq : u_su_i ] ) and ( [ eq : x_su_i ] ) .similar to the basic model , this algorithm can be implemented without any knowledge of the arrival rates or .further , using the techniques developed in sec .[ section : solving ] , it can be shown that the solution to ( [ eq : resource_alloc_i ] ) can be computed in two steps as follows .first , we solve the following problem for each : let denote the optimal solution to ( [ eq : time_avg3_i ] ) achieved by user and let denote the optimal objective value .this means user transmits on all idle slots of frame with power .next , to determine the optimal cooperative transmission strategy , we solve the following problem for each : let denote the optimal solution to ( [ eq : time_avg6_i ] ) achieved by user .this means user cooperatively transmits on all busy slots of frame with power .next , suppose there is an additional channel fading process that takes values from a finite set in an i.i.d fashion every slot .we assume that in every slot , = q_s ] , we get : next , using ( [ eq : sample2 ] ) in the right hand side of ( [ eq : ineq2_new ] ) yields : again using the fact that and ) = \frac{t[k](t[k]-1)}{2} ] .this proves ( [ eq : femto_utility_bound ] ) .here , we compute a finite that satisfies ( [ eq : second_moment ] ) . first , note that \right\}} ] and ] . in the following , we drop $ ] from the notation for convenience . using the independence of and , we have : we note that is a geometric r.v . with parameter .thus , and . to calculate , we apply little s theorem to get : this yields . to calculate , we use the observation that changing the service order of packets in the primary queue to preemptive lifo does not change the length of the busy period .however , with lifo scheduling , now equals the duration that the first packet stays in the queue . next ,suppose there are packets that interrupt the service of the first packet .let these be indexed as .we can relate to the service time of the first packet and the durations for which all these other packets stay in the queue as follows : here , denotes the duration for which packet stays in the queue . using the memoryless property of the i.i.d .arrival process of the primary packets as well as the i.i.d. nature of the service times , it follows that all the r.v.s are i.i.d . withthe _ same _ distribution as .further , they are independent of . 
squaring ( [ eq : b_k ] ) and taking expectations ,we get : note that is a geometric r.v . with parameter .thus and .also , . using these in ( [ eq : b_squared ] ) , we have : to calculate the last term , we have : note that given , is a binomial r.v . with parametersthus , we have : \\ & = \sum_{x \geq 1 } \big[(x \lambda_{pu})^2 + x \lambda_{pu}(1 - \lambda_{pu})\big ] ( 1-\phi_{nc})^{x-1}\phi_{nc } \\ & = \lambda_{pu}^2 \sum_{x \geq 1 } x^2 \phi_{nc } ( 1 -\phi_{nc})^{x-1 } \\ & + \lambda_{pu } ( 1 - \lambda_{pu } ) \sum_{x \geq 1 } x \phi_{nc } ( 1 -\phi_{nc})^{x-1 } \\ & = \lambda_{pu}^2 \frac{(2 - \phi_{nc})}{\phi_{nc}^2 } + \lambda_{pu } ( 1 - \lambda_{pu } ) \frac{1}{\phi_{nc}}\end{aligned}\ ] ] using this , we have : using this , we have : simplifying this yields : o. simeone , i. stanojev , s. savazzi , y. bar - ness , u. spagnolini , and r. pickholtz . spectrum leasing to cooperating secondary ad hoc networks . _ieee jsac special issue on cognitive radio : theory and applications _ , 26(1):203 - 213 , jan .2008 .i. krikidis , j. n. laneman , j. thompson , and s. mclaughlin . protocol design and throughput analysis for multi - user cognitive cooperative systems ._ ieee trans .wireless commun ._ , 8(9):4740 - 4751 , sept . 2009 . | we investigate opportunistic cooperation between unlicensed secondary users and legacy primary users in a cognitive radio network . specifically , we consider a model of a cognitive network where a secondary user can cooperatively transmit with the primary user in order to improve the latter s effective transmission rate . in return , the secondary user gets more opportunities for transmitting its own data when the primary user is idle . this kind of interaction between the primary and secondary users is different from the traditional _ dynamic spectrum access _ model in which the secondary users try to avoid interfering with the primary users while seeking transmission opportunities on vacant primary channels . in our model , the secondary users need to balance the desire to cooperate more ( to create more transmission opportunities ) with the need for maintaining sufficient energy levels for their own transmissions . such a model is applicable in the emerging area of cognitive femtocell networks . we formulate the problem of maximizing the secondary user throughput subject to a time average power constraint under these settings . this is a constrained markov decision problem and conventional solution techniques based on dynamic programming require either extensive knowledge of the system dynamics or learning based approaches that suffer from large convergence times . however , using the technique of lyapunov optimization , we design a novel _ greedy _ and _ online _ control algorithm that overcomes these challenges and is provably optimal . resource allocation , opportunistic cooperation , cognitive radio , femtocell networks , optimal control |
through its influence on the diffusion of spin - bearing particles , the physical characteristics of the medium leaves its signature on the observed nmr signal .this makes nmr a powerful probe into the microstructure of biological specimens as well as other porous media .nmr s sensitivity to diffusional processes can be controlled by introducing gradient waveforms into standard nmr experiments .the effect of free diffusion on experiments with general gradient waveforms has been fully characterized .however , the complexity of the environment typically leads to non - gaussian motion , which yields interesting features in the nmr signal decay that depend on the local structure .for example , a time - dependence in the diffusion coefficients tend to emerge in complex environments , while the large wavenumber regime of the signal decay could exhibit unique features as well .time dependence in diffusion coefficients was investigated extensively with emphasis on and ageing .it has also been studied as a possible indicator of mesoscopic disorder in materials and tissues , making use of diffusion - sensitive nmr images ( see ref . for a comprehensive review on diffusion in tissues ) .fractional brownian motion is yet another viable model in which to view the signatures of non - gaussian motion in nmr signals , which is amenable to path integral methods similar in that regard to the approach discussed in the present article . in this paper, we consider the case of diffusion taking place under the influence of a hookean force field .this problem is well - studied in the field of stochastic processes .to our knowledge , the first occurrence of it in the nmr literature is in a couple of papers published in 1960s , wherein the authors consider the case of `` diffusion near an attractive center '' and derive the signal expression for their then recent pulsed field gradient framework .since then callaghan and pinder used the formalism on a semi - dilute solution of an entangled polystyrene to model the displacements of the resulting network , which is stable during the timescale of the nmr measurement .le doussal and sen employed the problem as an `` artificial pore '' model to understand the effects of restricted diffusion in the presence of nonlinear magnetic field gradients . mitra andhalperin also touched upon the problem , referring to it as the `` parabolic pore '' in the context of the center - of - mass propagators . in this article, we have two main goals : ( i ) to provide an analytical framework with which one can derive explicit relationships for the signal decay for a general time - dependent gradient waveform , and ( ii ) develop semi - analytical and numerical tools , which could be generalized with relative ease to complicated potentials . as demonstrated below , our formulation naturally leads to a new characterization of diffusion anisotropy ; similar in spirit to the diffusion tensor model .however , while the latter tries to fit imaging data to a free diffusion model despite the underlying non - free diffusion , the present framework allows taking confinement into account , which is modeled as a harmonic force .the advantage of taking confinement into account is found in the time dependence of the predicted signal profile , which exhibits features more similar to that of actual restricted diffusion . 
employing a harmonic confinement instead of a direct implementation of restricted diffusion , on the other hand, offers a degree of tractability that encodes full anisotropy easily .for instance , compared to a restricted diffusion model involving a capped cylinder , which has _ two _ length parameters due to its partial isotropy , the confinement tensor generally has three distinct size parameters ( related to its eigenvalues ) that can capture full three - dimensional anisotropy , much like the diffusion tensor model .the article is organized as follows : in the next section , sec .[ sec : pathint ] , we express the nmr signal as a path integral and evaluate it assuming a hookean potential as stated above , providing a general expression that can be used with arbitrary gradient sequences . as such , we end the section by deriving analytical expressions for the expected nmr signal for the conventional pulsed field gradient ( stejskal - tanner ) and oscillating gradient sequences . for the sake of keeping the main text to the point , technicalities involved in evaluating the path integral are discussed in great detail in appendix [ app : pathint ] . in the subsequent section , sec .[ sec : mcf ] , we introduce the formulation of the semi - analytical multiple correlation function framework ( mcf ) , assuming a general confining potential .its application to the special case of a hookean potential is presented in appendix [ app : mcf ] , along with relevant technical details . in sec .[ sec : sim ] , the random walk simulations are described before proceeding with the validation of the ( semi-)analytical approaches and the deliberation of the hookean model as an alternative tool for the characterization of diffusion anisotropy in the results section , followed by concluding remarks .here , we derive an analytical expression via a path integral for the average transverse magnetization in a specimen , stemming from spin - bearing random walkers subject to a hookean potential . in order to isolate details of the derivation from the results ,we have placed most of the former in appendix [ app : pathint ] while restricting to the essential points in this section . in a medium subjected to a ( spatially ) non - uniform magnetic field , a diffusing spin - carrier experiences a rate of ( larmor ) precession that varies with respect to its spatial position . compared to a steady precession in the case of a uniform magnetic field, therefore , each particle picks up a phase in its rotation , depending on its random path .hence , from an ensemble of such random walkers in a ( time - dependent ) spatially linear magnetic field gradient of duration , an average signal of is obtained , which is the average transverse magnetization up to a unit . here , since the average is to be taken over all random paths , the expectation value has the form of a path integral , \ , { { \mathrm e}}^{- { { \mathrm i}}\gamma \int_0^{{{t_\mathrm{f } } } } { { \mathrm d}}t \ , \bm{g } ( t ) \cdot { \bm{r}}(t ) } \ ; , \label{eq : path.int}\end{aligned}\ ] ] where denotes an integral over the space of paths , and ] is most easily imagined as an infinite product of stepwise probabilities of transition from each intermediate path point to the next . in this article, we assume that the stochastic process underlying the random paths is diffusion with diffusivity under the effect of a _ dimensionless _hookean potential , where we refer to , having dimension of inverse length squared , as the _ confinement tensor_. 
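The displayed expressions above are damaged in this extraction. A reconstruction consistent with the legible phase factor and with the stated properties of the confinement tensor (real, with dimension of inverse length squared) is, as a sketch,

$$ E \;=\; \int \mathcal{D}[\bm{r}(t)]\; P[\bm{r}(t)]\; \mathrm{e}^{-\mathrm{i}\gamma\int_0^{t_\mathrm{f}}\mathrm{d}t\,\bm{g}(t)\cdot\bm{r}(t)}, \qquad f(\bm{r}) \;=\; \tfrac12\,\bm{r}^{\mathsf{T}}\mathbf{A}\,\bm{r}, $$

where $P[\bm{r}(t)]$ is the path weight built from the stepwise transition probabilities and $f$ is the dimensionless Hookean potential with confinement tensor $\mathbf{A}$.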
the true tensorial force constant can be obtained by multiplying by the boltzmann constant and the absolute temperature .we argue that the potential defined above suffices to capture relevant features of diffusion in confined spaces .the advantage of the assumption is that the stepwise probabilities mentioned above have a simple form ( see appendix [ app : pathint ] ) and the resulting path weight is analytically tractable .we find that the nmr signal , or average transverse magnetization , that ensues the application of an arbitrary gradient waveform is expressed as where and is a matrix of rate constants ( inverse time ) , simply proportional to the confinement tensor .the inverse of this matrix encodes the time scales involved in equilibration , namely the approach of a distribution of diffusing particles toward the boltzmann distribution .essentially , it is the finite width of this final gaussian profile that is regarded as a measure of confinement in the present article , by matching it to the width of an actual confined geometry ( appendix [ sec : analytical1d ] ) . the results derived above can be employed to obtain exact expressions for the nmr signal intensity for any gradient waveform . in this section, we shall consider two widely employed experiments illustrated in figure [ fig : waveforms ] .the first sequence is stejskal and tanner s pulsed field gradient technique while the second one features an oscillatory ( sinusoidal ) profile . here, we use eqs . and to predict the nmr signal that should be recovered from spin - bearing random walkers subject to a three dimensional harmonic potential in a traditional pulsed field gradient experiment introduced by stejskal and tanner . as illustrated on the top panel in figure [ fig : waveforms ] ,the effective waveform comprises two gradient pulses in opposite directions with a delay between their leading edges .each pulse has magnitude and duration . due to the tractability of our `` harmonic approximation , '' an expression for the signalcan be obtained easily , valid for all possible values of the experimental parameters .the first thing to do is to evaluate the integral for , which is straightforward ( as noted before , one may wish to consider temporarily the eigenbasis ) .one finds ( taking the transpose for convenience ) , { { \mathrm e}}^{\bm{\omega } t } & , & 0 < t < \delta \\\left [ { { \mathrm e}}^{-\bm{\omega } ( \delta+\delta ) } - { { \mathrm e}}^{-\bm{\omega } \delta } \right ] { { \mathrm e}}^{\bm{\omega } t } & , & \delta < t < \delta \\ { { \mathrm e}}^{-\bm{\omega}(\delta+\delta ) } { { \mathrm e}}^{\bm{\omega } t } - \bm{1 } & , & \delta < t <\delta + \delta \\ 0 & , & \delta + \delta < t \end{array } \right . \;,\label{eq : q.1pfg } \end{aligned}\ ] ] where represents the identity matrix . what remains is the tedious task of squaring we of course mean .it is useful to note that all the matrices that will end up between and according to eqs . and commute ( since they all are eventually powers of ) and are symmetric , simplifying the algebra greatly . 
[ft : algebra ] ] and integrating this expression to evaluate the signal .after a bit of algebra and simplifications , we obtain the quadratic expression ( in ) where the real symmetric matrix \ .\label{eq : amatrix } \end{aligned}\ ] ] the last two expressions provide a model alternative to the diffusion tensor model commonly used in biomedical applications of magnetic resonance imaging .the main difference is the dependence of the predicted signal on the timing parameters of the sequence . in diffusion tensor imaging ( dti ), this dependence is the same as that for free ( gaussian ) diffusion . in the hookean potential model ,the dependence is consistent with restricted diffusion . as a check ,one may evaluate eqs . and to order , to obtain which is the anisotropic generalization of stejskal s result for a spherically symmetric harmonic potential , with the additional term . in the ( i.e. , ) limit , the stejskal - tanner expression for free diffusionis recovered as expected .next , we shall consider the sinusoidal waveform with angular frequency as depicted in the bottom of figure [ fig : waveforms ] .owing to the simplicity of diffusion inside a harmonic potential , we were able to derive an analytical expression for the average magnetization for this waveform as well .since the gradient is applied in a fixed direction , the formulation in appendix [ sec : analytical1d ] is sufficient to obtain the exact result .the gradient profile is of the form .we consider an experiment with full periods , i.e. , vanishes outside the interval .one finds , from eq . , that \ ; , \end{aligned}\ ] ] where .the signal then follows from eq . after some rearrangement as with this section , we shall visit the same problem using the multiple correlation function ( mcf ) formalism , which has been used in the past to characterize the effect of restricted diffusion on the nmr signal .technical details may be found in appendix [ app : mcf ] .we refer the reader to ref . for a recent review of the technique consistent with the notation here , and its relations with the path integral framework discussed in the previous section .the multiple correlation function formalism aims to compute the nmr signal from the time evolution of the ( appropriately - normalized ) magnetization density . in the presence of a potential , diffusion is governed by the smoluchowski equation , while the transverse magnetization carried by the random walkers need also be taken into account .this suggests that , under a magnetic field gradient waveform , the evolution of is governed by an equation akin to the bloch - torrey equation , which we shall refer to as the bloch - torrey - smoluchowski ( bts ) equation , where this time the diffusion term deals also with the potential .clearly , the above expression is reduced to the bloch - torrey equation when the potential is zero .the solution of the bts equation is best considered in the abstract function space , in terms of a propagator responsible for the evolution of the magnetization in time . due to the explicit time dependence of the operator on the right hand side of eq . , the associated propagator is formally a time - ordered product of operators of the form \} ] . also , time integrals become . following the definition of the variables and , the discretization amounts simply to where is the initial probability distribution of spins , and is the probability of a particle ending up at the _ later _ position given that it was at the _ earlier _ position a time ago .. 
] as a result , the time - sliced path integral becomes where the subscript reminds us that this is not yet the exact path integral , but a discretized approximation .also , note how the integrals over later positions are nested inside those of the earlier ones .the above discussion holds for any ( time - independent ) potential and number of dimensions .however , evaluating the ( discretized ) path integral requires an explicit form for the _ propagator _ .we focus first on the one dimensional case with a parabolic potential .the propagator for diffusion under the effect of a dimensionless potential has the form \label{eq : propagator } \ .\end{aligned}\ ] ] with being the bare diffusion constant , the inverse time is defined as note that the propagator is a normalized gaussian in its later argument , centered at a point that depends on its earlier argument . despite its gaussian appearance , this propagator has an important characteristic that is consistent with restricted diffusion . to illustrate this point, we shall consider the long time limit ( ) of the propagator .in this regime , the dependence on the initial position disappears as expected for restricted diffusion .it is instructive to also consider the net displacements , i.e. , . let denote the equilibrium spin density .the second moment of net displacements , defined through the expression diverges for free diffusion as the diffusion time is prolonged .however , the same quantity for the propagator in eq . asymptotically approaches the constant value of . on the other hand ,if one considers a restricted diffusion process taking place between two infinite plates separated by a distance , this quantity similarly approaches a constant value , which is .setting the latter two quantities equal to each other , one can obtain an effective pore size given by we refer to this relationship in the results section .the form of the propagator helps a great deal in evaluating the signal analytically .indeed , the discretized path integral involves only familiar integrals , but each integration is affected by the result of that nested immediately inside it .consider , for instance , the inner - most integral ( over ) : the first factor has nothing to do with the remaining integrals in eq . .the second factor , on the other hand , combines with the exponential in the next integral ( over ) , shifting its `` wave number '' by an amount .one can see that , with later ( large index ) wave numbers leaking into the phase factors of earlier ( smaller index ) integrations as such , and getting multiplied by at each step , it should be useful to define ( the index is not to be confused with the imaginary unit . ) following this recursion just described , one finds we assume that the distribution of spin - carriers has had enough time to reach equilibrium before the gradient sequence is applied .hence , the relation can be employed ( eqs . in the limit ) , with being some arbitrary pre - initial position. we can thus perform the last remaining integration similarly to the previous ones to find in the continuum limit where as , we have and .recalling moreover that really stands for , the limit of the average magnetization is easily obtained as the quantity follows from eq . similarly . recalling that , and using eq ., the expression for can be recast as in the limit , one finds eqs . and yield the final result for the nmr signal for the case of molecules diffusing under the influence of a hookean restoring force in one dimension . 
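As an independent check on the one-dimensional result, the signal can be estimated by a random-walk simulation of the kind described in Sec. [sec:sim]: generate Ornstein-Uhlenbeck-type trajectories (diffusivity D, relaxation rate omega), accumulate the precession phase under a pulsed-gradient waveform, and average e^{-i phi}. The sketch below is illustrative only; the Euler-Maruyama discretization and all parameter values are assumptions, not the simulation code used for the paper.

```python
import numpy as np

def pgse_signal_hookean(D=1.0, omega=2.0, g=0.5, gamma=1.0,
                        delta=1.0, Delta=3.0, dt=1e-3,
                        n_walkers=20000, seed=0):
    """Monte Carlo estimate of E = <exp(-i*phi)> for 1D diffusion in a Hookean
    potential (an Ornstein-Uhlenbeck process) under a Stejskal-Tanner waveform."""
    rng = np.random.default_rng(seed)
    n_steps = int(round((Delta + delta) / dt))
    t = np.arange(n_steps) * dt
    # Effective gradient: +g on [0, delta), -g on [Delta, Delta + delta).
    grad = g * ((t < delta).astype(float) - (t >= Delta).astype(float))
    # Spins start from the equilibrium distribution (variance D/omega).
    x = rng.normal(0.0, np.sqrt(D / omega), size=n_walkers)
    phase = np.zeros(n_walkers)
    for i in range(n_steps):
        phase += gamma * grad[i] * x * dt
        # Euler-Maruyama step of dx = -omega * x * dt + sqrt(2D) dW.
        x += -omega * x * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n_walkers)
    return np.mean(np.exp(-1j * phase))

# The attenuation saturates as Delta grows, unlike the free-diffusion case.
print(abs(pgse_signal_hookean()))
```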
here, we show that our expression for the signal agrees with previous results for free diffusion nmr signal \label{eq : e.free}\end{aligned}\ ] ] in the limit of vanishing potential ( ) . in this limit , , and thus the exponential in eq . drops . along with the gradient echo condition , , we have and .therefore , the first term in our expression for the signal matches the free diffusion nmr signal . the second term in eq .does not seem to disappear , due to the prefactor . however , if one recalls that this term is simply the remaining integral in eq . , then it is recognized that the integral becomes when , which is unity by definition . in three dimensions ,the most general harmonic potential ( assuming the attraction center is at the origin ) is of the form where the confinement tensor is real and can be assumed to be symmetric without loss of generality .there exists , then , a rotation matrix such that , and where is diagonal , and . in this basis denoted by tildes, the potential has the coordinates , , and decoupled , and so does the smoluchowski equation that describes the propagator .therefore , the propagator simply factorizes into three instances of the propagator for each direction : \ , \label{eq : propagator.3}\end{aligned}\ ] ] where eqs .become by virtue of this factorization , one sees that the evaluation of the path integral proceeds exactly the same way as before , only this time it is three - fold : with recall that the two equations above are valid in the rotated basis where the confinement tensor is diagonal ( hence the tildes ) . in order to revert to the lab frame ( the un - rotated basis ) , note that and are just the eigenvalues of the matrices and which are diagonalized by the same rotation as , and hence eq . .exploiting as well the rotation properties of , one finds where these are eqs . and of the main text .since one in general does not know what the principal directions are without previous knowledge about the specimen , these general expressions are more relevant than the diagonalized versions and .however , note that in evaluating the integral in eq . with a given gradient waveform , one may have to make sense of the exponentiated matrix in the integrand by going back and forth between the lab frame and the eigen - basis . | we study the influence of diffusion on nmr experiments when the molecules undergo random motion under the influence of a force field , and place special emphasis on parabolic ( hookean ) potentials . to this end , the problem is studied using path integral methods . explicit relationships are derived for commonly employed gradient waveforms involving pulsed and oscillating gradients . the bloch - torrey equation , describing the temporal evolution of magnetization , is modified by incorporating potentials . a general solution to this equation is obtained for the case of parabolic potential by adopting the multiple correlation function ( mcf ) formalism , which has been used in the past to quantify the effects of restricted diffusion . both analytical and mcf results were found to be in agreement with random walk simulations . a multi - dimensional formulation of the problem is introduced that leads to a new characterization of diffusion anisotropy . unlike for the case of traditional methods that employ a diffusion tensor , anisotropy originates from the tensorial force constant , and bulk diffusivity is retained in the formulation . 
our findings suggest that some features of the nmr signal that have traditionally been attributed to restricted diffusion are accommodated by the hookean model . under certain conditions , the formalism can be envisioned to provide a viable approximation to the mathematically more challenging restricted diffusion problems . |
squeezing and displacement are the basic operations of continuous variables quantum information , and are easily performed , the former by parametric amplifiers , the latter by lasers and linear optics . squeezing in the two - mode setupis , for example , the tool to generate entanglement in the braunstein - kimble teleportation scheme .the combined use of real squeezing and displacement allows one to encode efficiently classical information in quantum channels using homodyne detection at the receiver . in a quantum communication scenario where a coherent signal is sent through a non - linear medium andundergoes an amplification process , the joint estimation of displacement and squeezing provides a twofold information : the amplitude modulation of the state and a property of the communication channel itself .this may be useful in a communication scheme designed to be robust to photon loss .in this paper we consider the problem of jointly estimating real amplitude and real squeezing of the radiation field . in a tomographic setup ,where a large number of equally prepared copies is available , the maximum likelihood method turns out to be very efficient in estimating the parameters that characterize the state of the quantum system . in this approach ,first one fixes a set of single - copy measurements ( typically homodyne measurements at some random phase ) and then looks for the estimate that maximizes the probability ( density ) of producing the observed data .however , the tomographic approach is not suitable to the case when only a small number of copies is available , and one needs to use the limited resources at disposal more efficiently .it becomes then important to optimize not only the posterior processing of the experimental data , but also the choice of the measurement that is used to extract these data from the system .the most natural framework to deal with this situation is quantum estimation theory , where the concept of positive operator valued measure ( povm ) provides a tool to describe at the same time both measurement and data processing .the maximum likelihood approach in quantum estimation theory then corresponds to seek the measurement that maximizes the probability ( density ) that the estimated value of the parameters coincides with the true value .joint estimation of squeezing and displacement is equivalent to infer an unknown transformation of a group in the present case the affine group .this is an example of a frequent situation in quantum estimation , especially in communication problems , where a set of signal states is generated from a fixed input state by the action of a group .consider , for example , the case of phase estimation for high - sensitivity interferometry and optimal clocks , the estimation of rotation for the optimal alignment of reference frames , and the estimation of displacement of the radiation field for the detection of a coherent signal in gaussian noise . 
in these cases ,the symmetry of the problem provides a physical insight that allows one to simplify the search for efficient estimation strategies , and the concept of _ covariant _ measurement becomes crucial for optimization .the use of maximum likelihood approach in the covariant setting has been shown to be particularly successful in obtaining explicitly the optimal measurements and in understanding the fundamental mechanism that leads to the ultimate sensitivity of quantum measurements .group theoretical tools , such as equivalent representations and multiplicity spaces , far from being abstract technicalities , are the main ingredients to achieve the ultimate quantum limits for sensitivity . moreover , in a large number of situations , the maximization of the likelihood provides measurements that are optimal also according to a wide class of different figures of merit . from a group theoretical point of view, the action of real squeezing and displacement on the wavefunctions provides a unitary representation of the affine group `` '' of dilations and translations on the real line .the structure of the affine group underlies the theory of wavelets , and has been recently used in the characterization of coherent states , and in the study of oscillators in a morse potential .the affine group is particularly interesting , since it is the paradigmatic example of a nonunimodular group , namely a group where the left - invariant haar measure is different from the right - invariant one .this leads to orthogonality relations that differently from the usual schur lemmas involve a positive unbounded operator , firstly introduced by duflo , moore , and carey ( dmc ) in ref . .as we will see in this paper , the nonunimodularity of the group has some amazing consequences in the estimation problem .for example , the so - called square - root measurements , that are commonly considered in quantum communication and cryptography , do not coincide with the maximum - likelihood measurements , the latter providing a higher probability density of correct estimation .another bizarre feature is that for maximum likelihood measurements , the most likely value in the probability distribution can be different from the true one .while for unimodular groups this feature never happens , for nonunimodular groups it is unavoidable , and a suitable choice of the input states is needed to reduce the discrepancy between the true value and the most likely one .the paper is organized as follows . in order to set the optimal joint estimation of real squeezing and displacement in the general estimation method ,first we derive in sec .[ mlest ] the optimal measurement for the problem of estimating the unknown unitary action of a nonunimodular group . 
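The nonunimodularity of the affine group invoked above can be made concrete with a short numerical check. The sketch below assumes the common parametrization of "ax+b" with a > 0 and composition (a1, b1)(a2, b2) = (a1 a2, a1 b2 + b1), and uses finite-difference Jacobians to verify that the candidate density da db / a^2 is preserved under left translations but picks up a modular factor 1/a0 under right translations; the specific group elements are arbitrary test values.

```python
import numpy as np

def compose(g1, g2):
    # affine group "ax+b": (a1, b1)(a2, b2) acts as x -> a1*(a2*x + b2) + b1
    a1, b1 = g1
    a2, b2 = g2
    return np.array([a1 * a2, a1 * b2 + b1])

def jac_det(f, g, eps=1e-6):
    # finite-difference Jacobian determinant of a map R^2 -> R^2 at g
    J = np.empty((2, 2))
    for j in range(2):
        dg = np.zeros(2)
        dg[j] = eps
        J[:, j] = (f(g + dg) - f(g - dg)) / (2 * eps)
    return np.linalg.det(J)

def left_density(g):
    return 1.0 / g[0] ** 2          # candidate left-invariant density  da db / a^2

g0 = np.array([2.0, 0.7])           # fixed group element used for the translations
for g in [np.array([0.5, -1.0]), np.array([3.0, 2.0])]:
    left = lambda h: compose(g0, h)     # left translation  h -> g0 h
    right = lambda h: compose(h, g0)    # right translation h -> h g0
    left_ratio = left_density(left(g)) * jac_det(left, g) / left_density(g)
    right_ratio = left_density(right(g)) * jac_det(right, g) / left_density(g)
    print(left_ratio, right_ratio)      # ~1.0 and ~1/a0 = 0.5: left- but not right-invariant
```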
then , as a special example , the general results will be used to optimize the joint estimation of squeezing and displacement in sec .[ jointest ] .the efficiency of coherent states and displaced squeezed states is analyzed in detail , and the asymptotic relation between the uncertainties in the joint estimation and the uncertainties in the optimal separate measuremnts of squeezing and displacement is derived .the conclusions are summarized in sec .the explicit derivation of group average over the affine group is given in the appendix .suppose that a fixed input state , corresponding to a density operator in the hilbert space , is transformed by the unitary representation of the group , so that it generates the family of signal states the typical quantum estimation problem is then to find the measurement that gives the best estimate for the unknown transformation according to some optimality criterion .usually the criterion is given by a cost function , which quantifies the cost of estimating when the true value is , and enjoys the invariance property .for example , the maximum likelihood criterion corresponds to the delta cost - function ( loosely speaking , there is an infinite gain if the estimate coincides with the true value , and no gain otherwise ) .once a cost function is fixed , one can choose two possible approaches to optimization , namely the bayes approach and the minimax .in the bayes approach , one assumes some prior distribution of the unknown parameters , and then minimizes the average over the true values of the expected cost , where is the conditional probability density of estimating when the true value is . in the minimax approach ,one looks instead for the measurement that minimizes the supremum of the expected cost over all possible true values of the unknown parameters .an important class of estimation strategies is given by the _measurements , that are described by povms of the form where is an operator satisfying the normalization condition denoting the left - invariant haar measure on the group , namely . due to the symmetry of the set of states ( [ orbit ] ) , covariant measurements play a fundamental role in the search of the optimal estimation . for compact groups ,the following proposition holds : [ covminimax ] for compact groups , the search for the optimal measurement in the minimax approach can be restricted without loss of generality to the class of covariant measurements .moreover , for compact groups the optimality of covariant measurements holds also in the bayes approach , if the prior distribution is chosen to be the normalized haar measure on the group , i.e. the measure such that and . for non - compact groups , such as the affine group involved in the joint estimation of squeezing and displacement , the situation is more involvedof course , in the bayes approach it is no longer possible to choose the uniform haar measure as prior distribution , since it is not normalizable .however , in the minimax the optimality of covariant measurements still holds , even though in this case the proof becomes rather technical . 
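As a toy, finite-dimensional illustration of the covariant structure just described (not the affine-group construction developed later in the article), the sketch below builds a covariant POVM M_k = U_k |eta><eta| U_k^dag for the cyclic group Z_N represented by diagonal phase matrices, checks the normalization sum_k M_k = 1, and evaluates how likely the applied group element is to be identified correctly for a particular input state; the group size and the input state are arbitrary choices.

```python
import numpy as np

N = 8                                                   # toy group: Z_N
n = np.arange(N)
U = [np.diag(np.exp(2j * np.pi * k * n / N)) for k in range(N)]   # unitary representation

eta = np.ones(N) / np.sqrt(N)                           # seed vector of the covariant POVM
M = [Uk @ np.outer(eta, eta.conj()) @ Uk.conj().T for Uk in U]    # M_k = U_k |eta><eta| U_k^dag
print(np.allclose(sum(M), np.eye(N)))                   # normalization: sum_k M_k = identity

psi = np.exp(-0.3 * (n - N / 2) ** 2)                   # some fixed input state (illustrative)
psi = psi / np.linalg.norm(psi)
k_true = 3
shifted = U[k_true] @ psi                               # state after the unknown group action
probs = np.real(np.array([shifted.conj() @ Mk @ shifted for Mk in M]))
print(probs.argmax() == k_true, probs[k_true])          # the true element is the most likely estimate
```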
in this paper we will adopt the minimax approach to deal with noncompact groups , and this will allow us to restrict the optimization to covariant povms .as we will see in section iii , the action of real squeezing and displacement on one mode of the radiation field yields a representation of the affine group `` '' of dilations and translations on the real line .this group is clearly non - compact , and , moreover , it is nonunimodular , namely the left - invariant haar measure ( with ) does not coincide with the right - invariant one ( with ) .therefore , to face the estimation problem with the affine group , we need some results about representation theory and orthogonality relations for nonunimodular groups ( for an introduction to these topics , see for example ) .let be a unitary representation of a locally compact group .in the following , we will make two assumptions on the representation that are tailored on the concrete problem of estimating real squeezing and displacement . _first assumption : discrete clebsch - gordan series ._ we require the representation to be a direct sum of irreducible representations ( _ irreps _ ) , namely that its clebsch - gordan series is discrete . in this case , there is a decomposition of the hilbert space as such that where is an irreducible representation acting on the hilbert space and is the identity in the space .the hilbert spaces and are called _ representation _ and _ multiplicity _ spaces , respectively , and the index labels the inequivalent irreps that appear in the clebsch - gordan series of the representation . _second assumption : square - summable irreps ._ we require each irreducible representation in eq .( [ repdecomp ] ) to be square - summable .this means that there is at least one non - zero vector such that vectors such that the above integral converges are called _ admissible_. it is possible to show that , if a representation is square - summable , then the set of admissible vectors is dense in the hilbert space .let us consider now the group average of an operator , defined as in general , the average may not converge for any operator ( for example , it diverges for ) . in analogy with admissible vectors, we say that is an _ admissible operator _ if the group average in eq .( [ defave ] ) converges in the weak operator sense .in such a case , one can prove that is given by ~,\ ] ] where is a positive self - adjoint operator acting on the representation space .the operator has been firstly introduced by duflo , moore , and carey ( dmc ) , and is the characteristic feature of nonunimodular groups .in fact , if the group is unimodular i.e .if the left- and right - invariant measures coincide than the dmc operator is simply a multiple of the identity , and the formula ( [ aveop ] ) for the group average is equivalent to the ordinary schur lemmas .contrarily , if the group is nonunimodular , the dmc operator is a positive unbounded operator , and its presence modifies the orthogonality relations dramatically , with remarkable consequences in the estimation of an unknown group transformation .the admissibility of vectors and operators has a simple characterization in terms of the dmc operator .as regards vectors , the set of admissible vectors for the irrep can be characterized as the domain of . for unimodular groups , since is proportional to the identity , the set of admissible vectors is the whole representation space , while for nonunimodular groups the admissible vectors form a dense subset of . 
as regards operators , an operator is admissible if and only if the partial traces ] . for a pure input state , the maximization of the likelihood over all possible operators satisfying the constraints ( [ trxi ] ) follows in a simple way by a repeated use of schwartz inequality .in fact , the input state can be written in the decomposition ( [ spacedecomp ] ) as is a bipartite state . from schwartz inequality, we have at this point , we assume that each bipartite state is in the domain of the operator .this assumption is not restrictive , since the domain of a self - adjoint operator is dense in the hilbert space . in this way , it is possible to write and to exploit the schmidt decomposition of the ( non - normalized ) vector .in other words , we can write where is the schmidt rank , are schmidt coefficients such that , and latexmath:[ ] for the optimal povm achieves its maximum at the value .again , this is true for unimodular groups , but fails to hold for nonunimodular groups .let the group be unimodular . if the covariant povm maximizes the likelihood for a given input state , then the probability distribution of the estimate on the state achieves its maximum for .suppose that the most likely value does not coincide with the true one .then we can rigidly shift the whole probability distribution with a post - processing operation that brings the most likely value to the true one .in fact , if the maximum of occurs at , we can always replace with a new covariant povm , where the normalization of the new povm follows from the fact that for unimodular groups the dmc operators are trivially proportional to the identity , and therefore the operator satisfy the normalization constraints ( [ trxi ] ) as well .moreover , the probability distribution associated with enjoys the property , whence it achieves the maximum in . in this way, the likelihood of would be higher than the likelihood of the povm .but this can not happen since is the optimal maximum - likelihood povm. therefore must be maximum in . for nonunimodular groups the previous argument does not apply , since the povm given by ( [ xiprime ] ) is no longer normalized .in fact , the operator does not satisfy the normalization constraints ( [ trxi ] ) , since the dmc operators do not commute with the unitaries .in other words , we are not allowed to bring the most likely value to coincide with the true one by rigidly shifting the whole probability distribution . as we will see in the explicit example of the estimation of real squeezing and displacement, this situation can indeed happen . in order to reduce the discrepancy between the true value and the most likely one ,a suitable choice of the input states is needed .for example , in the simple case of being a direct sum of inequivalent irreps , if the projection of the input state onto the irreducible subspaces are eigenvectors of the dmc operators , then the most likely value coincides with the true one .in fact , for any input state , using schwarz inequality , we have and if each is eigenvector of , then the last expression is equal to , then the true value is the most likely one .in the following we will apply the general framework of section [ mlest ] to the case of joint estimation of real squeezing and displacement of a single - mode radiation field with bosonic operators and with =1 ] .we thus have the heisenberg - robertson inequality where denotes the expectation value . 
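The Heisenberg-Robertson bound quoted above is easy to check numerically in a truncated Fock space. The sketch below assumes the conventions x = (a + a^dag)/sqrt(2) and p = -i(a - a^dag)/sqrt(2) (so that [x, p] = i, which may differ from the article's elided convention), takes the dilation-type generator D = (xp + px)/2 associated with real squeezing, and verifies Delta_D Delta_p >= |<[D, p]>|/2 on a coherent state; the amplitude alpha and the truncation dimension are illustrative choices.

```python
import numpy as np
from math import factorial

dim = 60                                           # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, dim)), 1)         # annihilation operator
x = (a + a.conj().T) / np.sqrt(2)
p = -1j * (a - a.conj().T) / np.sqrt(2)            # conventions such that [x, p] = i
D = (x @ p + p @ x) / 2                            # dilation generator (real squeezing)

alpha = 1 + 2j                                     # coherent state amplitude (illustrative)
psi = np.array([np.exp(-abs(alpha) ** 2 / 2) * alpha ** k / np.sqrt(factorial(k))
                for k in range(dim)])
psi = psi / np.linalg.norm(psi)                    # guard against truncation error

def mean(op):
    return psi.conj() @ op @ psi

def var(op):
    return np.real(mean(op @ op) - mean(op) ** 2)

comm = D @ p - p @ D                               # equals i*p up to truncation effects
lhs = np.sqrt(var(D) * var(p))
rhs = 0.5 * abs(mean(comm))
print(lhs, rhs, lhs >= rhs)                        # about 1.66 >= 1.41: the bound holds
```

Note that the commutator here is proportional to p rather than to a c-number, which is the feature the article emphasizes for the joint squeezing-displacement measurement.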
from this point of view ,the displaced squeezed states ( [ dispsqueez ] ) are characterized asymptotically as minimum uncertainty states , since they saturate the inequality , and the product of uncertainties in the maximum likelihood estimation is exactly twice the heisenberg limit . to the best of our knowledge ,the present case is the first example of joint measurement of two harmonic - oscillator noncommuting observables whose commutator is not a -number .the results presented in the previous sections can be easily extended to include the estimation of the reflection on the real line that is realized by the parity operator . in this casewe are interested in estimating the three parameters in the transformation where can assume the values or .the representation is now irreducible in , and the associated dmc operator is . for a given input state , the optimal povm can be written as , where according to eq .( [ optetaineq ] ) . clearly , also in this case it is possible to enhance the sensitivity of detection by increasing the average value of the modulus , using coherent states or displaced squeezed states .a fundamental mechanism leading to the optimal estimation of group transformations is the use of equivalent representations of the group , via the technique of entanglement between representation spaces and multiplicity .such a strategy strongly improves the quality of estimation for unimodular groups , and in the case of compact groups is a way to obtain the best estimation of a unitary transformation .equivalent representations of the affine group can be obtained by entangling the radiation with a second reference mode , which is not affected by the unknown squeezing and displacement .this corresponds to considering the two - mode hilbert space and the representation of the affine group , namely the reference mode plays the role of an infinite dimensional multiplicity space . in this case, the optimal estimation can be obtained with the povm specified by the general formula ( [ opteta ] ) .a remarkable feature of the two - mode setup is that it is possible to have an _ orthogonal _povm for the estimation , namely there exists an ordinary observable on the extended hilbert space , associated with the joint measurement of squeezing and displacement .in fact , by defining the bipartite vectors we can construct the orthogonal povm the normalization follows straightforwardly from eq .( [ norm ] ) .moreover , we have the orthogonality relation where we used eq .( [ dsr ] ) of the appendix and the identity .the vectors in eq .( [ pointer ] ) are not normalizable . however , similarly to the case of the heterodyne operator , where physical states can arbitrarily well approximate the unnormalizable eigenstates , here one can consider , e.g. , states of the form is a normalization constant , and is the photon number operator for the auxiliary mode .the states approaches the optimal vectors ( [ pointer ] ) as , with a correspondent increasing to infinity of the average energy of the radiation field .in this paper we presented the joint estimation of real squeezing and displacement of the radiation field from the general point of view of group parameter estimation .the combination of squeezing and displacement provides a representation of the affine group `` '' , which is the paradigmatic example of a nonunimodular group . 
to deal with the concrete example of squeezing and displacement , we derived in the maximum likelihood approach the optimal estimation of a group transformation in the case of nonunimodular groups , providing explicitly the optimal povm for a given input state . in this analysis, some remarkable features of estimation showed up .firstly , while for unimodular groups the maximum likelihood measurements coincide with the usual square - root measurements , for nonunimodular groups the srm are no longer optimal , namely they do not maximize the probability density of detecting the correct value .moreover , for nonunimodular groups one can optimize the estimation strategy in the maximum likelihood approach , but even for the optimal povm the true value is not the one which is most likely to be detected . to reduce the discrepancy between the true value and the most likely one a suitable choice of the input states is required .both these features are in general unavoidable , and their origin tracks back to the presence of a positive unbounded operator the duflo - moore - carey operator in the orthogonality relations for nonunimodular groups . in the problem of joint estimating real squeezing and displacement , all the above effects occur .in particular , for coherent input states and displaced squeezed states we observed how an increase in the expectation value of gives rise to an improvement in the quality of estimation , along with a reduction of the discrepancy between the true value and the most likely one .in the mentioned cases the probability distributions for joint estimation become asymptotically gaussian , and the r.m.s .errors and can be easily calculated .remarkably , the product of uncertainties in the joint estimation is exactly twice the product of uncertainties for the optimal separate measurements of squeezing and displacement , in the same way as in the joint measurement of two conjugated quadratures .finally , the use of entanglement with an additional mode of the radiation field allows one to perform a von neumann measurement for the joint estimation of squeezing and displacement , in terms of an ordinary observable with continuous spectrum .the ideal input states for detecting an unknown affine transformation can then be approximated by normalizable states .aknowledges m. hayashi for pointing out ref. , and erato project for hospitality .m.f.s . acknowledges support from infm through the project no .this work has been supported by ministero italiano delluniversit e della ricerca ( miur ) through firb ( bando 2001 ) .here we derive the group theoretical structure of the representation of the affine group , by explicitly calculating the expression for the group average of an operator over the left - invariant measure .the clebsch - gordan series of contains two irreducible irreps , and .accordingly , the hilbert space can be decomposed as , and the projections onto the irreducible subspaces are and , respectively .the dmc operators are given by and .using twice the resolution of the identity in terms of the eigenstates of the quadrature operator , and the relation we can calculate the group average as follows - \pi \int _{ -\infty}^0 \d\tilde r \,|\tilde r \rangle \langle \tilde r| \,\tr[a \theta ( -y)/y]\nonumber \\ & & = \theta ( y ) ~\tr [ a~ \pi \theta ( y)/|y| ] + \theta ( -y ) ~\tr [ a~ \pi \theta ( -y)/|y| ] \;.\end{aligned}\ ] ] the thesis follows by comparing the last equation with the general formula ( [ aveop ] ) for the group average .99 s. l. braunstein and a. k. 
pati , _ quantum information with continuous variables _ ( kluwer academic , dordrecht , 2003 ) .s. l. braunstein and h. j. kimble , phys .. lett . * 80 * , 869 ( 1998 ) . c. m. caves and p. d. drummond , rev .phys . * 66 * , 481 ( 1994 ) .k. banaszek , g. m. dariano , m. g. a. paris , and m. f. sacchi , phys .a * 61 * , 010304(r ) ( 2000 ) .g. m. dariano , m. g. a. paris , and m. f. sacchi , phys .a * 62 * , 023815 ( 2000 ) ; phys .rev . a * 64 * , 019903(e ) ( 2001 ) . c. w. helstrom , _ quantum detection and estimation theory _( academic press , new york , 1976 ) .a. s. holevo , _ probabilistic and statistical aspects of quantum theory _( north holland , amsterdam , 1982 ) .a. s. holevo , j. multivariate anal .* 3 * , 337 ( 1973 ) .a. s. holevo , rep .* 16 * , 385 ( 1979 ) .g. m. dariano , c. macchiavello , and m. f. sacchi , phys .a * 248 * , 103 ( 1998 ) .v. buzek , r. derka , and s. massar , phys .lett . * 82 * , 2207 , ( 1999 ) .g. chiribella , g. m. dariano , p. perinotti , and m. f. sacchi , phys .lett . * 93 * , 180503 ( 2004 ) .h. p. yuen and m. lax , ieee trans .it * 19 * , 740 ( 1973 ) .g. chiribella , g. m. dariano , p. perinotti , and m. f. sacchi , phys .a * 70 * , 062105 ( 2004 ) .g. chiribella , g. m. dariano , p. perinotti , and m. f. sacchi , int .( in press ) , preprint quant - ph/0507007 .g. chiribella , g. m. dariano , and m. f. sacchi , phys .a * 72 * , 042338 ( 2005 ) .i. daubechies , _ ten lectures on wavelets _( siam , philadelphia , 1992 ) .m. hayashi and f. sakaguchi , j. phys . a * 33 * , 7793 ( 2000 ) .j. bertrand and m. irac - astaud , j. phys .a * 35 * , 7347 ( 2002 ) .b. molnr , m g benedict , and j. bertrand , j. phys .a * 34 * , 3139 ( 2001 ) .m. duflo and c. c. moore , j. funct .21 * , 209 ( 1976 ) ; a. l. carey , bull .soc . * 15 * , 1 ( 1976 ) .p. hausladen and w. k. wootters , j. mod . opt . * 41 * , 2385 ( 1994 ) .m. ozawa , in _ research reports on information sciences , series a : mathematical sciences , n. 74 _ , department of information sciences tokyo institute of technology ( 1980 ) .a. grossmann , j. morlet , and t. paul , j. math . phys . * 26 * , 10 ( 1985 ). m. ban , k. kurukowa , r. momose , and o. hirota , int . j. theor36 * , 1269 ( 1997 ) .m. sasaki , a. carlini , and a. chefles , j. phys .a * 34 * , 7017 ( 2001 ) .y. c. eldar and g. d. forney , ieee trans .it * 47 * , 858 ( 2001 ) .g. m. dariano and m. f. sacchi , phys .a. * 52 * , r4309 ( 1995 ) .g. chiribella , g. m. dariano , and m. f. sacchi , preprint quant - ph/0601103 .h. p. yuen , phys .lett . a * 91 * , 101 ( 1982 ) . | we study the problem of joint estimation of real squeezing and amplitude of the radiation field , deriving the measurement that maximizes the probability density of detecting the true value of the unknown parameters . more generally , we provide a solution for the problem of estimating the unknown unitary action of a nonunimodular group in the maximum likelihood approach . remarkably , in this case the optimal measurements do not coincide with the so called square - root measurements . in the case of squeezing and displacement we analyze in detail the sensitivity of estimation for coherent states and displaced squeezed states , deriving the asymptotic relation between the uncertainties in the joint estimation and the corresponding uncertainties in the optimal separate measurements of squeezing and displacement . a two - mode setup is also analyzed , showing how entanglement between optical modes can be used to approximate perfect estimation . |
we examine the results of a set of simulations of collisionless collapse of a stellar system and focus on the fit of the final quasi - equilibrium configurations by means of the so - called models ( see , ) .the models , constructed on the basis of theoretical arguments suggested by statistical mechanics , are associated with realistic luminosity and kinematic properties , suitable to represent elliptical galaxies .all the simulations have been performed using a new version of the original code of van albada ( see ; the code evolves a system of simulation particles interacting with one another via a mean field calculated from a spherical harmonic expansion of a smooth density distribution ) , starting from various initial conditions characterized by approximate spherical symmetry and by a small value of the virial parameter ( ; here and represent total kinetic and gravitational energy ) . from such initial conditions , the collisionless gravitational plasma " evolves undergoing incomplete violent relaxation .the models , in spite of their simplicity and of their spherical symmetry , turn out to describe well and in detail the end products of our simulations .we should note that the present formulation neglects the presence of dark matter .since there is convincing evidence for the presence of dark matter halos also in elliptical galaxies , this study should be extended to the case of two - component systems , before a satisfactory comparison with observed galaxies can eventually be claimed .most simulations have been run with particles ; the results have been checked against varying the number of particles .the units chosen are for mass , for length , and for time . in these unitsthe gravitational constant is , the mass of the model is , and the initial radius of the system .we focus on clumpy initial conditions .these turn out to lead to more realistic final configurations ( , ) .they can be interpreted as a way to simulate the merging of several smaller structures to form a galaxy .the clumps are uniformly distributed in space , within an approximate spherical symmetry ; the ellipticity at the beginning of the simulation is small , so that the corresponding initial shape would resemble that of galaxies .we performed several runs varying the number of clumps and the virial ratio in the range from to .usually the clumps are cold , i.e. their kinetic energy is all associated with the motion of their center of mass .the simulations are run up to time , when the system has settled into a quasi - equilibrium .the violent collapse phase takes place within .the final configurations are quasi - spherical , with shapes that resemble those of galaxies .the final equilibrium half - mass radius is basically independent of the value of .the central concentration achieved is a function of the initial virial parameter , as can be inferred from the conservation of maximum density in phase space .for a spherically symmetric kinetic system , extremizing the boltzmann entropy at fixed values of the total mass , of the total energy , and of an additional third quantity , defined as , leads ( ; see also ) to the distribution function }$ ] , where , , , and are positive real constants ; here and denote single - star specific energy and angular momentum . at fixed value of , one may think of these constants as providing two dimensional scales ( for example , and ) and one dimensionless parameter , such as . 
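The virial parameter used above to characterize the initial conditions can be evaluated directly from a particle realization. The sketch below assumes the common definition u = 2T/|W| (the precise symbols and unit values are elided in the extracted text) and uses a direct O(N^2) pair sum with G = 1 for illustration, rather than the spherical-harmonic mean-field scheme of the actual code.

```python
import numpy as np

def virial_ratio(m, pos, vel, G=1.0):
    """2T/|W| for an N-body snapshot via a direct pair sum (illustration only)."""
    T = 0.5 * np.sum(m * np.sum(vel ** 2, axis=1))            # total kinetic energy
    dx = pos[:, None, :] - pos[None, :, :]                    # pairwise separations
    r = np.sqrt(np.sum(dx ** 2, axis=-1))
    i, j = np.triu_indices(len(m), k=1)
    W = -G * np.sum(m[i] * m[j] / r[i, j])                    # total potential energy
    return 2.0 * T / abs(W)

# a cold random cloud of roughly unit radius gives a small virial ratio
rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(1000, 3))
pos = pos[np.linalg.norm(pos, axis=1) <= 1.0]                 # keep points inside the unit sphere
vel = 0.05 * rng.normal(size=pos.shape)
m = np.full(len(pos), 1.0 / len(pos))
print(virial_ratio(m, pos, vel))
```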
in the followingwe will focus on values of ranging from 3/8 to 1 .the corresponding models are constructed by solving the poisson equation for the self - consistent mean potential generated by the density distribution associated with . at fixed value of ,the models thus generated make a one - parameter family of equilibria , described by the concentration parameter , the dimensionless depth of the central potential well .the projected density profile of the models is very well fitted by the law ( for a definition of the law and of the effective radius , see ) , with the index varying in the range from to , depending on the precise value of and on the concentration parameter . at worst , the residuals from the law are less than mag over a range of more than mag , while in general the residuals are less than mag in a range of more than mag. the models are isotropic in the inner regions , while they are characterized by almost radial orbits in the outer parts .the local value of pressure anisotropy can be measured by .the form of the transition from isotropy ( ) to radial anisotropy ( ) is governed by the index : higher values of give rise to a sharper transition .the anisotropy radius , that is the location where the transition takes place ( with ) , is close to the half - mass radius of the models .in order to study the output of the simulations we compare the density and the anisotropy profiles , and , of the end products with the theoretical profiles of the family of models ; smooth simulation profiles are obtained by averaging over time , based on a total of snapshots taken from to . for the fitting models ,the parameter space explored is that of an equally spaced grid in , with a subdivision of in , from to , and of in , from to ; the mass and the half - mass radius of the models are fixed by the scales set by the simulations .a least analysis is performed , with the error bars estimated from the variance in the time average process used to obtain the smooth simulation profiles . a critical step in this fitting procedureis the choice of the relative weights for the density and the pressure anisotropy profiles .we adopted equal weights for the two terms , checking a posteriori that their contributions to are of the same order of magnitude .as exemplified by fig . [ fig : simc2 ] , the density of all the simulations is well represented by the best fit profile over the entire radial range .the fit is satisfactory not only in the outer parts , where the density falls under a threshold value that is _ nine orders of magnitude _ smaller than that of the central density , but also in the inner regions , where , in principle , there could be problems of numerical resolution .we have performed simulations with different numbers of particles , without noticing significant changes in the relevant profiles and in the quality of the fits .the successful comparison between models and simulations is also interesting because , depending on the initial virial parameter , the less concentrated end products are fitted by a density profile which , if projected along the line - of - sight , exhibits different values of in an best fit analysis . in turn , this may be interpreted in the framework of the proposed weak homology of elliptical galaxies . to some extent , the final anisotropy profiles for clumpy initial conditions are found to be sensitive to the detailed choice of initialization . 
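The fitting procedure described above, a grid search in the model parameters minimizing the sum of a density term and an anisotropy term with equal weights, is standard combined chi-square bookkeeping. The sketch below is schematic: `model_rho` and `model_beta` stand in for the model profiles, and the simulation profiles, error bars, and grids are placeholders for the quantities described in the text.

```python
import numpy as np

def best_fit(r, rho_sim, drho, beta_sim, dbeta, model_rho, model_beta,
             nu_grid, psi_grid):
    """Grid search minimizing chi2_rho + chi2_beta with equal weights."""
    best_chi2, best_params = np.inf, None
    for nu in nu_grid:
        for psi in psi_grid:
            chi2_rho = np.sum(((rho_sim - model_rho(r, nu, psi)) / drho) ** 2)
            chi2_beta = np.sum(((beta_sim - model_beta(r, nu, psi)) / dbeta) ** 2)
            chi2 = chi2_rho + chi2_beta          # equal weights for the two terms
            if chi2 < best_chi2:
                best_chi2, best_params = chi2, (nu, psi)
    return best_chi2, best_params

# model_rho / model_beta would return the model profiles (with mass and half-mass
# radius fixed by the simulation); rho_sim, beta_sim and the error bars come from
# the time-averaged snapshots, as described in the text.
```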
in other words , runs starting from initial conditions with the same parameters , but with a different _ seed _ in the random number generator , give rise to slightly different profiles . the agreement between the simulation and the model profiles is good ( see fig . [ fig : simc2 ] ) , except for about `` irregular '' cases ; the origin of these is not clear , and we argue that they correspond to inefficient mixing in phase space during collapse ( a discussion of these issues will be provided in a separate paper , currently in preparation ) . at the level of phase space , we have performed two types of comparison , one involving the energy density distribution and the other based on . the chosen normalization factors are such that : in fig . [ fig : simc2 ] we plot the final energy density distribution for a simulation run called ( a regular case ) with respect to the predictions of the best fit model identified from the study of the density and pressure anisotropy distributions . the agreement is very good , especially for the strongly bound particles . in some simulations , the distribution of less bound particles shows some deviations from the theoretical expectations , especially in the `` irregular '' cases , when the final exhibits a double peak ; one peak is around , as expected , while the other is located at the point where there was a peak in at the initial time . we interpret this feature as a signature of inefficient phase mixing . finally , at the deeper level of , simulations and models also agree very well , as illustrated in the right set of four panels of fig . [ fig : simc2 ] . for the case shown , the distribution contour lines essentially coincide in the range from to ; however , the theoretical model shows a peak located near the origin , corresponding to an excess of weakly bound stars on almost radial orbits . caption of fig . [ fig : simc2 ] : simulation run , called ( starting from 20 cold clumps of radius ) , and the best fit model ( , ) . _ left set of four panels _ : density as measured from the simulation ( crosses ) vs. the best fit profile ( top left ) . residuals ( in magnitudes ) from the ( dotted line ) and from the ( with ) law for the best fit projected - density profile ( top right ) . anisotropy profile of the simulation ( crosses ) vs. the best fit profile ( bottom left ) . energy density distribution ( bottom right ) . _ right set of four panels _ : final phase space density ( left column ) , compared with that of the best fitting model ( right column ) . the model curve for and surface for have been computed by a monte carlo sampling of phase space . | n - body simulations of collisionless collapse have offered important clues to the construction of realistic stellar dynamical models of elliptical galaxies .
such simulations confirm and quantify the qualitative expectation that rapid collapse of a self - gravitating collisionless system , initially cool and significantly far from equilibrium , leads to incomplete relaxation , that is to a quasi - equilibrium configuration characterized by isotropic , quasi - maxwellian distribution of stellar orbits in the inner regions and by radially biased anisotropic pressure in the outer parts . in earlier studies , as illustrated in a number of papers several years ago ( see and references therein ) , the attention was largely focused on the successful comparison between the models ( constructed under the qualitative clues offered by the n - body simulations mentioned above ) and the observations . in this paper we revisit the problem of incomplete violent relaxation , by making a direct comparison between the detailed properties of a family of distribution functions and those of the products of collisionless collapse found in n - body simulations . |
the standard model of particle physics and the standard model of cosmology are both rife with numerical parameters that must have values fixed by hand to explain the observed world .the world would be a radically different place if some of these constants took a different value .in particular , it has been argued that if any one of six ( or perhaps a few more ) numbers did not have rather particular values , then life as we know it would not be possible : atoms would not exist , or no gravitationally bound structures would form in the universe , or some other calamity would occur that would appear to make the ( alter)-universe a very dull and lifeless place .how , then , did we get so lucky as to be here ?this question is an interesting one because all of the possible answers to it that i have encountered or devised entail very interesting conclusions .an essentially exhaustive list of such answers is : 1 .we just got very lucky : all of the numbers could have been very different , in which case the universe would have been barren but they just happened by pure chance to take values in the tiny part of parameter space that would allow life .we owe our existence to one very , very , very lucky roll of the dice .we were nt particularly lucky : almost any set of parameters would have been fine , because life would find a way to arise in nearly any type of universe .this is quite interesting because it implies ( at least theoretically ) the existence of life forms radically different from our own , existing for example in universes with no atoms or with no bound structure , or overrun with black holes , etc .the universe was specifically designed for life .the choice of constants only happened once , but their values were determined in some way by the need for us to arise. this might be divine agency , or some radical form of wheeler s self - creating universe " , or super - advanced beings that travel back in time to set the constants at the beginning of the universe , etc .however the reader feels about this possibility , they must admit that it would be interesting if true .4 . we did not have to get lucky , because there are many universes with different sets of constants i.e. , the dice were rolled many , many times .we are necessarily in one of the universes that allows life , just as we necessarily reside on a planet that supports life , even when most others may not .this is interesting because it means that there are other very different universes coexisting with ours in a multiverse " .these four answers luck , _ elan vital _ , design , and multiverse will appeal at different levels to different readers .but i think it is hard to argue that the multiverse is necessarily less reasonable than the alternatives . moreover , as is discussed at length elsewhere in this volume , there are quite independent reasons to believe , on the basis of inflation , quantum cosmology , and string / m theory , that there might quite naturally be many regions larger than our observable universe , governed by different sets of low - energy physics .i am not aware of any independent scientific argument for the other three possible explanations . whether they are contemplated as an answer to the why are we lucky " question , or because they are forced upon us from other considerations ,multiverses come at a high price .even if we have in hand a physical theory and cosmological model that lead to a multiverse , how do we test it ? 
if there are many sets of constants , which ones do we compare to those we observe ?in the next section of this chapter i will outline what i think a sound prediction in a multiverse would look like .as will become clear , this requires many ingredients , and there are some quite serious difficulties in generating some of these ingredients , even with a full theory in hand . for this reason ,many short - cuts have been devised to try to make predictions more easily . in the third sectioni will describe a number of these , and show the cost that this convenience entails . finally , in section [ sec - coinc ]i will focus on the interesting question of whether the anthropic approach to cosmology might lead to any _ general _ conclusions about how the study of cosmology will look in coming years .imagine that we have a candidate physical theory and set of cosmological boundary conditions ( hereafter denoted ) that predicts an ensemble of physically realized systems , each of which is approximately homogeneous in some coordinates and can be characterized by a set of parameters ( i.e. the constants appearing in the standard models of particle physics and cosmology ; i assume here that the laws of physics themselves retain the same form ) .let us denote each such system a `` universe '' and the ensemble a `` multiverse '' .given that we can observe only one of these universes , what conclusions can we draw regarding the correctness of , and how ?one possibility would be if there were a parameter for which _none _ of the universes in the ensemble had the value we observe . in this case would be ruled out .( note that any in which at least one parameter has a range of values that it does not take in _ any _ universe is thus rigorously falsifiable , which is a nice thing for a theory to be ) . or perhaps some parameter takes only one value in all universes , and this value matches the observed one .this would obviously be a significant accomplishment of the theory .both possibilities are good as far as they go , and seem completely uncontroversial .but they do not go far enough .what if our observed parameter values appear in some but not all of the universes ?could we still rule out the theory if those values are incredibly rare , or gain confidence if they are extremely common ?i find it hard to see why not .if some theory predicts outcome a of some experiment with probability , and outcome b with probability , i think we would be reluctant to accept the theory if a single experiment were performed and showed outcome b , _ even if we did not get to repeat the experiment_. in fact , it seems consistent with all normal scientific methodology to rule out the theory at confidence the problem is just that without repeating our measurements we will not be able to _ increase _ this confidence .this seems to be exactly analogous to the multiverse _ if _ we can compute , given our , the _ probability _ that we should observe a given value for some observable .can we compute this probability distribution in a multiverse ? perhaps .i will argue that to do so in a sensible way , we would need seven successive ingredients . 1 . first , of course , we require a multiverse : an ensemble of regions , each of which would be considered a universe to observers inside it ( i.e. its properties would be uniform for as far as those observers could see ) , but each of which may have different properties .next we need to isolate the set of parameters characterizing the different universes. 
this might be the set of 20-odd free parameters in the standard model of particle physics ( see , e.g. , ref . and references therein ) , plus a dozen or so cosmological parameters . there might be additional parameters that become important in other universes , or differences ( such as different forms of the physical laws ) that can not be characterized by differences in a finite set of parameters .but for simplicity let us assume that some set of numbers ( where ) fully specify each universe .3 . given our parameters , we need some _ measure _ with which to calculate the multi - dimensional probability distribution for the parameters .we might , for example , count each universe equally " to obtain the probability , defined to be the chance that a randomly chosen universe from the ensemble would have the parameter values ., the probability that are all within the interval $ ] , where is a cumulative probability distribution . ] this can be a bit tricky , however , because it depends on how we delineate the universes : suppose that universes happen to be times larger than universes .what would then prevent us from splitting " each universe into 10 , or 100 , or universes , thus radically changing the relative probability of vs. ?these considerations might lead us to take a different measure such as volume , e.g. to define , the chance that a randomly point in space would reside in a universe with parameter values .but in an expanding universe volume increases , so this would depend on the time at which we choose to evaluate the volume in each universe .we might then consider some counting " object that endures , say a baryon ( which is relatively stable ) , and define , the chance that a randomly chosen baryon would reside in a universe with parameter values .but now we have excluded from consideration universes with no baryons .do we want to do that ?this will be addressed in step ( v ) . for now , note only that it is not entirely clear , even in principle , which measure we should place over our multiverse .we can call this the measure problem . " 4 .once we choose a measure object , we still need to actually compute , and this may be far from easy .for example , in computing , some universes may have infinite volume . in this case values of leading to universes with finite volume will have zero probability .how , though , do we compare two infinite volumes ?the difficulty can be seen by considering how we would count the fraction of red vs. blue marbles in an infinite box .we could pick one red , then one blue , and find a 50 - 50 split .but we could also repeatedly pick 1 red , then 2 blue , or 5 red , then 1 blue .we could do this forever and so obtain any ratio we like .what we would like to do is just grab a bunch of marbles at random " and count the ratio .but in the multiverse case it is not so clear how to perform this random ordering of marbles to pick .this difficulty , which might be termed the ordering problem " , has been discussed a number of times in the context of eternal inflation and a number of plausible prescriptions have been proposed .but there does not seem to be any generic solution , or convincing way to prove that one method is correct .if we have managed to calculate , do we have a prediction ?we have an answer to the question : given that i am ( or can associate myself with ) a randomly chosen -object , which sort of universe am i in ? " but this is _ not _ necessarily the same as the more general question : what sort of universe am i in ? 
" first of all , different -objects will generally give different probabilities , and they can not all be the answer to the same question .second , we may not be all that closely associated with our -object ( which was chosen mainly to provide _ some _ way to compute probabilities ) because it does not take into account important requirements for our existence .for example , if were volume , i would be asking what i should observe given that i am at a random point in space ; but we are _ not _ at a random point in space ( which would on average have a density of / cc ) , but rather at one of the very rare points with density / cc .the reason for this improbable situation is obviously anthropic " we just do not worry about it because we can observe many other regions at the proper density ( if we could not see such regions , we might be more reluctant to accept a cosmological model with such a low average density . ) finally , it might be argued that the question we have answered through our calculation is not nearly as specific a question as we could ask , because we know a lot more about the universe than that it contains volume , or baryons .we might , instead , ask given that i am in a universe with the properties we have already observed , what should i observe in the future ? "+ as discussed at length in , these different specific questions can be usefully thought of as arising from different choices of conditionalization .the probabilities are conditioned on as little as possible , whereas the anthropic question of given that i am a randomly chosen _ observer _, which should i measure " specifies probabilities conditioned on the existence of an observer " , while the approach of given what i know now , what will i see " specifies probabilities conditioned on being in a universe with all of the properties that we have already observed . these are three genuinely different approaches to making predictions in a multiverse that may be termed , respectively , bottom - up " , anthropic " , and top down " .+ let us denote by the conditionalization object used to specify these conditional probabilities . in bottom - up reasoning, it would be the same as the -object ; in the anthropic approach it would be an observer " , and in the top - down approach it could be a universe with the currently - known properties of our universe .it can be seen that they inhabit a spectrum , from the weakest conditionalization ( bottom - up ) to the most stringent ( top - down ) . like our initial -object , choosing a conditionalization is unavoidable and important , and there is no obviously correct choice to make .( see refs . for similar conditionalization schemas . ) 6 .having decided on a conditionalization object , the next step is to compute the number of of -objects per -object , for each set of values of the parameters .for example , if we have chosen to condition on observers , but have used baryons to define our probabilities , then we need to calculate the number of observers per baryon as a function of cosmological parameters .we can then calculate , i.e. 
the probability that a randomly chosen -object ( observer ) resides in a universe with parameters .there are a few possible pitfalls in doing this .first , if is infinite , then the procedure clearly breaks because then becomes undefined .this is why the -object should be chosen to requires as little as possible for its existence ( and hence be associated with the minimal - conditionalization bottom - up approach ) .this difficulty will generically occur if the existence of an -object does not necessarily entail the existence of an -object .for example , if the -object were a baryon but the -object were a bit of volume , then would be infinite for corresponding to universes with no baryons .the problem arises because baryons require volume to be in , but volume does not require a baryon to be in it .this seems straightforward , but gets much murkier when we consider the _ second _ difficulty when calculating , which is that we may not be able to precisely define what an -object is , or what it takes to make one .if we say that the -object is an observer , what exactly does that mean ? a human ? a carbon - based life form? can observers exist without water ?without heavy elements ?without baryons ?without volume ?it seems quite hard to say .we are forced , then , to choose some proxy for an observer , e.g. a galaxy , or a star with possible planets , etc .but our probabilities will perforce depend on the chosen proxy and this must be kept in mind .+ it is worth noting a small bit of good news here .if we do manage to consistently compute for some measure object , then insofar as we want to condition our probabilities on -objects , we have solved the measure problem : if we could consistently calculate for a different measure object , then we should obtain the same result for , i.e. .thus our choice of ( rather than ) becomes unimportant .the final step in making predictions is to make the assumption that the probability that we will measure some set of is given by the probability that a randomly chosen -object will .this assumption really entails two others : first , that we are some how directly associated with -objects , and second that we have not , simply by bad luck , observed highly improbable values of the parameters .the assumption that we are _ typical _ observers has been termed the principle of mediocrity " .one may argue about this assumption , but _ some _ assumption is necessary if we are to connect our computed probabilities to observations , and it is difficult to see what alternative assumption would be more reasonable. the result of all this work would be the probability that a randomly selected -object ( out of all of the -objects that exist in multiverse ) would reside in a universe governed by parameters , along with a reason to believe that this same probability distribution should govern what we will observe .we can then make the observations ( or consider some already - made ones ) .if the observations are highly improbable according to our predictions , we can rule out the candidate at some confidence that depends on how improbable our observations were .apart from the manifest and grave difficulties involved in actually completing the seven listed steps in a convincing way , i think the only real criticism that can be leveled at this approach is that unless for our observed paramters , there will always be the chance that the was correct and we measured an unlikely result . 
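The reweighting in steps (v) through (vii) above, multiplying the a priori distribution by the number of conditionalization objects per measure object and renormalizing, is trivial to carry out once both ingredients are somehow in hand. The sketch below is a one-parameter toy version; the prior and the "observers per measure object" factor are made-up illustrative functions, not the output of any real landscape or structure-formation calculation.

```python
import numpy as np

p = np.linspace(0.0, 5.0, 501)                       # one toy parameter
prior = np.exp(0.8 * p)                              # a priori weight per measure object (made up)
observers_per_object = np.exp(-((p - 1.0) / 0.7) ** 2)   # conditionalization objects per measure object (made up)

posterior = prior * observers_per_object             # conditioned distribution before normalization
posterior = posterior / posterior.sum()
prior = prior / prior.sum()

print(p[np.argmax(prior)], p[np.argmax(posterior)])  # conditioning shifts the most probable value
```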
usually , we can rid ourselves of this problem by repeating our experiments to make as small as we like ( at least in principle ) , while here we do not have that option once we have used up " the measurement of all of the paramters required to describe our universe ( which appears to be rather surprisingly few , at least according to current theories ) , we are done .although the idea of a multiverse has been around for quite a while , no one has ever really come close to making the sort of calculation outlined in the previous section . instead ,those wishing to make predictions in a multiverse context have made strong assumptions about which parameters actually vary across the ensemble , about the choice of -object , and about the quantities and that go into predicting their probabilities for measurement .some of these shortcuts aim simply to make a calculation tractable ; others are efforts to avoid anthropic considerations , or alternatively to use anthropic considerations to avoid other difficulties .i would not have listed any ingredients that i thought could be omitted from a really sound calculation , thus all of these shortcuts are necessarily incomplete ( some , in my opinion , disastrously so ) . but by listing and discussing them , i hope to give the reader both a flavor for what sort of anthropic ( or anthropic - esque ) arguments have been made in the literature , and where they may potentially go astray .this assumption underlies a sort of anthropic reasoning that has earned the anthropic principle a lot of ill will .it goes something like : `` let s assume that lots of universes governed by lots of different parameter values exist .then since only universes with parameter values almost exactly the same as ours allow life , we must be in one of those , and we should not find it strange if our parameter values seem special . '' in the conventions i have described , this is essentially equivalent setting -objects to be observers , then hoping that the hospitality factor " is very narrowly peaked around one particular set of parameters . in this case , the _ a priori _ probabilities are pretty much irrelevant because the shape of will pick out just one set of parameters .because our observed values definitely allow observers , the allowed set must then be very near .three problems with this type of reasoning are as follows .first , it is rather circular : it entails picking the -object to be an observer , but then quickly substituting a universe just like ours " for the -object , with the reasoning that such universes will definitely support life .thus we have arrived at : the universe we observe should be pretty much like the observed universe .the way to avoid this silliness is to allow at least the possibility that there are life - supporting universes with , i.e. to discard the unproven _ assumption _ that has a single , dominant , narrow peak .the second problem is that if were really _ so _ narrowly peaked as to render irrelevant , then we would be in serious trouble as theorists , because we would lose any ability to distinguish between candidates for our fundamental theory : unless our observed universe is _ impossible _ in the theory , then the anthropic factor would force the predictions of the theory to match our observations . as discussed in the next section , this is not good .the third problem with being an extremely peaked function is that it does not appear to be true !as discussed in below , it appears that for any reasonable surrogate for observers ( e.g. 
galaxies like ours , or stars with heavy elements , etc . ) , calculations done using our current understanding of galaxy and structure formation indicate that the region of parameter space in which there can be many of those objects may be small compared to the full parameter space , but it is much larger than the region compatible with our observations .as mentioned in section 2 , there is a ( relatively ! ) easy thing to do with a multiverse theory : work out which parameters combinations can not occur in _ any _ universe .if the combination we actually observe is one of these , then the theory is ruled out .this is unobjectionable , but a rather weak way to test a theory because given two theories that are _ not _ ruled out , we have no way whatsoever of judging one to be better , even if the parameter values we observe are in some sense generic in one and absurdly rare in the other . , this approach which makes _ no _ assumptions about is equivalent to the approach just described of making the very strong assumption that allows only one specific set of parameter values , because in either case a theory can only be ruled out if our observed values are impossible in that theory . ]this is not how science usually works .for example , suppose our theory is that a certain coin - tossing process is unbiased . if our only way to test this theory was to look for experimental outcomes that are impossible , then the theory would unfalsifiable : we would have to accept it theory for _ any _ coin we are confronted with , because _ no _ sequence of tosses would be impossible in it ! even if 10,000 tosses in a rowall came up heads , we would have no grounds for doubting our theory because while getting heads 10,000 times in a row on a fair coin is absurdly improbably , it is not impossible .nor would we have reason to prefer the ( seemingly much better ) nearly every toss comes out heads " theory . clearly this is a situation we would like to improve on , as much in universes as in coin tosses .one possible improvement would be to assume that we will observe a typical " set of parameters in the ensemble , i.e. 
that we will employ bottom - up " reasoning as described in the first section , by using the _ a priori _ ( or prior " ) probabilities for some choice of measure - object such as universes , and just ignore the conditionalization factor .there are two possible justifications for this .first , we might simply want to avoid any sort of anthropic issues on principle .second , we might hope that some parameter values are much , much more common than others , to the extent that the -factor becomes irrelevant in other words that ( rather than ) is a very strongly peaked around some particular parameters .the problem with this approach is the measure problem " discussed above : there is an implicit choice of basing probabilities on universes ( say ) rather than on ( say ) volume elements or baryons .each of these measures has problems for example , it seems that probabilities based on universes " depends on how the universes are delineated , which can be ambiguous .moreover there seems to be no reason to believe that predictions made using any two measures should agree particularly well .for example , as discussed elsewhere in this volume , in the string theory landcape " there are many possible parameter sets , depending on which metastable minimum one chooses in a potential that depends in turn on a number of fluxes that can take a large range of discrete values .imagine that exponentially many more minima lead to than lead to .should we expect to observe ?not necessarily , because the relative number of -universes vs. -universes that _ actually come into existence _ may easily differ exponentially from the relative number of -minima vs. -minima .( this seems likely to me in an eternal - inflation context , where the relative number universes could depend on exponentially - suppressed tunnelings between vacua . )worse yet , these may in turn differ exponentially ( or even by an infinite factor ) from the relative numbers of baryons , or relative volumes . in short , while we are free to use bottom - up reasoning with any choice of measure object we like , we are not free to assert that other choices would give similar predictions , or that conditionalization can be rendered irrelevant .so we had better have a pretty good reason for the choice we make .another way in which one might hope to circumvent anthropic issues is to condition the probabilities on some or all observations that have already been made . in this top - down " ( or perhaps pragmatic " ) approachwe ask : given everything that has been observed so far , what will we observe in some future measurement ?it has a certain appeal , as this is often what is done in experimental science : we do not try to _ predict _ what our laboratory will look like , just what will happen given that the lab is in a particular state at a given time . in the conventions of section 2 ,the approach could consist of choosing the -object to be universes with parameters agreeing with the measured values .while appealing , this approach suffers some deficiencies : * it still does not completely avoid the measure problem , because even once we have limited our consideration to universes that match our current observations , we must still choose a measure with which to calculate the probabilities for the remaining ones . 
* through our conditioning , we may accept theories for which our parameter values are wildly improbable , without supplying any justification as to why we observe such improbable values .this is rather strange .imagine that i have a theory in which the cosmological constant is ( with very high probability ) much higher than we observe , and the dark matter particle mass is almost certainly .i condition on our observed , simply accepting that i am in an unusual universe .now say i measure .i would like to say my theory is ruled out .fine , but here is where it gets odd : according to top - down reasoning , i should also have already ruled it out if i had done my calculation in 1997 , before was measured . andsomeone who invented the very same theory next week but had not been told that i have already ruled it out would _ not _ rule it out , but instead just take the low value of ( along with the observed ) as part of the conditionalization ! * if we condition on everything we have observed , we obviously give up the possibility of _ explaining _ anything we have observed ( which at this point is quite a lot in cosmology ) through our theory .the last two issues motivate variations on the top - down approach in which only some current observations are conditioned on .two of which i am aware are : 1 .we might _ start _ by conditioning on all observations , then progressively condition on less and less and try to predict " the things we have decided not to condition on ( as well , of course , as any _ new _ observations) .the more we can predict , the better our theory is . the problem is that either ( a ) we will get to the point where we are conditioning on as little as possible ( the bottom - up approach ) , and hence the whole conditionalization process will have been a waste of time , or ( b ) we will still have to condition on some things , and admit either that these have an anthropic explanation , or that we just choose to condition on them ( leading to the funny issues discussed above ) .2 . 
we might choose at the outset to condition on things that we think may be fixed anthropically ( without trying to actually generate this explanation ) , then try to predict the others .this is nice in being relatively easy , and in providing a justification for the conditionalization .it suffers from the problems of ( a ) guessing which parameters are anthropically important and which are not , ( b ) even if a parameter is anthropically unimportant , it may be strongly correlated in with one that is , and ( c ) we still have to face the measure problem , which we can not avoid by counting conditioning on observers , because we are avoiding anthropic considerations .most of the shortcuts " discussed so far have been attempts to avoid anthropic considerations .but we may , instead , consider how me might try to formulate an anthropic prediction ( or explanation ) for some observable , without going through the full calculation outlined in section 2 .the way of doing this that has been employed in the literature ( largely in the efforts of vilenkin and collaborators ) is as follows .first , one fixes all but one ( or perhaps two ) of the parameters to the observed values .this is done for tractability and/or because one hopes that they will have non - anthropic explanations .let us call the parameter that is allowed to vary across the ensemble .next , an -object is chosen such that _ given that only varies _ , it is hoped that ( a ) the number of these objects ( per baryon , or per comoving volume element ) in a given universe is calculable , and ( b ) this number is arguably proportional to the number of observers .for example , if only varies across the ensemble , galaxies might make reasonable -objects because a moderately different will probably not change the number of observers per galaxy , but _ will _ change the number of galaxies in a way that can be computed using fairly well - understood theories of galaxy and structure formation ( for examples see ) .third , it is assumed that is either flat or a simple power - law , without any complicated structure .this can be done just for simplicity , but it is often argued to be natural . the flavor of this argument is as follows . if is to have interesting structure over the relatively small range in which observers are abundant , there must be a parameter of order the observed in the expression for .but precisely this absence is what motivated the anthropic approach .for example , if the expression for contained the energy scale corresponding to the observed , the origin of that energy scale would probably be more interesting than our anthropic argument , as it would provide the basis for a ( non - anthropic ) solution to the cosmological constant problem ! under these ( fairly strong ) assumptions we can then actually calculate and see whether or not the observed value is reasonably probable given this predicted distribution . for examplewhen alone is varied , a randomly chosen galaxy is predicted to lie in a universe with comparable to ( but somewhat larger than ) the value we see . with . for ,higher values would be predicted , and with lower values would be favored . ]i actually think this sort of reasoning is pretty respectable , _ given the assumptions made_. in particular , the anthropic argument in which _ only _ varies is a relatively clean one .but there are a number of pitfalls when it is applied to parameters other than , or when one allows multiple parameters to vary simultaneously . 
*assuming that the abundance of observers is strictly proportional to that of galaxies only makes sense if the number of galaxies and not their properties changes as varies .however , changing nearly any cosmological parameter will change the properties of typical galaxies .for example , increasing will decrease galaxy numbers , but also make galaxies smaller on average , because a high squelches structure formation at late times when massive galaxies form .increasing the amplitude of primordial perturbations would similarly lead to smaller , denser but more numerous galaxies , as would increasing ratio of dark matter to baryons . in these cases, we must specify in more detail what properties an observer - supporting galaxy should have , and this is very difficult to do without falling into the circular - argument trap of assuming that only galaxies like ours support life .finally , this sort of strategy seems unlikely to work if we try to change _non_-cosmological parameters , as this could lead to radically different physics and the necessity of thinking _ very _ hard about what sort of observers there might be .* the predicted probability distribution clearly depends on , and the assumption that is flat , or a simple power law , can break down .this can happen even for but perhaps more naturally for other parameters such as the dark matter density for which particle physics models can already yield sensible values .moreover , this breakdown is much more probable if ( as discussed below and contrary to the assumption made above ) the hospitality factor is significant over many orders of magnitude in .* calculations of the hospitality factor can go awry if is changed more than a little .for example , a neutrino mass slightly larger than we observe would suppress galaxy formation by erasing small - scale structure .but neutrinos with a large ( ) mass would act as dark matter and lead to strong halo formation .whether these galaxies would be hospitable is questionable ( they would be very baryon - poor ) , but the point is that the physics becomes _ qualitatively _ different . as another example , a lower photon / baryon ratio would lead to earlier - forming , denser galaxies . but a _ much _ smaller value would lead to qualitatively different structure formation , as well as the primordial generation of heavy elements . as discussed at length in ref . , these changes are very dangerous because over orders of magnitude in , will tend to change by many orders of magnitude .thus even if these alter - universes only have a few observers in them , they may dominate and hence qualitatively change the predictions .* along the same lines , but perhaps even more pernicious , when multiple parameters are varied simultaneously , the effects of some variations can offset the effect of others so that universes quite different from ours can support many of our chosen -objects . for example , increasing cuts off galaxy formation at a given cosmic density , but raising the perturbation amplitude causes galaxies to form earlier ( thus nullifying the effect of ) .this can be seen in the calculations of , and is discussed explicitly in .many such deneneracies exist , because rasing , , or the neutrino mass all decrease the efficiency of structure formation , while raising or increase the efficiency . as an extreme case, it was shown in that if and are allowed to vary with , then universes with of times our observed value could arguably support observers ! 
including more cosmological parameters , or non - cosmological parameters , can only make this problem worse .these problems indicate that while anthropic arguments concerning in the literature are relatively clean " , it is unclear whether other parameters ( taken individually ) will work as nicely .more importantly , a number of issues arise when several parameters are allowed to vary at once , and there does not seem to be any reason to believe that success in explaining one parameter anthropically will persist when additional parameters are allowed to vary . in some cases , it may : for example , allowing neutrino masses to vary in addition to does not appear to spoil the anthropic explanation of a small but nonzero cosmological constant .on the other hand , allowing to vary does , unless is strongly peaked at small values of .i suspect that allowing or to vary along with would have a similar effect .for those serious about making predictions in a multiverse , i would propose that rather than working to generate additional incomplete anthropic arguments by taking shortcuts , a much better job must be done in each of the individual ingredients .for one example , our understanding of galaxy formation is sufficiently strong that the multi - dimensional hospitality factor could probably be computed for within a few orders of magnitude of the observed values , for of galaxies with properties within some range .second , despite some nice previous work , i think the problem of how to compute in eternal inflation is a pretty open one . finally , the string / m theory landscape ( which is generating a lot of interest in the present topic right now ) can not hope to say much of anything about until its place in cosmology is understood in particular , we need both a better understanding of the statistical distribution of field values that result from evolution in a given potential , and also an understanding of how transitions between vacua with different flux values occur , and exactly what is transitioning .the preceding sections should have suggested to the reader that it will be a huge project to compute a sound prediction of cosmological and physical parameters from a multiverse theory in which they vary .it may be so hard that it will be a very long time before any such calculation is at all believable .it is worth asking then : is there any way nature might give us an indication as to whether the anthropic approach is a sensible one , i.e. does the anthropic approach make any sort of _ general _ predictions even without the full calculation of ?interestingly , i think the answer might be yes : i am aware of two such general ( though somewhat vague ) predictions of the anthropic approach . to understand the first , assume that only one parameter , , varies , and consider , the probability distribution in , given by some theory .for near the observed value , can basically only be doing one of three things : it can rise with , fall with , or be approximately constant . in the first two cases ,the theory would predict that we should see a value of that is , respectively , higher or lower than we actually do _ if _ no anthropic conditionalization is applied .now suppose we somehow compute and find that it falls off quickly for values of much smaller or larger than we observe , i.e. that only a range is anthropically acceptable " .then we have an anthropic argument explaining , because this falloff means that will only be significant near . 
but now note that _ within _ the anthropically acceptable range , will be peaked near if is increasing with , or near if is decreasing with .that is , we should expect at one edge of the anthropically acceptable range .this idea has been called the principle of living dangerously " .it asserts that for a parameter that is anthropically determined , we should expect that a calculation of would reveal that observers would be strongly suppressed either for slightly larger or slightly smaller than , depending on whether is rising or falling .now , this is not a very specific prediction : exactly where we would expect to lie depends both on how steep is , and how sharp the cutoff in is for outside of the anthropically acceptable range . andit would not apply to anthropically - determined parameters in all possible cases .( for example , if were flat near , but also very high at , anthropic effects would be required to explain why we do not observe the very high value ; but any region within the anthropically acceptable range would be equally probable , so we would not expect to be , so to speak , living on the edge . ) despite these caveats , this is a prediction of sorts , because the naive expectation would probably be for our observation to place us somewhere in the interior of the region of parameter space that is hospitable to life , rather than at the edge .a second sort of general prediction of anthropic reasoning is connected to what might be called cosmic coincidences . " for example , many cosmologists have asked themselves ( and each other ) why the current density in vacuum energy , dark matter , baryons , and neutrinos are all within a couple of orders of magnitude of each other making the universe a much more complicated place than it might be .conventionally , it has been assumed that these coincidences are just that , and follow directly from fundamental physics that we do not understand .but if the anthropic approach to cosmology is really correct ( that is , if it is the real answer to the question of why these densities take the particular values they do ) , then the explanation is quite different : the densities are bound together by the necessity of observers existence , because only certain combinations will do . 
more explicitly ,suppose several cosmological parameters are governed by completely unrelated physics , so that their individual prior probabilities simply multiply to yield the multidimensional probability distribution .for example , we might have .but even if factors , the hospitality factor will almost certainly not : if galaxies are -objects , the number of galaxies formed at a given will depend on both other parameters , and only certain combinations will give a significant number of observers .thus will likewise have correlations between the different parameters that lead to only particular combinations ( for example those with for a given and ) having high probabilities .the cosmic coincidences would be explained in this way .this anthropic explanation of coincidences , however , should not only apply to things that we have already observed .if it is correct , then it should apply also to _ future _ observations ; that is , we should expect to uncover yet more bizarre coincidences between quantities that seem to follow from quite unrelated physics .how might this actually happen ?consider dark matter .we know fairly precisely how much dark matter there is in the universe , and what its basic properties are .but we have no real idea what it actually is , and there are many , many possible candidates that have been proposed in the literature .in fact , we have no _ observational _ reason to believe that dark matter is one substance at all : in principle it could be equal parts axions , supersymmetric particles , and primordial black holes .the reason most cosmologists do not expect this is that it would be a strange coincidence if three substances involving quite independent physics all wound up with essentially the same density in our universe .but of course this would be just like the suprising - but - true coincidences that hold in already - observed cosmology . in the anthropic approach, these comparable densities could be quite natural .to see why , imagine that there are two _ completely independent _ types of dark matter permeating the ensemble : in each universe , they have some particular densities and out of a wide range of possibilities , so that the densities in a randomly chosen universe ( or around a randomly chosen baryon , etc .) will be given probabilistically by .under these assumptions there is no reason to expect that we should observe based just on these _ a priori _ probabilities .now suppose , though , that picks out a particular narrow range of _ total _ dark matter density as anthropically acceptable .that is , is narrowly peaked about some . in this case, the peak of the probability distribution , which indicates what values a randomly chosen observer should see , will occur where is maximized _ subject to the condition _ that .for simplicity let both prior probabilities be power laws : and .now the coincidence : it is not hard to show that if and , then the maximum will occur when .that is , the two components are likely to have similar densities unless the _ power law indices _ of their probability distributions differ by orders of magnitude .being peaked where is declining , i.e. we should be living _ outside _ the anthropically comfortable range , not just dangerously but downright recklessly .] of course , there are many ways in which this coincidence could fail to occur ( e.g. 
negative power - law indices , or correlated probabilities ) , but the point is that there is a quite natural set of circumstances in which the components are coincident , even though the _ fundamental _ physics is completely unrelated .the preceding sections should have convinced the reader that there are good reasons for scientists to be very worried if we live in a multiverse : in order to test a multiverse theory in a sound manner , we must perform a fiendishly difficult calculation of , the probability that an -object will reside in a universe characterized by parameters . andbecause of the shortcomings of the shortcuts one may ( and presently must ) take in doing this , almost any particular multiverse prediction is going to be easy to criticize ; only a quite good calculation is going to be at all convincing .much worse , we face an unavoidable and important choice in what should be : a possible universe , or an existing universe , or a universe matching current observations , or a bit of volume , or a baryon , or a galaxy , or an `` observer '' , etc .i find it disturbingly plausible that observers " really are the correct conditionalization object , that their use as such is the correct answer to the measure problem , and that anthropic effects are the real explanation for the values of some parameters ( just as for the local density that we observe ) .many cosmologiest appear to believe that taking the necessity of observers into account is shoddy thinking , and is employed only because it is the _ easy _ way out of solving problems the right " way .but the arguments of this paper suggest that the truth may well be exactly the opposite : the anthropic approach may be the right thing to do in principle , but nearly impossible in practice .nonetheless , we can not do away with multiverses just by wishing them away : we may in fact live in one , whatever the inconvenience to cosmologists .the productive strategy then seems to be one of accepting multiverses as a possibility , and working toward understanding how to calculate the various ingredients necessary to make predictions in one .whether really performing such a calculation will turn out to be possible , but it is certainly impossible if not attempted .even if we can not calculate in the foreseeable future , however , cosmology in a multiverse may not be completely devoid of predictive power .for example , of anthropic effects are at work , they should leave certain clues .first , if we could determine the region of parameter space hospitable to observers , we should find that we are living in the outskirts of the livable region , rather than somewere in its midst .second , if the anthropic effects are the explanation of the parameter values and coincidences between them that we see , then it ought to predict that new coincidences will be observed in future observations .if in the next several decades dark matter is resolved into several equally important components , dark energy is found to be three independent substances , and several other `` cosmic coincidences '' are observed , even someone the most die - hard skeptics might accede that the anthropic approach may have validity why else would be universe be so very baroque ? 
on the other hand ,if we are essentially finished in defining the basic cosmological constituents , and the defining parameters are in the midst of a relatively large region of parameter space that might arguably support observers , then i think the anthropic approach would lose almost all appeal it has ; we would be forced to ask : why is nt the universe much wierder ?99 m. rees , just six numbers .( new york : basic books , 2000 ) .m. tegmark , arxiv : astro - ph/0410281 .a. d. linde and a. mezhlumian , phys .d * 53 * , 4267 ( 1996 ) [ arxiv : gr - qc/9511058 ] .v. vanchurin , a. vilenkin and s. winitzki , phys .d * 61 * , 083507 ( 2000 ) [ arxiv : gr - qc/9905097 ] .a. h. guth , phys .rept . * 333 * , 555 ( 2000 ) [ arxiv : astro - ph/0002156 ] .j. garriga and a. vilenkin , phys .d * 64 * , 023507 ( 2001 ) [ arxiv : gr - qc/0102090 ] .a. aguirre and m. tegmark , jcap , in press ; arxiv : hep - th/0409072 . n. bostrum , anthropic bias : observation selection effects in science and philosophy ( new york : routledge , 2002 ) . s. w. hawking and t. hertog , phys. rev .d * 66 * , 123509 ( 2002 ) [ arxiv : hep - th/0204212 ] .a. albrecht , arxiv : astro - ph/0210527 .m. dine , arxiv : hep - th/0410201 .j. garriga , m. livio and a. vilenkin , phys .d * 61 * , 023503 ( 2000 ) [ arxiv : astro - ph/9906210 ] . j. garriga and a. vilenkin , phys .d * 61 * , 083502 ( 2000 ) [ arxiv : astro - ph/9908115 ] . j. garriga and a. vilenkin , phys .d * 64 * , 023517 ( 2001 ) [ arxiv : hep - th/0011262 ] .m. tegmark .a. vilenkin , and l. pogosian , arxiv : astro - ph/0304536 .l. pogosian , a. vilenkin and m. tegmark , arxiv : astro - ph/0404497 .m. l. graesser , s. d. h. hsu , a. jenkins and m. b. wise , arxiv : hep - th/0407174 .a. vilenkin , arxiv : gr - qc/9512031 .s. weinberg , phys .d * 61 * , 103505 ( 2000 ) l. smolin , arxiv : hep - th/0407213 . m. tegmark and m. j. rees , astrophys .j. * 499 * , 526 ( 1998 ) s. dimopoulos and s. thomas , phys .b * 573 * , 13 ( 2003 ) [ arxiv : hep - th/0307004 ] . | the notion that there are many `` universes '' with different properties is one answer to the question of `` why is the universe so hospitable to life ? '' this notion also naturally follows from current ideas in eternal inflation and string / m theory . but how do we test such a `` multiverse '' theory : which of the many universes do we compare to ours ? this paper enumerates would would seem to be essential ingredients for making testable predictions , outlines different strategies one might take within this framework , then discusses some of the difficulties and dangers inherent in these approaches . finally , i address the issue of whether there may be some _ general , qualitative _ predictions that multiverse theories might share . |
a web crawler traditionally fulfills two purposes : discovering new pages and refreshing already discovered pages .both of these problems have been extensively investigated over the past decade ( see the survey paper by olston and najork ) .however , recently , the role of the web as a media source became increasingly important as more and more people start to use it as their primary source of up - to - date information .this evolution forces crawlers of web search engines to continuously collect newly created pages as fast as possible , especially high - quality ones . surprisingly , user traffic to many of these newly created pages grows really quickly right after they appear , but lasts only for a few days .for example , it was discussed in several papers that the popularity of news decreases exponentially with time .this observation naturally leads to distinguishing two types of new pages appearing on the web : _ ephemeral _ and _ non - ephemeral _ pages . note that here we do not consider the ephemeral content , which might be removed before it hits the index as in ( e.g. advertisements or the `` quote of the day '' ) , but we consider persistent content that is ephemeral in terms of user interest ( e.g. news , blog and forum posts ) .we clustered user interest patterns of some new pages discovered in one week ( see section [ contentsources ] for details ) , and figure [ fig : yabar ] shows the centroids of the obtained clusters . in section [ contentsources ] , we show that a significant fraction of new pages appearing on the web every day are ephemeral pages .the cost of the time delay between the appearance of such ephemeral new pages and their crawl is thus very high in terms of search engine user satisfaction . moreover , if a crawler fails to find such a page during its period of peak interest , then there might be no need to crawl it at all .it was reported in , that 1 - 2% of user queries are extremely recency sensitive , while even more are also recency sensitive to some extent .the problem of timely finding and crawling ephemeral new pages is thus important , but , to the best of our knowledge , is not studied in the literature . indeed ,different metrics were suggested to measure the coverage and freshness of the crawled corpus , but they do not take into account the degradation of the profit to a search engine contributed by these pages . crawling policies based on such metrics may then crawl such new pages not quickly enough , and even crawl already obsolete content .thus , we need a new quality metric , well thought out for this task , and a crawling algorithm optimized to maximize this metric over time , that takes into account this degradation of pages utility .our daily experience of using the web also suggests that such ephemeral new pages can be found from a relatively small set of `` hubs '' or _ content sources_. we investigate this intuition and show that it is possible and practical to find such sources at scale .examples of content sources are main pages of blogs , news sites , category pages of such news sites ( e.g. politics , economy ) , rss feeds , sitemaps , etc . , and one needs to periodically recrawl such sources in order to find and crawl ephemeral new pages way before their peak of user interest . however, frequent recrawling of all these sources and all new pages found on them requires a huge amount of resources and is quite inefficient . 
in order to solve this problem efficiently ,we analyze the problem of dividing limited resources between different tasks ( coined as _ holistic crawl ordering _ by oslton and najork in ) , i.e. , here between the task of crawling ephemeral new pages and the task of recrawling content sources in order to discover those new pages . a possible solution for this problemis to give a fixed quota to each policy ( see , e.g. , ) , but we will show that such solutions based on fixed quotas are far from being optimal . in this paper , we propose a new algorithm that dynamically estimates , for each content source , the rate of new links appearance in order to find and crawl newly created pages as they appear . as a matter of fact , it is next to impossible to crawl all these new pages immediately due to resource constraints , therefore , a reasonable crawling policy has to crawl the highest quality pages in priority .the quality of a page can be measured in different ways , and it can , for example , be based on the link structure of the web graph ( e.g. , in - degree or pagerank ) , or on some external signals ( e.g. , query log or the number of times a page was shown in the results of a search engine ) . in this paper , we propose to use the number of clicks in order to estimate the quality of pages , and predict the quality of newly created pages by using the quality of pages previously linked from each content source . by the number of clicks , we mean the number of times a user clicked on a link to this page on a search engine results page ( serp ) , which most reliably indicates a certain level of user interest in the page s content . in this way , we are able , in fact , to incorporate user feedback into the process of crawling for our algorithm to find and crawl the best new pages . to sum up , this paper makes the following contributions : * we formalize the problem of timely crawling of high - quality ephemeral new web content by suggesting to optimize a new quality metric , which measures the ability of a crawing algorithm to solve this specific problem ( section [ formalization ] ) .* we show that most of such ephemeral new content can be found at a small set of content sources , and we propose a method to find such sources ( section [ contentsources ] ) . *we propose a practical algorithm , which periodically recrawls content sources and crawls newly created pages linked from them , as a solution of this problem .this algorithm uses user feedback to estimate the quality of content sources ( section [ algo ] ) .* we validate our algorithm by comparing it to other crawling strategies on real - world data ( section [ exp ] ) . besides , in section [ relatedwork ] , we review related work , while in section [ conclusion ] , we conclude the paper and discuss possible directions for future research .in this section , we formalize the problem under consideration by introducing an appropriate quality metric , which measures the ability of a crawling algorithm to solve this problem . 
as we discussed in the introduction, we deal with pages for which user interest grows within hours after they appear , but lasts only for several days .the profit of crawling such ephemeral new pages thus decreases dramatically with time .assume that for each page , we know a decreasing function , which is the profit of crawling this page with delay seconds after its creation time ( by profit , one can mean the expected number of clicks or shows on serp ) .if , finally , each page was crawled with a delay , we can define the _ dynamic quality _ of a crawler as : } p_i ( \delta t_i).\ ] ] in other words , the dynamic quality is the average profit gained by a crawler per second in a time window of size . the dynamic quality defined abovecan be useful to understand the influence of daily and weekly trends on the performance of a crawler .let us now define the _ overall quality _ of a crawler , which allows to easily compare different algorithms over larger time windows .it is natural to expect that if is large enough then the influence of season and weekly trends of user interest will be reduced . in other words, the function tends to a constant while increases .thus , we can consider the _ overall quality _ : } p_i ( \vartriangle t_i),\ ] ] which does not depend on and . in this paper , by profit of crawling a page at time , we mean the total number of clicks this page will get on a serp after this time ( ignoring any indexing delay ) . in this way, we can approximate the relevance of a page to current interests of users . from a crawler s perspective, it is thus an appropriate measure of the contribution of new pages to a search engine performance ( given a specific ranking method ) . alternatively , instead of the number of clicks , we could use the number of shows , i.e. , the number of times a page was shown in the top results of a search engine .this value also reflects a crawler s performance in the sense that we only want to crawl ephemeral new pages , which are going to be shown to users .but , as we discuss further in this section , the number of clicks and the number shows behave similarly and we thus use clicks since they reflect the actual preference of users . at this point, we have defined a new metric to measure the quality of crawling ephemeral new pages .we are going to use this metric to validate our algorithm in section [ exp ] .note , that this metric can only be evaluated with some delay because we need to wait when crawled pages are not shown anymore to take their profit , i.e. , their number of clicks , into account .0.5 0.5 however , for our crawling algorithm , we do not want to wait for such a long period of time to be able to use the profit of freshly crawled pages in order to quickly adapt to changes of content sources properties .of course , we do not know the function for pages that just appeared , but we can try to predict it .it is natural to expect that pages of similar nature exhibit similar distributions of user interest . 
in order to demonstrate this , on figure [ fig : avg_shows ] and figure [ fig : avg_clicks ] we plot respectively the average number of cumulative shows and clicks depending on the page s age for all pages published on a news site and a blog ( both being randomly chosen ) over week .we can see that almost all clicks and shows appear in the first week of a page s life , and that the dependency of the cumulative number of clicks ( shows ) gathered by a page on the page s age is pretty well described by the function : , where is the total number of clicks ( shows ) a page gathers during its life .we thus propose the following approximation of the profit ( i.e. , the number of future clicks ) : where the _ rate of decay _ and the _ profit _ are content - source - specific and should be estimated using historical data ( see section [ p&mu ] for details ) .we use this approximation in section [ algo ] in order to analyze the problem under consideration theoretically .[ howtofind ] in this section , we show that most ephemeral new content can indeed be found at a small set of content sources and then describe a simple procedure for finding such a set , that fits our use case .our hypothesis is that one can find the most of ephemeral new pages appearing on the web at a small set of content sources , but links from these sources to new pages are short living so a crawler needs to frequently recrawl these sources to avoid missing links to new pages , especially to high - quality pages . in order to validate this hypothesis about content sources ,we need to follow the evolution over time of the link structure of the web , to understand which content sources refer which new pages as they appear . our web crawler logs could be used for this , but there are two main issues with this approach : 1 ) keeping the full history of new pages linked from each content source , even for some small time period , is impractical due to resource constraints ; and 2 ) existing crawlers do not revisit each content source often enough to provide more than a really partial view of the evolution of the link structure of the web .so , instead , we used the toolbar logs continuously collected at yandex ( russia s most popular search engine ) to monitor user visits to web pages . in this way, we can easily track the appearance of new pages that are of interest to at least one user , know which content sources referred them , and also follow the evolution of user interest over time for each of these new pages .this data is a representative sample of the web as this toolbar is used by millions of people across different countries .but we can not use this data in the algorithm itself since it is not available in all countries , and thus we use it only in order to validate our hypothesis about the existence of relatively small set of content sources . using this toolbar data , we randomly sampled 50k pages from the set of new pages that appeared over a period of one week and were visited by at least one user .these pages were distributed over different hosts . 
for each page, we computed its number of daily visits for a period of two weeks after it appeared .then , using this 14-dimensional ( one per day ) feature vector ( scaled to have its maximum value equal to 1 ) , we clustered these pages into 6 clusters by applying the k - means method .let us note that when we tried less clusters , non - ephemeral pages were not assigned to one cluster .finally , we obtained only % non - ephemeral pages .the percentage of new pages that are ephemeral ( and were visited at least once ) for this week is thus 96% , which is really significant .centroids of these clusters are plotted on figure [ fig : centroids-6 ] ( in section [ intro ] we showed only two of them ) . our toolbar logs also contain , in most cases , the referrer page for each recorded visit to a page , i.e. , the source page from which the user came to visit this target page .we extracted all these _ links _ ( users transitions between pages ) found in the logs pointing to one of the ephemeral new pages in our sample over the same period of one week plus several days ( for the most recent pages to become obsolete ) , and obtained links . using these links , we studied the percentage of ephemeral new pages reached depending on the number of content sources .we want to find the smallest set of content sources that allows to reach the most of new pages and thus proceed as follows ( in a greedy manner ) .we first take the source page , which allows to reach the most of pages , then remove it and all covered pages , then select the second one in the same way , and so on .we see on figure [ fig : hubs ] , that only 3k content sources are required to cover 80% of new content , which validates our hypothesis about content sources .interestingly , 42% of these 3k content sources are main pages of web sites , while 44% are category pages , which can be accessed from the main page .so , overall , 86% of them are at most 1 hop away from the main page .now , we need to understand how to effectively find these content sources at scale without relying on toolbar logs , which , as said , are not available world - wide .our crawling algorithm ( described in the next section ) focuses on high - quality ephemeral new content .therefore , even if content sources that produce low quality content or almost no new content at all are given to this algorithm , they will almost never be crawled or crawled just in case much later when some spare resources are available ( see section [ algo ] ) .we are thus , to some extent , only interested in recall when finding such sources here , i.e. , to get most of them , which makes this task much easier to solve in practice .analyzing the set of content sources discovered using toolbar data , we noticed that 86% of content sources that we found are actually at most 1 hop away from the main page of their host , as said .the following procedure will thus yield a relatively small set of content sources that generate most of the ephemeral new content on a given set of hosts . 1 .crawl the main page of each host ( once ) and keep pages linked from it ( all pages 1-hop away from the main page ) ; 2 . 
select the main page , and all found pages older than few days , by using historical data , as content sources ( as new pages are almost never content sources ) .let us note that this procedure is easy to run periodically to refresh the set of content sources , and to find new content sources .it is possible to use url patterns as described in in order to get a better precision , but this optimization is not required here , because our crawling algorithm optimizes precision itself by avoiding to crawl low - quality content sources ( see section [ algo ] ) . the input list of hosts for this procedure can be obtained during a standard web crawling routine by ranking found hosts by their tendency to generate new content .this simple method also fits our usage scenario considering that , as said , only recall is important for us when finding content sources as input for our algorithm .in this section , we assume that we are given a relatively small set of content sources , which regularly generate new content ( see section [ howtofind ] for the procedure to select such a set ) .our current aims are to ( 1 ) find an optimal schedule to recrawl content sources in order to quickly discover high - quality ephemeral new pages , and ( 2 ) understand how to spread resources between crawling new pages and recrawling content sources .first , we analyze this problem theoretically and find an optimal solution .then , we describe an algorithm , which is based on this solution .assume that we are given a set of content sources .note that the rate of new content appearance may differ from source to source .for example , usually there are much more news about politics than about art , and therefore , different categories of a news site generate new content with different rates .let be _ the rate of new links appearance _ on the source , i.e. , the average number of links to new pages , which appear in one second .let us consider an algorithm , which recrawls each source every seconds , discovers links to new pages , and also crawls all new pages found .we want to find a schedule for recrawling content sources , which maximizes the overall quality ( see equation ( [ q ] ) ) , i.e. , our aim is to find optimal values of .suppose that our infrastructure allows us to crawl pages per second ( can be non - integer ) .due to these resource constraints , we have the following restriction : on average , the number of new links linked from a source is equal to , therefore every seconds we have to crawl pages ( the source itself and all new pages found ) .obviously , the optimal solution requires to spend all the resources : and we want to maximize the overall quality ( see equation ( [ q ] ) ) , i.e. , } p_j(\vartriangle t_j ) \rightarrow \max.\ ] ] note that this expression is exactly the average profit per second .content sources may not be equal in quality , i.e. , some content sources may provide users with better content than others .we now assume that , on average , the pages from one content source exhibit the same behavior of the profit decay function and hence substitute by the approximation discussed in section [ formalization ] .we treat the profit and the rate of profit decay as the parameters of each content source .thus , we obtain : here and . 
without loss of generality, we can assume that .we now want to maximize subject to ( [ restriction ] ) .we use the method of lagrange multipliers : where is a lagrange multiplier .note that , so we get : the function increases monotonically for with and .if we are given , then we can find the unique value as shown in figure [ fig : omega ] .one can easily compute using a binary search algorithm .note that bigger values of lead to bigger values of .that is why is a monotonic function of and we can , here also , apply a binary search algorithm ( see algorithm [ algo_disjdecomp ] ) to achieve the condition . let and be , respectively , the lower and upper bounds for .at the first step , we can put and .indeed , is the obvious upper bound for , since in this case we do not crawl any content source . at each step of the algorithm, we consider .for this value of , we recompute intervals .note that if we get for some , then and we never recrawl this content source .after that , if , then we can put , since it is an upper bound .if not , we put .we proceed in this way until we reach the required precision .the value of may be interpreted as the threshold we apply to content sources utility .actually , we can find the minimal crawl rate required for the optimal crawling policy not to completely refuse to crawl content sources with the least utility .we completely solved the optimization problem for the metric suggested in section [ formalization ] , the solution of ( [ solution2 ] ) is theoretically optimal ( we use the name _ echo - based crawler _ for the obtained algorithm , where echo is an abbreviation for ephemeral content holistic ordering ) . however , some further efforts are required in order to make this algorithm practically useful .there are parameters , which we need to estimate for each source : the profit , the rate of profit decay , and the rate of new links appearance . in the following section, we describe a practical crawling algorithm , which is based on our theoretical results .let us describe a concrete algorithm based on the results of section [ theory ] .first , we use the results from section [ howtofind ] to obtain an input set of content sources .then , in order to apply algorithm [ algo_disjdecomp ] for finding an optimal recrawl schedule for these content sources , we need to know for each source its profit , the rate of profit decay , and the rate of new links appearance .we propose to estimate all these values dynamically using the crawling history and search engine logs .since these parameters are constantly changing , we need to periodically re - estimate time intervals ( see algorithm [ algo_disjdecomp ] ) , i.e. , to update the crawling schedule . obviously , the more often we re - estimate , the better results we will obtain , and the choice of this period depends on the computational resources available . thus , we first discuss how to estimate these sources characteristics ( sections [ p&mu ] and [ lambda ] ) and then how to deal with deviations of content sources behavior from our idealistic assumptions to make a practical scheduling algorithm ( section [ scheduling ] ) .for this part , we need search engine logs to analyze the history of clicks on new pages . 
we want to approximate the average cumulative number of clicks depending on the page s age by an exponential function .this approximation for two chosen content sources is shown on figure [ fig : avg_clicks ] .let us consider a cumulative histogram of all clicks for all new pages linked from a content source , with the histogram bin size equals to minutes .let be the number of times all new pages linked from this content source were clicked during the first minutes after they appeared .so , is the average number of times a new page was clicked during the first minutes .we can now use the least squares method , i.e. , we need to find : in other words , we want to find the values of and , that minimize the sum of the squares of the differences between the average cumulative number of clicks and its approximation .it is hard to find an analytical solution of ( [ leastsquares ] ) , but we can use the gradient descent method to solve it : [ gradientdescent ] ; from the production point of view , it is very important to decide , how often to push data from search engine logs to re - estimate the values of and as it is quite an expensive operation .we denote this _ logs push period _ by . in section[ exp ] , we analyze how the choice of affects the performance of the algorithm .the rate of new links appearance may change during the day or during the week .we thus dynamically estimate this rate for each content source . in order to do this ,we use historical data : we consider the number of new links found at each content source during the last crawls .we analyze how different values for affect the performance of the algorithm in section [ exp ] .finally , in order to apply our algorithm , we should solve the following problem : in reality the number of new links that appear on a content source during a fixed time period is random and we can not guarantee that we find exactly new links after each crawl .we can find more links than expected after some recrawl and if we crawl all of them , then we will deviate from the schedule .therefore , we can not both stick to the schedule for the content sources and crawl all new pages .so we propose the two following variants to deal with these new pages , that we can not crawl without deviating from the schedule .* echo - newpages .* in order to avoid missing clicks , we always crawl newly discovered pages right after finding them .if there are no any new pages in the crawl frontier , we try to come back to the schedule .we crawl the content source , which is most behind the schedule , i.e. , with the highest value of , where is time passed after the last crawl of the -th content source . 
** echo - schedule.**we always crawl content sources with intervals and when we have some resources to crawl new pages , we crawl them ( most recently discovered first ) .we compare these two variants experimentally in the next section .we finish this section by presenting a possible production architecture for our algorithm ( see figure [ fig : architecture ] ) to emphasize that it is highly practical to implement it in a production system .initially , a set of content sources with good recall of new pages linked from these sources is created using the procedure described in section [ contentsources ] .this procedure must be run periodically to refresh this set and include new content sources .then , _ scheduler _ finds the optimal crawling schedule for content sources , while _ fetcher _ crawls these sources according to this schedule ._ scheduler _ dynamically estimates the rate of new links appearance for content sources and also estimates the profit decay function using the number of clicks from the search engine logs .given a fixed ranking method , the number of clicks measures the direct effect of the crawling algorithm on the search engine s performance ._ scheduler _ dynamically uses this feedback in order to improve the crawling policy .in this section , we compare our algorithm with some other crawling algorithms on real - world data . since it is impractical and unnecessary to conduct research experiments at a production scale , we selected some sites that provide a representative sample of the web , on which we performed our experiments .we selected the top 100 most visited russian news sites and the top 50 most visited russian blogs using publicly available data from trusted sources .we consider these web - sites to be a representative sample of the web for our task as they produce 5 - 6% out of the new pages ( visited by at least one user ) that appear in this country daily ( we estimated this second value using toolbar logs ) . for each such site, we applied the procedure described in section [ contentsources ] and obtained about 3k content sources .then , we crawled each of these content sources every 10 minutes for a period of 3 weeks ( which is frequent enough to be able to collect all new content appearing on them before it disappears ) .the discovery time of new pages we observed is thus at most delayed by these 10 minutes .we considered all pages found at the first crawl of each source ( each content source was crawled times ) to be old and discovered new pages during these weeks . keeping track ofwhen links to new pages were added and deleted from the content sources , we created a dynamic graph that we use in the following experiments .this graph contains m unique links . additionally , we used search engine logs of a major search engine to collect user clicks for each of the newly discovered pages in our dataset for the same period of weeks plus week for the most recent pages to become obsolete .we observed that % of the pages were clicked at least once during this 4 weeks period .we compare the algorithm suggested in section [ algo ] with several other algorithms .there are no state - of - the - art algorithms for the specific task we discuss in this paper , but one can think of several natural ones : * * breadth - first search ( bfs ) * we crawl content sources sequentially in some fixed random order . 
after crawling each source ,we crawl all new pages linked from this source , which have not been crawled yet .we also compare our algorithm with the following simplifications to understand the importance of 1 ) the holistic crawl ordering and 2 ) the usage of clicks from search engine logs . ** fixed - quota * this algorithm is similar to _echo- + schedule _ , but we use a fixed quota of for recrawling content sources and for crawling new pages that have not been crawled before . * * frequency * this algorithm is also similar to _ echo - schedule _ , but we do not use clicks from search engine logs , i.e. , all content sources have the same quality and content sources are ordered only by their frequency of new pages appearance .we also propose a simplification of our algorithm , based on section [ algo ] , which could be much easier to implement in a production system . * * echo - greedy * we crawl the content source with the highest expected profit , i.e. , with the highest value of , where is the time passed since the last crawl of the content source , is its rate of new links appearance , and is the average profit of new pages linked from the content source . then , we crawl all new pages linked from this source , which have not been crawled yet , and repeat this process . in this section ,we experimentally investigate the influence of parameters on our algorithm s performance and compare the algorithm with the approaches from section [ benchmarks ] on real - world data .we simulated , for each algorithm , the crawl of the dynamic graph described in section [ data ] , using the content sources as seed pages .each algorithm can thus , at each step , decide to either crawl a newly discovered page or to recrawl a content source in order to find new pages . in the following experiments analyzing parameters influence , we used the crawl rate per second .this crawl rate is enough to crawl a significant fraction of the new pages as shown on figure [ fig : lambda ] , but is not too high to let bfs algorithm crawl all new pages ( which is highly unrealistic in a production context ) .we then also use two other crawl rates and per second to investigate the influence of this value .we apply algorithm [ algo_disjdecomp ] to re - estimate values every 30 minutes , which is frequent enough so that smaller intervals have almost no influence on its performance , and which is also realistic in a production context .we also set the bin size used in algorithm 2 to 20 minutes , which is good enough to have robust estimations of and , as , typically , the profit decay function does not change significantly in such a small time period .we do not study in details , here , the influence of these two parameters due to space constraints as , according to the experiments we performed , it is negligible , for values below these realistic choices , in comparison with other parameters . 
besides that, we need default values for profits as we start crawling without knowing anything about the quality of each content source .please , note that we need to use _pessimistic _ default values because we want to avoid crawling low quality sources too frequently , while we do not have enough feedback to have precise estimations .we can not use as according to algorithm [ algo_disjdecomp ] , we do not crawl content sources with zero profit , so , we used some small non - zero value .we compared the two variants of echo - based crawler from section [ scheduling ] with different values for : 1 ) the crawl history size used to estimate the rate of new links appearance discussed in section [ lambda ] ( from 3 to 10 crawls ) , and 2 ) the logs push period , which was described in section [ p&mu ] ( we considered 1h , 12h , 24h , and 1 week ) . interestingly , for both variants we noticed no difference , and we therefore conclude that these parameters do not affect the final quality of the algorithm in our setup . on the other hand ,the logs push period has a really big influence during the warm - up period and the smaller the logs push period , the better the results ( see figure [ fig : profit_7 ] ) .there is nothing interesting to observe for the crawl history size .let us also note that the optimal schedule of echo - based algorithms almost does not recrawl 70% of content sources , which means that it does not spend much resources on low quality content sources ..average dynamic profit for a 1-week window . [ cols="<,^,^,^,^",options="header " , ] then , we took the crawl history of size 7 and the logs push period of 1 hour ( randomly , following the discussion in section [ influence ] ) , and compared echo - based crawlers with other algorithms on three different crawl rates . in order to compare our algorithms during the last week of our observations ( after the warm - up period ) we measured the dynamic profit every two minutes using a time window of one week ( enough to compensate daily trends ) .table [ table1 ] shows average values and their standard deviations .note that we also include the upper bound of algorithms performance that we computed using bfs algorithm with an unbounded amount of resources , which allows to crawl all new pages right after they appear .this upper bound therefore does not depend on the crawl rate and equals of profit per second .echo - newpages shows the best results , which are really close to the upper bound , although the crawl rate used is much smaller than the rate of new links appearance .this means that our algorithm effectively spends its resources and crawls highest quality pages first .note that the smallest crawl rate that allows bfs to reach 99% of the upper bound is 1 per second ( this value is measured , but not present in the table ) , as bfs wastes lots of resources recrawling content sources to find new pages , while echo - newpage and echo - schedule reach this bound with crawl rate 0.2 per second ( see the last column of the table ) . 
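for reference , the dynamic profit reported in this comparison can be computed from a crawl log as in the short sketch below ; it assumes that the profit of a crawled page is the number of clicks it receives after being crawled , which mirrors , but may not exactly match , the metric defined earlier in the paper .

```python
def dynamic_profit(crawl_log, t, window=7 * 24 * 3600):
    """Average profit per second gathered in the window [t - window, t].

    `crawl_log` is a list of (crawl_time, clicks_after_crawl) pairs, one per
    crawled page (an illustrative format); `window` defaults to one week,
    the size used in the comparison above.
    """
    gathered = sum(clicks for crawl_time, clicks in crawl_log
                   if t - window <= crawl_time <= t)
    return gathered / float(window)


# sampled every two minutes over the last week of observations, as above:
# profits = [dynamic_profit(log, t) for t in range(start, end, 120)]
```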
note that the profit of echo - greedy is also high .this fact can be a good motivation for using it in a production system , where ease of implementation is a strong requirement ( as it is much easier to implement ) .first , it only requires a priority queue of content sources rather than a recrawl schedule updated using the binary search method from algorithm [ algo_disjdecomp ] .second , it does not use , so is thus simply the average number of clicks on pages linked from the -th content source , and can therefore be computed easier than by using the gradient descent method from algorithm 2 .let us show a representative example ( at ) demonstrating the advantage of echo - based algorithms over the baselines ( see figure [ fig : dynamic ] ) .one can observe that echo - based algorithms perform the best most of the time .it is interesting to note though that during the night bfs shows better results .it happens as bfs is `` catching up '' by crawling pages , which were crawled by other algorithms earlier .this follows how the dynamic profit is defined : we take into account the profit of the pages , which were crawled during the last 5 hours .we also see that the algorithm with fixed quota for crawl and recrawl perform well during the weekend because less new pages appear during this period and the crawl rate we use is thus enough to crawl practically all good content without additional optimizations .most papers on crawling are devoted to discovering new pages or refreshing already discovered pages . both of these directions are , to some extent , related to the problem we deal with , though can not serve as solutions to it .[ [ refresh - policies ] ] refresh policies + + + + + + + + + + + + + + + + the purpose of refresh policies is to recrawl known pages that have changed in order to keep a search engine s index fresh . usually , such policies are based on some model , which predicts changes on web pages . in pioneering works , the analysis of pages changes was made in the assumption of a time - homogeneous poisson process , i.e. , it was assumed that the pages change rate does not depend on time .however , in , it was noted that there are daily and weekly trends in the pages change rate .then , a history - based estimator , which takes such trends into account , was proposed in . a more sophisticated approach based on machine learningis used in , where the page s content , the degree of observed changes and other features are taken into account . for our specific task ,refresh policies can be used to find links to ephemeral new pages , that appeared on already known pages ( content sources ) .so , pages changes are relevant for us only if new links to such new pages can be found .interestingly , this simplifies the estimation of pages change rate as one can easily understand , given two successive snapshots of a page , that two new links appeared , while it is much harder to know if the page s text changed once or twice .this fact allows us to use a simple estimator for the rate of new links appearance , which reflects timely trends .of course , more sophisticated methods ( e.g. , using machine learning ) , can be applied here , but it was out of focus of the current paper .moreover , our method actually monitors content sources updates to avoid missing new links . 
in this way , our work is more related to the problem of creating an efficient rss feeds reader , which needs to monitor rss feeds updates to avoid missing new postings .the rss reader described in these papers learns the general posting pattern of each rss feed to create an efficient scheduling algorithm that optimizes the retrieval of rss feeds in order to provide timely content to users .this rss reader uses a short description from the rss feed , when presenting an article to users , and there is thus no need for it to crawl these articles .it is not exactly our case as we need to crawl and index newly discovered pages to allow users to access them via the search engine .thus , although rss monitoring policies are somehow similar to the problem of finding ephemeral new pages on content sources , one can not use such policies out of box for our task .the main reason is that we need to spend significant amount of resources for crawling new pages and , if we want to do this in an efficient way , then we need to change the recrawl schedule to take this fact into account. however , rss feeds themselves can be used in our approach as content sources .[ [ discovery - policies ] ] discovery policies + + + + + + + + + + + + + + + + + + the main idea behind discovery policies is to focus breadth - first search on high quality content , i.e. , to prioritize discovered , but not yet crawled pages ( the crawl frontier ) according to some quality measure .some approaches are based on the link structure of the web graph , e.g. , in pages with the largest number of incoming links are crawled first , while pages with largest pagerank are prioritized in . in ,such approaches were compared in their impact on web search effectiveness . however , as pandey and olston discussed in , the correlation between link - based importance measures and user interest is weak , and hence they proposed to use search engine logs of user queries to drive the crawler towards pages with higher potential to be interesting for users . in turn , we follow recent trends and do not rely on the link structure of the web , but use clicks from search engine logs .our crawler discovers and crawls new pages , but there is a principal difference between our approach and previous ones .previous approaches are based on the assumption that the web is relatively deep and therefore , starting from some seed pages , a crawler needs to go deeper , in direction of high - quality pages if possible , to find new pages .we instead argue that one can find most ephemeral new pages that are appearing on the web at a relatively small set of content sources , but that a crawler needs to frequently recrawl these sources to avoid missing short living links to new pages .this observation , on one hand , simplifies the problem but , one the other hand , introduces new challenges to find the right balance between crawling new pages and recrawling content sources .[ [ holistic - crawl - ordering ] ] holistic crawl ordering + + + + + + + + + + + + + + + + + + + + + + + usually , papers about crawling focus either on discovery of new pages or on refreshing already known pages , but the important question of how to divide limited resources between refreshing and discovery policies is usually underestimated .some authors proposed to give a fixed quota to each policy .however , as it follows , e.g. 
, from our analysis ( see section [ theory ] ) , such fixed quotas can be far from optimal .in contrast , our optimization framework simultaneously deals with refreshing and discovery and can thus find an optimal way to share resources .moreover , the problem of making a holistic crawl ordering , i.e. , to unify different policies into a unified strategy , was proposed by oslton and najork as a future direction in their extensive survey on web crawling and we tried to make a step forward in this direction .to the best of our knowledge , the problem of timely and holistic crawling of ephemeral new content is novel . in this paper , we introduce the notion of ephemeral new pages , i.e. , pages that exhibit the user interest pattern shown on figure [ fig : yabar ] , and emphasize the importance of this problem by showing that a significant fraction of the new pages that are appearing on the web are ephemeral .we formalized this problem by proposing to optimize a new quality metric , which measures the ability of an algorithm to solve this specific problem .we showed that most of the ephemeral new content can be found at a relatively small set of content sources and suggested an algorithm for finding such a set .then , we proposed a practical algorithm , which periodically recrawls content sources and crawls newly created pages linked from them as a solution of this problem .this algorithm estimates the quality of content sources using user feedback . finally , we compared this algorithm with other crawling strategies on real - world data and demonstrated that the suggested algorithm shows the best results according to our metric .our theoretical and experimental analysis aims at giving a better insight into the current challenges in crawling the web . in this paper, we predict the expected profit of a new page using two features : the time when this page was discovered by a crawler , and the content source where a link to this page was found .the natural next step , which we leave for future work , is to predict this profit using more features , e.g. , give a higher priority to pages having an anchor text related to the current trends in user queries like in , or to the pages with more incoming links .also , url tokens and its hyperlink context ( anchor text , surrounding text , etc . ) may be useful for such prediction .this will help to prioritize new pages with seemingly higher quality found on the same content source at the same time .brewington , b.e . ,cybenko , g. : how dynamic is the web ?computer networks , vol .33(16 ) , 257276 , 2000 .brewington , b.e . ,cybenko , g. : keeping up with the changing web .computer , vol .33(5 ) , 5258 , 2000 . cho , j. , garcia - molina , h. : effective page refresh policies for web crawlers .acm transactions on database systems , vol .28(4 ) , 2003 .kumar , r. , lang , k. , marlow , c. , tomkins , a. : efficient discovery of authoritative resources . in data engineering , 2008 .liu , m. , cai , r. , zhang , m. , zhang , l. : user browsing behavior - driven web crawling . in proc .cikm conference , 2011 . | nowadays , more and more people use the web as their primary source of up - to - date information . in this context , fast crawling and indexing of newly created web pages has become crucial for search engines , especially because user traffic to a significant fraction of these new pages ( like news , blog and forum posts ) grows really quickly right after they appear , but lasts only for several days . 
in this paper , we study the problem of timely finding and crawling of such _ ephemeral _ new pages ( in terms of user interest ) . traditional crawling policies do not give any particular priority to such pages and may thus crawl them not quickly enough , and even crawl already obsolete content . we thus propose a new metric , well thought out for this task , which takes into account the decrease of user interest for ephemeral pages over time . we show that most ephemeral new pages can be found at a relatively small set of content sources and present a procedure for finding such a set . our idea is to periodically recrawl content sources and crawl newly created pages linked from them , focusing on high - quality ( in terms of user interest ) content . one of the main difficulties here is to divide resources between these two activities in an efficient way . we find the adaptive balance between crawls and recrawls by maximizing the proposed metric . further , we incorporate search engine click logs to give our crawler an insight about the current user demands . efficiency of our approach is finally demonstrated experimentally on real - world data . |
because the dynamics of financial markets are of great importance in economics and econophysics , the dynamics of both stock price and trading volume have been studied for decades as a prerequisite to developing effective investment strategies .econophysics research has found that the distribution of stock price returns exhibits power - law tails and that the price volatility time series has long - term power - law correlations . to better understand these scaling features and correlations , yamasaki _ et al ._ and wang _ et al . _ studied the behavior of price return intervals between volatilities occurring above a given threshold . for both daily and intraday financial records, they found that ( i ) the distribution of the scaled price interval can be approximated by a stretched exponential function , and ( ii ) the sequence of the price return intervals has a long term memory related to the original volatility sequence .the scaling and memory properties of financial records are similar to those found in climate and earthquake data .a feature of the recent history of the stock market has been large price movements associated with high volume . in the black monday stock market crash of 1987 ,the dow jones industrials average ( djia ) plummeted 508 points , losing 22.6 percent of its value in one day , which led to the pathological situation in which the bid price for a stock actually exceeded the ask price . in this financial crashapproximately shares traded , a one - day trading volume three times that of the entire week previous .understanding the precise relationship between price and volume fluctuations has thus been a topic of great interest in recent research .trading volume data in itself contains much information about market dynamics , e.g. , the distribution of the daily traded volume displays power - law tails with an exponent within the lvy stable domain .recently , ren and zhou studied the intraday database of two composite indices and 20 individual indices in the chinese stock markets .they found that the intraday volume recurrence intervals show a power - law scaling , short - term correlations and long - term correlations in each stock index . 
in this studywe analyze u.s .stock market data over a range broad enough to allow us to identify how several financial factors significantly affect scaling properties .we study the daily trading volume volatility return intervals between two successive volume volatilities above a certain threshold , and find a range of power - law distributions broader than that found earlier in price volatility return intervals .we find a unique scaling of the probability density function ( pdf ) for different thresholds .we also perform a detailed analysis of the relation between volume volatility return intervals and four financial stock factors : ( i ) stock lifetime , ( ii ) market capitalization , ( iii ) average trading volume , and ( iv ) average trading value .we find systematically different power - law exponents for when binning stocks according to these four financial factors .similar to that found for the chinese market , we find that in the u.s .stock market the conditional probability distribution , for following a certain interval , demonstrates that volume return intervals are short - term correlated .we also find that the daily volume volatility shows a stronger long - term correlation for sequences of longer lifetime but no clear changes in long - term correlations for different stock size factors such as capitalization , volume , and trading value .in order to obtain a sufficiently long time series , we analyze the daily trading volume volatility of 17,197 stocks listed in the u.s .stock market for at least 350 days .we obtain our data from the center for research in security prices ( crsp ) us stock database , which lists the daily prices of all listed nyse , amex , and nasdaq common stocks , along with basic market indices .the period we study extends from 1 january 1989 to 31 december 2008 , a total of 5042 trading days .for a stock trading volume time series , in a manner similar to stock price analysis , we define two basic measures : volume return and volume volatility .the volume return is defined as the logarithmic change in the successive daily trading volume for each stock , where is the daily trading volume at time .we define volume volatility to be the absolute value of the volume return . in order to compare different stocks , we determine the volume volatility by dividing the absolute returns by their standard deviation , where is the time average for each stock .the threshold is thus measured in units of standard deviation of absolute volume return . for a volume volatility time series, we collect the time intervals between consecutive volatilities above a chosen threshold and construct a new time series of volume return intervals . fig . [ fig1](a ) shows the dependence of on , where is the pdf of the volume volatility return interval time series .obviously , decays more slowly for large than for small . 
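the quantities defined above can be computed directly from a stock's daily volume series ; the following sketch ( assuming numpy ) normalizes the absolute logarithmic volume return by the standard deviation of the return series , which is one reading of the definition in the text , and extracts the return intervals above a threshold q .

```python
import numpy as np

def volume_return_intervals(volume, q):
    """Return intervals between successive normalized volume volatilities
    exceeding the threshold q.

    `volume` is a 1-d array of daily trading volumes for one stock; the
    volatility is taken as the absolute logarithmic volume return divided by
    the standard deviation of the return series (one reading of the
    definition above)."""
    r = np.diff(np.log(volume))        # logarithmic volume return
    vol = np.abs(r) / np.std(r)        # normalized volume volatility
    above = np.flatnonzero(vol > q)    # days on which the volatility exceeds q
    return np.diff(above)              # waiting times between exceedances
```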
for large , has a higher probability of having large interval values because extreme events are rare in a high threshold series . we next determine whether there is any scaling in the distribution by plotting the pdfs of the volume return intervals , scaled with the mean volume return interval , for different thresholds in fig . [ fig1](b ) . we can see that the curves for all five threshold values ( full symbols ) collapse onto a single curve , suggesting the existence of a scaling relation . as the threshold increases , the curve ( rare events ) tends to be truncated due to the limited size of the dataset . the tails of the scaling function can be approximated by a power - law function as shown by the dashed line in fig . [ fig1](b ) , where the tail exponent is . the exponent of the scaled pdfs for is by the least - squares method , which is the same as the unscaled pdf exponent as shown in fig . [ fig1](a ) . the power - law exponents for intraday volume recurrence intervals of several chinese stock indices are from to . our exponents are larger than those in the chinese stock markets . this might be due to differing definitions of volume volatility . in ref . , the volume volatility is defined as intraday volume divided by the average volume at one specific minute of the trading day averaged over all trading days . here we define the volume volatility to be the logarithmic change in the successive daily volumes [ eqs . [ return.eq ] and [ volatility.eq ] ] . for comparison , and using the same approach , fig . [ fig1](c ) and fig . [ fig1](d ) show the analogous results for price volatilities ( see also the studies in refs . ) . note that it is not easy to distinguish between a stretched exponential and a power - law when studying price volatilities , i.e. , the power - law range is small and a stretched exponential could also provide a good fit . in contrast , the pdfs of the volume volatility return intervals display a wide range of power - law tails , which differs from the stretched exponential tail apparent in the price return intervals . our results for volume volatility may suggest that for price volatility is also a power - law , but this could not be verified because the range of the observed power - law regime [ see figs . [ fig1](c ) and [ fig1](d ) ] is more limited than the broad range of scales seen in the volume volatility [ figs . [ fig1](a ) and [ fig1](b ) ] . the difference between the power - law and stretched exponential behavior of may be related , respectively , to the existence or non - existence of non - linearity , represented in the multifractality of the time series . when non - linear correlations appear in a time record , bogachev _ et al . _ showed that is a power - law .
on the other hand , when non - linear correlations do not exist and only linear correlation exists , bunde _ et al . _ found stretched exponential behavior . a comparison with the shuffled records allows us to see how the empirical records differ from randomized records . we shuffle the volume volatility time series to make a new uncorrelated sequence of volatility , and then collect the time intervals above a given threshold to obtain synthetic random control records . the curve that fits the shuffled records [ the open symbols in fig . [ fig1](b ) ] is an exponential function , , and forms a poisson distribution . a poisson distribution indicates no correlation in shuffled volatility data , but the empirical records suggest strong correlations in the volatility . we study the relation between the scaled pdfs as a function of for four financial factors : ( a ) stock lifetime , ( b ) market capitalization , ( c ) mean volume , and ( d ) mean trading value for threshold . for higher values , we do not have sufficient data for conclusive results . in fig . [ fig2 ] , we plot the scaled pdfs for these four factors . the volume return intervals characterize the distribution of large volume movements . a high probability of having a large volume return interval suggests a correlation in volume volatility , because small volatilities are followed by small volatilities and the time interval between two large volatilities becomes relatively longer than in random records . in order to characterize how these four factors affect the distribution of volume return intervals , we divide all stocks into four bins for each factor . in fig . [ fig2](a ) , the probability that will be large is greater in the bin with 15 - year - old stocks ( triangles ) than in the bins of younger stocks . this indicates that small volatilities ( below the threshold ) tend to follow small volatilities and that the time intervals between large volatilities in the bin of 15 - year - old stocks are larger than the time intervals in the bin of 5 - year - old stocks ( dots ) . this also suggests that the volume volatility time records of older stocks are more auto - correlated than those of younger stocks . the decay parameters represented by the power - law exponents are quite different : for the shortest lifetime bin and for the longest lifetime bin . this significant difference might be caused by differences in autocorrelation in these series . in figs . [ fig2](b ) , [ fig2](c ) , and [ fig2](d ) , we show similar tendencies for stock bins with different capitalizations , mean volumes , and mean trading values . trading value is defined as stock price multiplied by transaction volume . for each stock , we designate the lifetime average of capitalization , volume , and trading value as performance indices . for example , the power - law exponents of the pdfs , , increase as the capitalization becomes larger [ see fig . [ fig2](b ) ] . to clarify the picture , we divide all stocks into different subsets and study the behavior of the power - law exponent with regard to these four factors . in fig . [ fig3](a ) , stocks are sorted into 10 subsets , from 508 days ( 2 years ) to 5080 days ( 10 years ) . we fit the power - law tails of the volume return intervals for each subset and plot the exponent versus the lifetime of the stocks .
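the fitting procedure used here can be sketched as follows : scale the intervals by their mean , estimate the pdf on logarithmic bins , and fit the tail by least squares in log - log coordinates . the binning and the tail cut below are illustrative choices , not the exact ones used in the paper .

```python
import numpy as np

def tail_exponent(intervals, bins=40, tail_quantile=0.5):
    """Least-squares power-law fit to the tail of the scaled interval PDF.

    Scales the intervals by their mean, builds a density histogram on
    logarithmic bins and fits log10(pdf) against log10(tau/<tau>) above the
    given quantile.  Returns the estimated tail exponent (a positive number).
    """
    x = np.asarray(intervals, dtype=float)
    x = x / x.mean()
    edges = np.logspace(np.log10(x.min()), np.log10(x.max()), bins)
    pdf, edges = np.histogram(x, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    mask = (pdf > 0) & (centers > np.quantile(x, tail_quantile))
    slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(pdf[mask]), 1)
    return -slope
```

the same routine can be applied to the pooled intervals of each lifetime , capitalization , volume , or trading - value bin to obtain the per - bin exponents discussed next .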
in fig . [ fig3](a ) , we can observe a systematic trend with stock lifetime . it is seen that older stock subsets have a smaller exponent , which indicates a stronger correlation in these series . similarly , we sort the stocks by capitalization , mean volume , and mean trading value , as shown in figs . [ fig3](b ) , [ fig3](c ) , and [ fig3](d ) . it is seen that decreases with increasing values of all three factors but seems to become constant for large values of capitalization , mean volume , and mean trading value . since all factors similarly affect the scaling of the pdf , , we now determine how much these factors are correlated . to study the relations between different stock bins , we plot trading value versus capitalization , mean volume versus capitalization , and mean trading value versus mean volume for all the stocks , as shown in fig . [ fig4 ] . we see that larger capitalization stocks tend to have a larger trading volume and a larger trading value , which is consistent with figs . [ fig1](b ) , [ fig1](c ) , and [ fig1](d ) . the correlation coefficients between trading value and capitalization , mean volume and capitalization , and trading value and volume are 0.62 , 0.55 , and 0.78 , respectively . the correlation coefficients are high because these capitalization , volume , and trading value factors are all affected by firm size . our analyses do not , however , show a significant relationship between stock lifetime and its trading value , capitalization , and mean volume , and the correlation coefficients are all . we characterize a sequence of volume return intervals in terms of the autocorrelations in the time series . if the volume return interval series is uncorrelated and the intervals are independent of each other , their sequence is determined only by the probability distribution . on the other hand , if the series is auto - correlated , the preceding value will have a memory effect on the values following it in the sequence of volume volatility return intervals . in order to investigate whether short - term memory is present , we study the conditional pdf , , which is the probability of finding a volume return interval immediately after an interval of size . in records without memory , should be identical to and independent of . otherwise , should depend on . because the statistics for of a single stock are of poor quality , we study for a range of . the entire dataset is partitioned into eight equal - sized subsets , , with intervals of increasing size . figure [ fig5 ] shows the pdfs for , i.e. , small interval size and large interval size for different . the probability of finding large is larger in ( open symbols ) than in ( full symbols ) , while the probability of finding small is larger in than in . thus large tends to be followed by large , and vice versa , which indicates short - term memory in the volume return interval sequence . moreover , note that in the same subset for different thresholds fall onto a single curve , which indicates the existence of a unique scaling for the conditional pdfs as well . similar results were found for the volume volatility of the chinese markets and for price volatilities . in previous studies , the price volatility series was shown to have long - term correlations . using a similar approach , we test whether the volume volatility sequence also possesses long - term correlations . to answer this question , we employ the detrended fluctuation analysis ( dfa ) method to further reveal memory effects in the volume volatility series .
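before turning to the dfa analysis , the construction of the conditional pdf just described can be sketched as follows ; the split into eight equal - sized subsets follows the text , and numpy is assumed .

```python
import numpy as np

def conditional_interval_samples(intervals, n_subsets=8):
    """Group each interval by the size of the interval immediately preceding it.

    Consecutive pairs (tau0, tau) are sorted by tau0 and split into n_subsets
    equal-sized groups Q_1 .. Q_n; the returned list contains, for each group,
    the sample of intervals tau that follow a tau0 in that group.  Comparing
    the PDFs of the first and last group probes the short-term memory
    discussed above."""
    intervals = np.asarray(intervals)
    tau0, tau = intervals[:-1], intervals[1:]
    order = np.argsort(tau0, kind='stable')
    followers = tau[order]                       # taus sorted by their tau0
    return np.array_split(followers, n_subsets)  # Q_1 (smallest tau0) .. Q_n
```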
using the dfa method ,we divide an integrated time series into boxes of equal length and fit a least squares line in each box .next we compute the root - mean - square fluctuation of the detrended time series within a window of points and determine the correlation exponent from the scaling function , where $ ] .the correlation exponent characterizes the autocorrelation in the sequence .the time series has a long - term memory and a positive correlation if the exponent factor , indicating that large values tend to follow large values and small values tend to follow small values .the time series is uncorrelated if and anti - correlated if . using the dfa method, we analyze the price volatility and volume volatility time series by plotting in bins the relation between correlation exponent and the four financial factors , including stock lifetime , market capitalization , mean trading volume , and mean trading value . all the price volatility and volume volatility correlation exponents are significantly larger than 0.5 , suggesting the presence of long - term memory in both price volatility sequences and volume volatility sequences . in all of the plots ,the price volatility series shows a stronger long - term correlation than the volume volatility series .moreover , as shown in fig .[ fig6](a ) , on average increases for the stocks with a lifetime ranging from 350 days to 3800 days ( about 15 years ) , and then shows a slight decrease , suggesting that long - lasting stocks tend to have a persistent price and volume movement on large scales .the increasing exponent indicates that the volume volatility of older stocks is more correlated than that of younger stocks .this is consistent with the indication in fig .[ fig2](a ) that the volume volatility of older stocks are more auto - correlated .figures [ fig6](b ) , [ fig6](c ) , and [ fig6](d ) show that there is no systematic tendency relation between and market capitalization , trading volume , and trading value .we have shown the scaling properties and memory effect of volume volatility return intervals in large stock records of the u.s .the scaled distribution of volume volatility return intervals displays unique power - law tails for different thresholds .we also find different power - law exponents of for the four essential stock factors : stock lifetime , market capitalization , average trading volume , and average trading value .these different exponents may be related to long - term correlations in the interval series .significantly , the daily volume volatility exhibits long - term correlations , similar to that found for price volatility .the conditional probability , for following a certain interval , indicates that volume return intervals are short - term correlated .g. w. schwert , j. financ .* 44 * , 1115 ( 1989 ) ; k. chan , k. c. chan , and g. a. karolyi , rev .financ . stud . * 4 * , 657 ( 1991 ) ; t. bollerslev , r. y. chou , and k. f. kroner , j. econometr . * 52 * , 5 ( 1992 ) ; a. r. gallant , p. e. rossi , and g. tauchen , rev .financ . stud . * 5 * , 199 ( 1992 ) ; b. le baron , j. business * 65 * , 199 ( 1992 ) . | we study the daily trading volume volatility of 17,197 stocks in the u.s . stock markets during the period 19892008 and analyze the time return intervals between volume volatilities above a given threshold . for different thresholds , the probability density function scales with mean interval as and the tails of the scaling function can be well approximated by a power - law . 
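for completeness , the dfa procedure described at the beginning of this section can be written compactly as below ; the choice of box sizes is an illustrative one .

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """Detrended fluctuation analysis of a series x.

    Integrates the mean-removed series, splits the profile into boxes of
    length n, removes a least-squares linear trend in each box, and fits
    F(n) ~ n**alpha on log-log scales.  Returns the estimated exponent alpha.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                      # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(np.log10(4), np.log10(len(x) // 4),
                                       20).astype(int))
    fluct = []
    for n in scales:
        n_boxes = len(y) // n
        segs = y[:n_boxes * n].reshape(n_boxes, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            coeff = np.polyfit(t, seg, 1)            # linear detrending
            rms.append(np.mean((seg - np.polyval(coeff, t)) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    alpha, _ = np.polyfit(np.log10(scales), np.log10(fluct), 1)
    return alpha
```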
we also study the relation between the form of the distribution function and several financial factors : stock lifetime , market capitalization , volume , and trading value . we find a systematic tendency of associated with these factors , suggesting a multi - scaling feature in the volume return intervals . we analyze the conditional probability for following a certain interval , and find that depends on such that immediately following a short / long return interval a second short / long return interval tends to occur . we also find indications that there is a long - term correlation in the daily volume volatility . we compare our results to those found earlier for price volatility . |
the objective of this paper is to set the foundation for a research program on quantum knots[multiblock footnote omitted ] . _ for simplicity of exposition , we will throughout this paper frequently use the term `` knot '' to mean either a knot or a link . _in part 1 of this paper , we create a formal system consisting of * a graded set of symbol strings , called * knot mosaics * , and * a graded subgroup , called the * knot mosaic ambient group * , of the group of all permutations of the set of knot mosaics .we conjecture that the formal system fully captures the entire structure of tame knot theory .three examples of knot mosaics are given below : {cccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\end{array } ] , and {cccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\end{array } \left ( n\overset{\left ( 0,1\right ) } { \longleftrightarrow}n^{\prime } \right ) \left ( \begin{array } [ c]{cccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in 
, width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\end{array } \right ) = \begin{array } [ c]{cccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut08.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut07.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\end{array } \left ( n\overset{\left ( 0,1\right ) } { \longleftrightarrow}n^{\prime } \right ) \left ( \begin{array } [ c]{cccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut08.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut07.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\end{array } \right ) \begin{array } [ c]{cccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps } } & { 
\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\end{array } n\overset{\left ( 0,1\right ) } { \longleftrightarrow}n^{\prime}\left ( n\overset{\left ( 0,1\right ) } { \longleftrightarrow}n^{\prime } \right ) \left ( \begin{array } [ c]{cccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\end{array } \right ) \begin{array } [ c]{cccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\end{array } n\overset{\left ( 
0,1\right ) } { \longleftrightarrow}n^{\prime} ] denotes either both or any one of the two tiles {ut01.ps}} ] . *notational convention 2 .* _ it is to be understood that each mosaic move _ _ denotes either all or any one ( depending on context ) of the moves obtained by simultaneously rotating _ _ and _ _ about their respective centers by _ _ _ , _ _ _ _ , _ _ _ _ , or _ _ _ degrees . _ for example, {cc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\end{array } \overset{\left ( 0,1\right ) } { \longleftrightarrow}\begin{array } [ c]{cc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut06.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut05.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\end{array}\ ] ] represents either all or any one ( depending on context ) of the following four 2-moves : {cc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\end{array } \overset{\left ( 0,1\right ) } { \longleftrightarrow}\begin{array } [ c]{cc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut06.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut05.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\end{array } \qquad\qquad\qquad\begin{array } [ c]{cc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps}}\end{array } \overset{\left ( 0,1\right ) } { \longleftrightarrow}\begin{array } [ c]{cc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut06.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut05.ps}}\end{array } \\ & \\ & \begin{array } [ c]{cc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut04.ps}}\end{array } \overset{\left ( 0,1\right ) } { \longleftrightarrow}\begin{array } [ c]{cc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut02.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut05.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut06.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\end{array } \qquad\qquad\qquad\begin{array } [ c]{cc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut03.ps } } & { \includegraphics [ 
height=0.3269 in , width=0.3269 in ] { ut01.ps}}\end{array } \overset{\left ( 0,1\right ) } { \longleftrightarrow}\begin{array } [ c]{cc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut05.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut01.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut00.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut06.ps}}\end{array}\end{aligned}\ ] ] as our final notational convention , we have : * notational convention 3 .* _ finally , we omit the location superscript _ , and write to denote either all or any one ( depending on context ) of the possible locations .* caveat : * _ we caution the reader that throughout the remainder of this paper , we will be using all of the above nondeterministic notational conventions . _ as an analog to the planar isotopy moves for standard knot diagrams , we define for mosaics the 11 * mosaic planar isotopy moves * given below:{c} \end{tabular } \ \ \ \ } \ ] ] {l} \end{tabular } \ \ \ } \qquad\underset{p_{3}}{\begin{tabular } [ c]{l} \end{tabular } \ \ \ } \ ] ] {l} \end{tabular } \ \ \ } \qquad\underset{p_{5}}{\begin{tabular } [ c]{l} \end{tabular } \ \ \ } \ ] ] {l} \end{tabular } \ \ \ } \qquad\underset{p_{7}}{\begin{tabular } [ c]{l} \end{tabular } \ \ \ } \ ] ] {l} \end{tabular } \ \ \ } \qquad\underset{p_{9}}{\begin{tabular } [ c]{l} \end{tabular } \ \ \ } \ ] ] {l} \end{tabular } \ \ \ } \qquad\underset{p_{11}}{\begin{tabular } [ c]{l} \end{tabular } \ \ \ } \ ] ] the above set of 11 planar isotopy moves was found by an exhaustive enumeration of all 2-mosaic moves corresponding to topological planar isotopy moves . the completeness of this set of moves , i.e. , that every planar isotopy moves for mosaics is a composition of a finite sequence of the above planar isotopy moves , is addressed in section 2.7 of this paper . as an analog to the reidemeister moves for standard knot diagrams , we create for mosaics the * mosaic reidemeister moves*. the * mosaic reidemeister 1 moves * are the following : {l} \end{tabular } \ \ \ } \qquad\underset{r_{1}^{\prime}}{\begin{tabular } [ c]{l} \end{tabular } \ \ \ } \ ] ] and the * mosaic reidemeister 2 moves * are given below : {l} \end{tabular } \ \ \ } \qquad\underset{r_{2}^{\prime}}{\begin{tabular } [ c]{l} \end{tabular } \ \ \ } \\ & \\ & \underset{r_{2}^{\prime\prime}}{\begin{tabular } [ c]{l} \end{tabular } \ \ \ } \qquad\underset{r_{2}^{\prime\prime\prime}}{\begin{tabular } [ c]{l} \end{tabular } \ \ \ } \ ] ] for describing the mosaic reidemeister 3 moves , we will use for simplicity of exposition the following two additional notational conventions : * notational convention 4 . *_ _ we will make use of each of the following tiles__{ul.ps}}\quad{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ur.ps}}\quad{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ll.ps}}\quad{\includegraphics [ height=0.3269 in , width=0.3269 in ] { lr.ps}}\text { , } \ ] ] _ also called * nondeterministic tiles * , to denote either one of two possible tiles . 
_ for example, the nondeterministic tile {ul.ps}}\begin{array } [ c]{ccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ula.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut06.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { iut03.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut05.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { iut05.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { llb.ps}}\end{array } \longleftrightarrow\begin{array } [ c]{ccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ura.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { iut07.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut05.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { iut01.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut06.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { lrb.ps}}\end{array } \begin{array } [ c]{ccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ula.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut06.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { iut03.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut05.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { iut05.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { llb.ps}}\end{array } \longleftrightarrow\begin{array } [ c]{ccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ura.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { iut07.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut05.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut10.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { iut01.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut06.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { lrb.ps}}\end{array } ] {l} \end{tabular } \ } \qquad\underset{r_{3}^{\prime\prime\prime}}{\begin{tabular } [ c]{l} \end{tabular } \ } \begin{array } [ c]{ccc}{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ula.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut06.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { iut03.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut09.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { ut05.ps}}\\{\includegraphics [ height=0.3269 in , width=0.3269 in ] { iut05.ps } } & { \includegraphics [ height=0.3269 in , width=0.3269 in ] { 
[ the mosaic planar isotopy moves and reidemeister moves are illustrated as pairs of tile patterns interchanged by each move ; tile figures omitted ]

as noted in a previous section , all mosaic moves are permutations on the set of mosaics . in particular , the planar isotopy moves and the reidemeister moves lie in the permutation group of the set of mosaics . it easily follows that the planar isotopy moves and the reidemeister moves also lie in the group of all permutations of the set of knot mosaics . hence , we can make the following definition : we define the ( * knot mosaic * ) * ambient group * as the group of all permutations of the set of knot -mosaics generated by the mosaic planar isotopy and the mosaic reidemeister moves . it follows from a previous proposition that the mosaic planar isotopy moves and reidemeister moves , as permutations , are each the product of disjoint transpositions . the completeness of the set of planar isotopy and reidemeister moves is addressed in section 2.7 of this paper . we now are prepared to define the analog of knot type for mosaics . we define the * mosaic injection *
\[ \begin{array}{rrr} \iota : \mathbb{m}^{(n)} & \longrightarrow & \mathbb{m}^{(n+1)} \\ m^{(n)} & \longmapsto & m^{(n+1)} \end{array} \]
as
\[ m_{ij}^{(n+1)} = \left\{ \begin{array}{cl} m_{ij}^{(n)} & \text{if } 0 \leq i , j < n \\ \text{blank tile} & \text{otherwise . } \end{array} \right. \]
thus , for example , the injection carries a knot 4-mosaic to the knot 5-mosaic obtained by appending a column of blank tiles on the right and a row of blank tiles along the bottom [ tile figures omitted ] . we now can explicitly define the * graded system * that was mentioned in the introduction .
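computationally , the mosaic injection is just zero - padding with blank tiles . the following python sketch assumes an n - mosaic is stored as an n x n integer matrix of tile indices , with 0 denoting the blank tile ; this encoding is only an illustrative convention , not part of the formal system .

def inject(mosaic):
    """map an n-mosaic to an (n+1)-mosaic by padding with blank tiles (index 0)."""
    n = len(mosaic)
    bigger = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(n):
            bigger[i][j] = mosaic[i][j]
    return bigger

def inject_k_times(mosaic, k):
    """k-fold composition of the injection, used when comparing mosaics of different sizes."""
    for _ in range(k):
        mosaic = inject(mosaic)
    return mosaic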
the symbol denotes the directed system of sets and denotes the directed system of permutation groups . thus , two -mosaics and are said to be of the * same knot -type * , written , provided there is an element of the ambient isotopy group which transforms into . an -mosaic and an -mosaic are said to be of the * same knot mosaic type * , written , provided there exists a non - negative integer such that , if , then , or if , then , where , for each non - negative integer , denotes the -fold composition . in the introduction of this paper , we conjecture that the formal ( re - writing ) system of knot mosaics fully captures the entire structure of tame knot theory . we now explain in greater detail what is meant by this conjecture . let denote the set of integers , and the two dimensional euclidean plane . let denote the * square tiling * of induced by the sublattice of , and for each , in , let denote the subregion of defined by . let be an arbitrary tame knot in -space . a knot diagram of , i.e. , a regular projection , is said to be a * mosaic knot diagram * if the image under of lies in the first quadrant of , and if , for all , in , the pair is identical with the cell pair on one of the faces of the 11 tiles , , , . clearly , using standard arguments in knot theory , one can prove that every tame knot ( or link ) has a mosaic knot diagram . each mosaic knot diagram of a knot can naturally be identified with a knot -mosaic , where is the smallest positive integer such that lies in the region . moreover , every knot -mosaic can naturally be identified with the diagram of a knot . we call this associated knot mosaic a * ( knot ) mosaic representative * of the original knot . this leads us to the following conjecture : let and be two tame knots ( or links ) , and let and be two arbitrarily chosen mosaic representatives of and , respectively . then and are of the same knot type if and only if the representative mosaics and are of the same knot mosaic type . in other words , knot mosaic type is a complete invariant of tame knots . our sole purpose in creating the formal system of knot mosaics was to create a framework within which we can explicitly define what is meant by a quantum knot . we are finally in a position to do so . we begin by assigning a left - to - right * linear ordering * , denoted by ` ' , to the 11 mosaic tiles as indicated below :
\[ t_{0} < t_{1} < t_{2} < t_{3} < t_{4} < t_{5} < t_{6} < t_{7} < t_{8} < t_{9} < t_{10} . \]
the natural log of is , where denotes an arbitrary integer . hence , the natural log , , of the unitary transformation is
\[ \left ( \begin{array}{cc} \left ( \begin{array}{cccc} ( 2s_{1}+1 ) ( \sigma_{0}-\sigma_{1} ) & o & \ldots & o \\ o & ( 2s_{2}+1 ) ( \sigma_{0}-\sigma_{1} ) & \ldots & o \\ \vdots & \vdots & \ddots & \vdots \\ o & o & \ldots & ( 2s_{\ell}+1 ) ( \sigma_{0}-\sigma_{1} ) \end{array} \right ) & o \\ o & o_{ ( n-2\ell ) \times ( n-2\ell ) } \end{array} \right ) , \]
so that = 0 for all and for all , where denotes the commutator of operators and . the remaining half of this section is devoted to finding an answer to the following question : * question : * _ how do we find observables which are quantum knot invariants ? _ one answer to this question is the following theorem , which is an almost immediate consequence of the definition of a minimum invariant subspace of : let be a quantum knot system , and let be a decomposition of the representation into irreducible representations of the ambient group . then , for each , the projection operator for the subspace is an observable which is a quantum knot -invariant . here is yet another way of finding quantum knot invariants : let be a quantum knot system , and let be an observable on the hilbert space . let be the stabilizer subgroup for , i.e. , ; then the observable is a quantum knot -invariant , where denotes a sum over a complete set of coset representatives for the stabilizer subgroup of the ambient group . the observable is obviously a quantum knot -invariant , since for all . if we let denote the order of , and if we let denote a complete set of coset representatives of the stabilizer subgroup , then is also a quantum knot invariant . we end this section with an example of a quantum knot invariant : the following observable is an example of a quantum knot -invariant :
[ the observable is displayed as a sum of two projectors , each of the form | k > < k | for a particular knot 4-mosaic ; tile figures omitted ]
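the group - averaging construction above ( summing over the ambient group , or over coset representatives of a stabilizer ) is easy to illustrate numerically . the following python sketch uses a toy permutation group acting on a small hilbert space as a stand - in for the ambient group , which is an illustrative assumption only ; it builds the averaged observable and checks that it commutes with every group element .

import itertools
import numpy as np

def permutation_matrix(perm):
    """unitary matrix permuting the standard basis vectors according to perm."""
    d = len(perm)
    u = np.zeros((d, d))
    for i, j in enumerate(perm):
        u[j, i] = 1.0
    return u

# toy "ambient group": all permutations of a 3-dimensional basis
group = [permutation_matrix(p) for p in itertools.permutations(range(3))]

# an arbitrary hermitian observable on the same space
rng = np.random.default_rng(0)
a = rng.normal(size=(3, 3))
omega = (a + a.T) / 2

# group-averaged observable: sum over g of u(g) omega u(g)^dagger
omega_inv = sum(u @ omega @ u.T for u in group)

# the averaged observable commutes with every u(g), so it is group-invariant
assert all(np.allclose(u @ omega_inv, omega_inv @ u) for u in group)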
[ figure : the knot 3-mosaics are listed and labelled by their row - wise tile - index strings , e.g. k_{1} = 000 - 021 - 034 , k_{2} = 000 - 210 - 349 , k_{3} = 000 - 251 - 354 , ... , k_{21} = 251 - 624 - 340 ; tile figures omitted ]

so far , we have only discussed unoriented objects in this paper , e.g. , unoriented mosaics , unoriented knot mosaics , unoriented quantum knots , and so forth . in this appendix , we briefly discuss how all these unoriented objects can be transformed into oriented ones . [ the oriented tiles are pictured here ; tile figures omitted ] these are called ( * oriented * ) * tiles * . we often will also denote these tiles respectively by the symbols . moreover , we will frequently omit the superscript ` ' ( standing for ` oriented ' ) when it can be understood from context . let be a positive integer . we define an * ( oriented ) -mosaic * as an matrix of ( oriented ) tiles with rows and columns indexed from to . we let also denote the * set of oriented -mosaics * . two examples of oriented -mosaics are shown below : [ tile figures omitted ]
a * connection point * of an oriented tile is defined as the midpoint of a tile edge which is either the beginning or ending point of an oriented curve drawn on the tile . we define the * sign * of a connection point as * minus * or * plus * according to whether it is the beginning point or the ending point of the oriented tile curve . we say that two tiles in an oriented mosaic are * contiguous * if they lie immediately next to each other in either the same row or the same column . an oriented tile within an oriented mosaic is said to be * suitably connected * if each of its connection points touches a connection point of opposite sign of a contiguous tile . an * oriented knot -mosaic * is an oriented mosaic in which all tile connection points are suitably connected . we also let denote the subset of of all oriented -mosaic knots . | in this paper , we give a precise and workable definition of a * quantum knot system * , the states of which are called * quantum knots * . this definition can be viewed as a blueprint for the construction of an actual physical quantum system . moreover , this definition of a quantum knot system is intended to represent the `` quantum embodiment '' of a closed knotted physical piece of rope . a quantum knot , as a state of this system , represents the state of such a knotted closed piece of rope , i.e. , the particular spatial configuration of the knot tied in the rope . associated with a quantum knot system is a group of unitary transformations , called the * ambient group * , which represents all possible ways of moving the rope around ( without cutting the rope , and without letting the rope pass through itself ) . of course , unlike a classical closed piece of rope , a quantum knot can exhibit non - classical behavior , such as quantum superposition and quantum entanglement . this raises some interesting and puzzling questions about the relation between topological and quantum entanglement . the * knot type * of a quantum knot is simply the orbit of the quantum knot under the action of the ambient group . we investigate quantum observables which are invariants of quantum knot type . we also study the hamiltonians associated with the generators of the ambient group , and briefly look at the quantum tunneling of overcrossings into undercrossings . a basic building block in this paper is a * mosaic system * which is a formal ( rewriting ) system of symbol strings . we conjecture that this formal system fully captures in an axiomatic way all of the properties of tame knot theory .
the one - dimensional totally asymmetric simple exclusion process ( tasep ) has been studied for many years . it is a special type of asep , which represents one of the basic models for studying the non - equilibrium behavior of particle transport along one - dimensional lattices . asep was first introduced in for the description of ribosome dynamics inside living cells . now , asep has also been used to simulate many physical processes including surface diffusion , traffic models , and molecular motors , etc . tasep , as a special case of asep , has two defining characteristics . one is that the model is discrete , which means that it is studied in discrete time intervals . the other is that the particles in the lattices can only move in one direction . the analytical solutions under open boundary conditions have been obtained in . under open boundary conditions , the solutions yield phase diagrams with three phases . at small values of the injection rate ( and ) , the system is found in a low - density entry - limited phase with
\[ \rho_{bulk} = \alpha , \qquad j = \alpha ( 1 - \alpha ) , \]
where , and are the densities at the entrance , exit and the bulk of the lattice far away from the boundaries , respectively , and denotes the flux . at small values of the extraction rate ( and ) , the system is in a high - density exit - limited phase with
\[ \rho_{bulk} = 1 - \beta , \qquad j = \beta ( 1 - \beta ) . \]
at large values of the injection ( ) and extraction ( ) rates the system is in a maximal current phase with
\[ \rho_{bulk} = 1/2 , \qquad j = 1/4 . \]
a large number of extensions of the tasep have been investigated , such as tasep with hierarchical long - range connections , two - lane situations , two - speed tasep , the effect of defect locations , particle - dependent hopping rates , and so forth . recently , j. brankov and e. pronina studied an asep with two chains in the middle of the filament . they supposed that a particle chooses to move into these two chains with equal probability , 0.5 , and that the two chains have the same length . then in 2007 , yao - ming yuan and rui jiang investigated a tasep with a shortcut in the middle . they introduced a probability for a particle to jump through the shortcut when it faces the bifurcation , and they set the length of the shortcut to zero . unfortunately , they did not state the rules at the bifurcation clearly . in this paper , we will investigate tasep with one or two shortcuts . the length of each shortcut is also assumed to be zero . for example , if the filament on which the motor moves is twisted as in figure 1(a ) , a motor may have a chance to jump directly from site to site , as shown in figure 1(b ) . we first consider the basic model , i.e. , there is only one shortcut along the filament , and it is natural for us to divide the whole filament into three segments . as shown in figure 1(b ) , a molecular motor will face a choice of whether to jump through the shortcut or to move ordinarily through segment 2 when it reaches site . an important problem is that the particle at site may have to wait to go through the shortcut if there is also another particle occupying site . which of them goes first should be determined clearly . after the basic model is investigated , we turn to our advanced model 1 . in advanced model 1 , there are two shortcuts which begin and end at different sites . the two shortcuts lie one after the other along the filament , that is to say , shortcut 2 begins after shortcut 1 ends , as shown in figure 2 .
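before specifying the shortcut rules in detail , it may help to recall how the plain open - boundary tasep is usually simulated . the following python sketch uses random - sequential updates with the illustrative parameter values quoted above ; it is a minimal reference implementation under our own assumptions , not the code used for the figures in this paper .

import random

def tasep_open(L=200, alpha=0.3, beta=0.8, sweeps=20000, seed=1):
    """random-sequential open-boundary tasep; returns the time-averaged bulk density."""
    rng = random.Random(seed)
    tau = [0] * L                                   # tau[i] = 1 if site i is occupied
    bulk_sum, samples = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L + 1):                      # one sweep = L+1 random update attempts
            i = rng.randrange(L + 1)
            if i == 0:                              # injection at the left boundary
                if tau[0] == 0 and rng.random() < alpha:
                    tau[0] = 1
            elif i == L:                            # extraction at the right boundary
                if tau[L - 1] == 1 and rng.random() < beta:
                    tau[L - 1] = 0
            elif tau[i - 1] == 1 and tau[i] == 0:   # bulk hop from site i-1 to site i
                tau[i - 1], tau[i] = 0, 1
        if sweep > sweeps // 2:                     # measure after a burn-in period
            bulk_sum += sum(tau[L // 4: 3 * L // 4]) / (L // 2)
            samples += 1
    return bulk_sum / samples

# with alpha=0.3 and beta=0.8 the measured bulk density is close to alpha,
# as expected in the low-density phase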
considering a different placement of the shortcuts , we also have advanced model 2 , as shown in figure 3 . in advanced model 2 there are also two shortcuts , but they begin at the same site and end at different sites , which means that when a motor moves into site , it will choose whether to pass through shortcut 1 , through shortcut 2 , or to go ordinarily through segment 2 . in the following , we first give a detailed description of the different models , then discuss the phase situations of the corresponding segments theoretically , followed by numerical simulations . first , we divide the lattices into three segments , as shown in figure 1(b ) . segments 1 , 2 and 3 start from site and end at site , respectively . in this model , all the particles traveling in the lattices are identical to each other . at time , we may suppose that the lattices are empty . as time goes on , a particle will enter site 1 . this particle will pass continuously through the whole filament . given a site , in an infinitesimal time interval , if , a particle is inserted with probability , provided the site is empty ; if and there is a particle in it , then the particle in site l is extracted with probability ; and if and site is occupied , the particle in site will move into site with probability , providing site is empty . however , if a particle is in site or , we have to clarify the way it moves . although yao - ming yuan and rui jiang have discussed the method of choice at site , the situation at site has not been studied clearly . it is possible that when both site and site have particles , a collision may be produced at site . if site has a particle , then the particle will choose its way as follows : if site and site are both occupied , then the particle in site does not move . if site is occupied but site is empty , then the particle in site will move into site with probability . if site is occupied but site is empty , then : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if both site and site are empty , then the particle in site will move into site with probability , and : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability .
if site is occupied but site is empty , then the particle in site will move into site with probability . if site is occupied but site is empty , then : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if both site and site are empty , then : if site is empty , the particle in site will move into site with probability , and it will move into site with probability ; if site is occupied , the particle in site will move into site with probability , and it will move into site with probability . if site has a particle , and site is empty , then the particle will choose its way as follows : if site is empty , then the particle in site will move into site with probability ; if site is occupied , then the particle in site will move into site with probability . the character of this model is that once the particle has chosen the shortcut but sees that the particle in site has priority to move into site , it can change its mind and return to site , providing site is empty . this model is closer to reality because bus drivers can actually change their minds in time when they are told that the road ahead is to some extent blocked . it is not reasonable for drivers to take a shortcut which is actually difficult to go through . now we divide the lattices into five segments , as shown in figure 2(b ) . segments 1 , 2 , 3 , 4 and 5 start at site and end at site , respectively . in this model , all the particles traveling in the lattices are identical to each other . at time , we may suppose that the lattices are empty . as time goes on , a particle will enter site 1 . this particle will pass continuously through the whole filament . given site , in an infinitesimal time interval , if , a particle is inserted with probability , provided the site is empty ; if and there is a particle in it , then the particle in site l is extracted with probability ; and if and site is occupied , the particle in site will move into site with probability , providing site is empty . however , if a particle is in site or or or , we have to clarify the way it moves . we will construct advanced model 1 using a similar method according to the basic model ( situation 1 ) . if site has a particle , then the particle will choose its way as follows : if site and site are both occupied , then the particle in site does not move . if site is occupied but site is empty , then the particle in site will move into site with probability . if site is occupied but site is empty , then : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if both site and site are empty , then the particle in site will move into site with probability , and : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if site has a particle , and site is empty , then the particle will choose its way as follows : if site is empty , then the particle in site will move into site with probability ; if site is occupied , then the particle in site will move into site with probability . if site has a particle , then the particle will choose its way as follows : if site and site are both occupied , then the particle in site does not move . if site is occupied but site is empty , then the particle in site will move into site with probability .
if site is occupied but site is empty , then : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if both site and site are empty , then the particle in site will move into site with probability , and : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if site has a particle , and site is empty , then the particle will choose its way as follows : if site is empty , then the particle in site will move into site with probability ; if site is occupied , then the particle in site will move into site with probability . we can also divide the lattices into four segments , as shown in figure 3(b ) . segments 1 , 2 , 3 and 4 start at site and end at site , respectively . in this model , all the particles traveling in the lattices are identical to each other . at time , we may suppose that the lattices are empty . as time goes on , particles will enter site 1 . these particles will pass continuously through the whole filament . given site , in an infinitesimal time interval , if , a particle is inserted with probability , provided the site is empty ; if and there is a particle in it , then the particle in site l is extracted with probability ; and if and site is occupied , the particle in site will move into site with probability , providing site is empty . however , if a particle is in site or or , we have to clarify the way it moves . we will construct advanced model 2 using a similar method according to the basic model ( situation 1 ) . if site has a particle , then the particle will choose its way as follows : if sites , and are all occupied , then the particle in site does not move . if site is empty but sites and are occupied , the particle in site will move into site with probability . if site is empty but sites and are occupied , then : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if site is empty but sites and are occupied , then : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if site and site are empty but site is occupied , then the particle will move into site with probability , and : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if site and site are empty but site is occupied , then the particle will move into site with probability , and : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if site and site are empty but site is occupied , then : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability ; if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if sites , and are all empty , then the particle in site will move into site with probability , and : if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability .
if site is empty , the particle in site will move into site with probability ; if site is occupied , the particle in site will move into site with probability . if site has a particle , and site is empty , then the particle will choose its way as follows : if site is empty , then the particle in site will move into site with probability ; if site is occupied , then the particle in site will move into site with probability . if site has a particle , and site is empty , then the particle will choose its way as follows : if site is empty , then the particle in site will move into site with probability ; if site is occupied , then the particle in site will move into site with probability . in this section , we will give a theoretical analysis of the phase situation of the three segments of our basic model . we choose situation 1 of our basic model described above as the base of our analysis . according to figure 1(b ) , we are able to express the insertion and extraction rates using the densities of certain sites . these rates are the following :
\[ \begin{array}{l} \alpha_{eff2}=\rho_{k_{1}}[\rho_{k_{2}}+(1-\rho_{k_{2}})(1-q ) ] \\ \beta_{eff2}=(1-\rho_{k_{2}})[(1-\rho_{k_{1}})+q(1-p)\rho_{k_{1}}+(1-q)\rho_{k_{1}} ] \\ \alpha_{eff3}=\rho_{k_{1}}(1-\rho_{k_{2}-1})q+\rho_{k_{2}-1}(1-\rho_{k_{1}})+\rho_{k_{1}}\rho_{k_{2}-1} \\ \beta_{eff3}=\beta \\ j_{sc}=q\rho_{k_{1}}(1-\rho_{k_{2}})[1-\rho_{k_{2}-1}+p\rho_{k_{2}-1} ] \end{array} \]
certainly , the flux should be conserved . we are going to prove that segment 1 and segment 3 should be in the same phase . if segment 1 is in the maximum - current phase , then we have according to ( 3 ) . according to ( 5 ) , we have . so segment 3 is also in the maximum - current phase . secondly , we claim that the following four cases are impossible . to prove that , we first focus on the first two situations . if the phase condition is , due to ( 1 ) , we have : because , we obtain : when , from ( 7 ) we have , which contradicts . thus the phase situations are impossible . if the phase condition is , it is difficult for us to analyse the original model with parameter . so we set , which means that the particle in site has priority to move into site when site is occupied . due to ( 2 ) , we have : from which we can obtain : let , then ( 9 ) turns into : due to ( 1 ) , we have : the equation above can be turned into : from ( 10 ) and ( 12 ) , we reach : if , then does not agree with reality . so could not be 0 . thus . for all , which implies . so segment 2 can not be in the low - density phase . if segment 2 is in the high - density phase , then we have : from ( 34 ) we know that or , which will lead to or , which does not make sense when . so we have excluded . if segment 3 is in the high - density phase , then from ( 35 ) we know that or , which will lead to or , which does not make sense . is also impossible . now we have proved that segment 1 and segment 4 are in the same phase . we now discuss the situation . from the proof above we know that segment 3 can not be in the high - density phase , and neither can segment 2 . thus the phases can only be . while considering , we should first assume that segment 3 is in the low - density phase . from the proof above we know that segment 2 can be neither in the low - nor in the high - density phase .
and if we assume segment 3 to be in the high - density phase , we know that segment 2 can not be in the low - density phase . thus the phases can only be . for this situation , it is rather difficult to analyze the model theoretically , so we also use numerical simulations to obtain the results . in this section , we use numerical methods to simulate the basic model . first we demonstrate the results of situations 1 and 2 . according to the analysis in the last section , we set the insertion and extraction rates to 0.3 and 0.8 for , 0.8 and 0.3 for , and 0.8 and 0.8 for . the results of the simulations are plotted in figure 4 . [ figure 4 : ( a ) ; ( b ) ; ( c ) initial densities of all sites are high ; ( d ) initial densities of all sites are low ; plots omitted ] from figure 4 , we can see that the numerical results are in agreement with the results of the equation analysis . when and , all three segments are in the low - density phase . when and , all three segments are in the high - density phase . when , segment 1 and segment 3 are in the maximum - current phase . from figure 4(a ) , we can find that the density in segment 2 decreases when increases , providing is constant . for the same , when increases , the density in segment 2 decreases . from figure 4(b ) , we see that the density in segment 2 increases when increases , providing is constant . for the same , when increases , the density in segment 2 increases . this is because when is relatively large , more particles are able to jump through the shortcut to reach site , thus leaving the particles in segment 2 in a serious traffic jam . from figures 4(c ) and ( d ) , we see that when segment 1 and segment 3 are both in the maximum - current phase , segment 2 can be either in the high - density phase or the low - density phase , depending on the initial density of each lattice site . if the average initial density of the lattice is high , then segment 2 will be in the high - density phase when the flux becomes stable . if the average initial density of the lattice is low , then segment 2 will be in the low - density phase when the flux becomes stable . now we use numerical methods to simulate situation 2 of the basic model . the insertion and extraction rates are set the same as in the simulation of situation 1 . [ figure 5 : ( a ) ; ( b ) ; ( c ) initial densities of all sites are high ; ( d ) initial densities of all sites are low ; plots omitted ] from figure 5 we are able to see that the results of situation 2 of the basic model seem similar to those of situation 1 .
but after careful inspection , we may find that there are several differences between the results of the two situations , which reflect the distinct characters of these two situations . for further investigation , we use figures which show the differences between the two situations . we calculate the values obtained by subtracting the density of each lattice site in situation 2 from the density of each lattice site in situation 1 . when we do these subtractions , we keep parameters and constant . the results are shown in figure 6 . [ figure 6 : differences between the densities of situations 1 and 2 for fixed insertion rate and extraction rate ; panels ( a ) - ( d ) correspond to different values of the parameters ; plots omitted ] in figures 6(a ) and 6(b ) , we can see that the differences between the densities of the two situations are within , which means that when , there is nearly no difference between the two situations . that is because when , the particle at site ( in the figure ) has priority to get to site ( in the figure ) . so whatever may be , the particle can go smoothly once it chooses the shortcut . there is no difference whether it returns back to segment 2 or not . in figures 6(c ) and ( d ) , when , the differences seem to be obvious . when we contrast these two figures , we are able to find that the extent of the difference is decided by parameter . while changes from 0.1 to 0.9 , the error changes from within to 0.015 . this is simply because a smaller means fewer particles choose the shortcut . but if we compare figure 6(b ) with 6(d ) , it can apparently be found that the error changes greatly when is switched from 1 to 0 , providing is comparatively large . so we can say that it really matters whether a particle can return back or not when many particles choose the shortcut . if the particle can not return , a great traffic jam will happen at that site , reflected in figure 6(d ) as a high - positioned blue star on the top of the figure . now we use numerical methods to simulate advanced model 1 . in order to find how the phases of the five segments change when parameters and change , we set , thus corresponding to our equation analysis in section 3 . we will see whether the numerical results correspond to our analytical results . [ figure 7 : ( a ) ; ( b ) ; ( c ) initial densities of all sites are high ; ( d ) initial densities of all sites are low ; plots omitted ] from figure 7(a ) , we can see that all of the five segments are in the low - density phase . the situation in segment 2 has nothing to do with the situation in segment 4 . we can treat them as a connection of two basic models . the density in segment 2 decreases when increases . the density in segment 4 decreases when increases .
from figure 7(b ) , we can see that all of the five segments are in the high - density phase . the situation in segment 2 has nothing to do with the situation in segment 4 . we can treat them as a connection of two basic models . the density in segment 2 increases when increases . the density in segment 4 increases when increases . from figures 7(c ) and 7(d ) , we can see that the situations in segments 2 and 4 depend on the initial density of each site . if the average initial density of all sites is high , then segments 2 and 4 will both be in the high - density phase . if the average initial density of all sites is low , then segments 2 and 4 will both be in the low - density phase . now we use numerical methods to simulate advanced model 2 . in order to find how the phases of the four segments change when parameters and change , we set , thus corresponding to our equation analysis in section 3 . we will see whether the numerical results correspond to our analytical results . also , we are interested in whether there are interactions between the two parameters and . [ figures 8 - 11 : in each figure , panel ( a ) shows one parameter changing while the other remains constant , and panel ( b ) the reverse ; figures 10 and 11 use high and low initial densities for each lattice , respectively ; plots omitted ] first , from figures 8 to 11 , we see that the numerical simulation results correspond to our analytical results . the four segments should all be in the low - density phase or all in the high - density phase , and when segments 1 and 4 are in the maximum - current phase , the phases of segments 2 and 3 will be based upon the initial density of each lattice site . if we focus on figure 8 , we will find some interesting things by comparing figure 8(a ) and figure 8(b ) . in figure 8(a ) , we let change but remain constant . we see that the densities in segment 2 and segment 3 both decrease at the same pace when increases . in figure 8(b ) , we let change while remains constant . the result is that although the density in segment 2 decreases with increasing , the density in segment 3 does not change appreciably . this phenomenon means that the change of can not influence the density in segment 3 greatly , but the change of can really influence the density in segment 2 greatly . from this we know that the parameter has priority in controlling the densities of segments 2 and 3 . from figures 9 to 11 , if we compare ( a ) and ( b ) of each figure carefully , it is easy for us to find that each figure exhibits the same phenomenon as figure 8 .
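the simulations reported in this and the preceding subsections can be reproduced with a few lines of code . the following python sketch simulates the basic model with a single zero - length shortcut under a deliberately simplified rule : a particle at the shortcut entrance attempts the shortcut with probability q and simply waits if the shortcut exit is occupied . it ignores the finer collision rules that distinguish situations 1 and 2 , and the site labels k1 and k2 and all numerical values are illustrative assumptions only .

import random

def tasep_shortcut(L=500, k1=200, k2=300, alpha=0.3, beta=0.8, q=0.5,
                   sweeps=20000, seed=1):
    """simplified basic model: one zero-length shortcut from site k1 to site k2."""
    rng = random.Random(seed)
    tau = [0] * L
    seg2_sum, samples = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L + 1):
            i = rng.randrange(L + 1)
            if i == 0:                                  # injection
                if tau[0] == 0 and rng.random() < alpha:
                    tau[0] = 1
            elif i == L:                                # extraction
                if tau[L - 1] == 1 and rng.random() < beta:
                    tau[L - 1] = 0
            else:
                j = i - 1                               # the particle (if any) at site j moves
                if tau[j] == 1:
                    if j == k1 and rng.random() < q:    # attempt the shortcut k1 -> k2
                        if tau[k2] == 0:
                            tau[k1], tau[k2] = 0, 1
                    elif tau[j + 1] == 0:               # ordinary hop j -> j+1
                        tau[j], tau[j + 1] = 0, 1
        if sweep > sweeps // 2:                         # time-average segment 2 after burn-in
            seg2_sum += sum(tau[k1 + 1:k2]) / (k2 - k1 - 1)
            samples += 1
    return seg2_sum / samples

# in the low-density regime, a larger q should reduce the segment-2 density,
# qualitatively in line with the behaviour described for figure 4(a)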
in conclusion : in advanced model 2 , the densities of segments 2 and 3 depend greatly on parameter . the change of will bring about a change of density in both of the segments . once is set constant , parameter will also have an effect on the density in segment 2 , but the density in segment 3 will remain in a relatively stable condition . in this paper , tasep with one or two shortcuts has been analyzed in detail . we have found that when a particle arrives at the beginning of the shortcut , it faces the choice of whether to jump through the shortcut or not . if it chooses the shortcut , then a problem arises as to whether it can return to the ordinary road when it finds the road ahead blocked , according to which we have two different situations . after the study we know that if the particle can not return , the beginning of the shortcut will most probably be filled with particles , producing a heavy traffic jam . this research offers us an idea that when we construct a road with a shortcut , the entrance of the shortcut , that is to say the place where the main road bifurcates , ought to be built wide . once a driver chooses to take the shortcut but finds the signal showing `` blocked ahead '' , he can immediately turn back to the ordinary road to avoid wasting time . from the simulation of the basic model , we find that no matter which situation applies , the three segments should be in the following four phases : whether they are in or is based on the initial density of the lattices . for advanced model 1 we have found that the five segments of the model should be in the following four phases : for advanced model 2 we have found that the four segments of the model should be in the following four phases : from the simulation of advanced model 1 , we have found that this model can also be regarded as two basic models , the segment 3 of the first being the segment 1 of the second . the probabilities of the choices of a particle facing the two shortcuts do not interfere with each other , which means that the probability of the choice of the particle facing one of the shortcuts does not influence the density in the other segment . this phenomenon leads us to suppose that if there are many shortcuts at different places of the road , we can view them as a connection of basic models . from the simulation of advanced model 2 , we have found that the second shortcut can decide the densities of both of the two segments in the middle , while the first shortcut can only decide the density in segment 2 . it offers us an idea that if we induce more drivers to go directly through the second shortcut , the density of the ordinary road will essentially decrease , providing the whole road is not so crowded . however , in this paper all of the proofs are under the condition . future work can be focused on how to prove the phase situations for all . also , two shortcuts overlapping each other will be studied in the future .
g. m. schütz , in _ phase transitions and critical phenomena _ , vol . 19 , eds . c. domb and j. lebowitz ( academic , london , 2001 )
y. zhang , _ china journal of physics _ ( to appear , 2009 )
j. t. macdonald , j. h. gibbs and a. c. pipkin , _ biopolymers _ 6 , 1 ( 1968 )
g. ódor , b. liedke and k .- h . , _ physical review e _ 79 , 021125 ( 2009 )
s. cheybani , j. kertész and m. schreckenberg , _ physical review e _ 63 , 016108 ( 2000 )
y. zhang , _ journal of statistical physics _ 134 , 669 - 679 ( 2009 )
y. zhang , _ biophysical chemistry _ 136 , 19 - 22 ( 2008 )
b. derrida , m. r. evans , v. hakim and v. pasquier , _ j. phys . a : math . gen . _ 26 , 1493 ( 1993 )
g. m. schütz and e. domany , _ j. stat . phys . _ 72 , 277 ( 1993 )
j. otwinowski and s. boettcher , _ tasep with hierarchical long - range connections _ , arxiv:0902.2262 ( 2009 )
t. mitsudo and h. hayakawa , _ journal of physics a : mathematical and general _ 38 , 3087 ( 2005 )
a. borodin , p. l. ferrari and t. sasamoto , _ two speed tasep _ , arxiv:0904.4655 ( 2009 )
j. j. dong , b. schmittmann and r. k. p. zia , _ physical review e _ 76 , 051113 ( 2007 )
a. rákos and g. m. schütz , cond - mat/0506525 ( 2005 )
j. brankov , n. pesheva and n. bunzarova , _ phys . rev . e _ 69 , 066128 ( 2004 )
e. pronina and a. b. kolomeisky , _ j. stat . mech . _ p07010 ( 2005 )
y. m. yuan , r. jiang , r. wang , m. b. hu and q. s. wu , _ journal of physics a : mathematical and theoretical _ 40 , 12351 ( 2007 )
| in this paper , the operation of the totally asymmetric simple exclusion process with one or two shortcuts under open boundary conditions is discussed . using both mathematical analysis and numerical simulations , we have found that , according to the method chosen by the particle at the bifurcation , the model can be separated into two different situations which lead to different results . the results obtained in this paper would be very useful in road building , especially at the bifurcation of the road .
bootstrap resampling , or sampling with replacement from the given data , is used to mimic the sampling process that produced the original sample in the first place . by randomly drawing items from the original dataset with equal probability , one is effectively drawing a fresh sample from the `` empirical distribution '' , a multinomial with equal probability assigned to each item in the original sample . this can be seen as a best guess at the true population distribution . it has been over 30 years since the bootstrap was introduced to statistics , and now many different applications and related resampling techniques are found in statistics and computer science . of particular interest to researchers in pattern recognition and machine learning are the bagging ensemble method of and the .632 + validation scheme of for supervised algorithms . in these settings , and others involving the construction of prediction rules on bootstrap samples , an important quantity of interest is the number of unique items from the original sample . in many cases , this can be seen as a limit on the amount of information carried down from the original sample . for learning algorithms trained on bootstrap samples , the expected reduction in the number of unique items can be viewed as an effective reduction in training sample size , particularly in high dimensional problems where the prediction model is unstable with respect to changes in the training data . while generally it is a desirable property of learning algorithms that they are flexible and responsive to new information , there also exist specific cases where the unique items in a training sample are the sole determinant of the prediction rule . in ( hard - margin ) support vector machine classification where the number of dimensions exceeds the number of items observed , it is always possible to divide the two classes in the training data . the prediction rule is determined only by the points that lie on the maximum margin separating hyperplane , whose duplication has no effect on its location . another simple example is found in nearest - neighbour classification and regression in continuous feature spaces , where the probability of obtaining any truly identical training items is zero . without competition between multiple prediction values due to items at exactly the same location , the resulting prediction rule is unaffected by duplications . as the number of unique items in a bootstrap sample is an important determinant of the behaviour of prediction rules learned on it , the distribution of this quantity should be of interest to researchers working on their development and validation . while related distributions have long been studied in a purer mathematical context , and this distribution has been identified before in this setting , nowhere were we able to find a concise and accessible summary of the relevant information for the benefit of researchers in machine learning . our aim here is to fill this gap by presenting this distribution along with its key properties , and to make it easier for others who wish to understand or modify resampling techniques in a machine learning context . in section [ background ] , we discuss the relevance of the number of unique items to the bagging ensemble method and the .632 + validation scheme .
in section [ present_dist ]we give a closed form of this distribution together with its notable properties .we give an empirically derived rule to decide whether normal approximation is permissible , and describe how we produced it . in section [ multi ] , we consider the case where the items in the original sample belong to one of several categories , as is the case in classification problems . here , where the outcome is the vector of the number of unique original items from each category , we show that the limit distribution is multivariate normal , and consider limits to produce a heuristic for the normal approximation of the number of unique items from a single category .bootstrap aggregation , or bagging , uses the perturbed samples generated by sampling with replacement to produce a diverse set of models which can then be combined to promote stability . though a given model may be over - fit to its sample , the combination should less reflect those unstable aspects particular to an individual sample .perhaps its most important use in pattern recognition has been in random forest classification and regression , where it is key to allowing the use of flexible decision trees without over - fitting .while bagging is still most commonly performed by drawing items with replacement , the way in which the samples are drawn is not necessarily fixed .the number of unique items present in a bootstrap sample has been identified as an important predictor of algorithm performance , and the sampling method can be purposefully modified to control its range and variability .an added bonus of bagging is the possibility of producing a performance estimate for the ensemble without doing additional cross validation .predictions are made for items in the training sample using only those models which did not use them in training . in this way , it is possible to get a performance estimate for an ensemble without the bias of over - fitting . a pessimistic bias has been observed in high dimensional classification problems that was ameliorated using different sampling techniques , suggesting the importance of unique items in this phenomenon .there is no one dominant scheme for the statistical comparison of learning algorithms on small to moderately sized datasets , and researchers must choose between different forms of hold out tests and cross validation .one option available to them is the .632 + bootstrap of .this , and the earlier .632 scheme , extend the bootstrap to the validation of learned prediction rules . both of these are functions of a quantity called the leave - one - out bootstrap error , which is the average error of the prediction models constructed on many bootstrap samples . to avoid the optimistic bias of testing on the training set ,the models are tested only on those points not in their respective bootstrap samples . as in bagging, the prediction rules created can have an effectively reduced training sample size in line with the lower number of unique original items in their samples .the expectation of the leave - one - out error then becomes a weighted average of the expected algorithm performance at the different sizes given by the distribution of that quantity . unlike conventional cross validation schemes where the reduced sample size is obvious , users of this techniquemay not have the effective reduction in mind . 
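as a concrete illustration of the leave - one - out bootstrap error just described , the following python sketch estimates it with a simple 1 - nearest - neighbour rule standing in for an arbitrary learning algorithm ; the choice of predictor and all parameter values are illustrative assumptions , not part of the original schemes .

import numpy as np

def loo_bootstrap_error(X, y, B=200, seed=0):
    """leave-one-out bootstrap error, with a 1-nearest-neighbour classifier
    standing in for an arbitrary learning algorithm (illustrative choice only)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errors, count = 0.0, 0
    for _ in range(B):
        idx = rng.integers(0, n, size=n)            # one bootstrap sample, drawn with replacement
        oob = np.setdiff1d(np.arange(n), idx)       # items not drawn into this sample
        if oob.size == 0:
            continue
        Xb, yb = X[idx], y[idx]
        for i in oob:                               # test only on the out-of-sample items
            j = int(np.argmin(((Xb - X[i]) ** 2).sum(axis=1)))
            errors += float(yb[j] != y[i])
            count += 1
    return errors / count

# each bootstrap training set contains roughly (1 - 1/e) * n, about 0.632 n, unique items
# on average, which is the effective sample-size reduction discussed in the text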
while both the .632 and .632 + methods attempt to correct for the pessimistic bias that may result , the true corrective mapping of this value will be problem specific and can not be known ahead of time .this is almost certainly the cause of the bias observed by .knowledge of the distribution of the effective sample sizes in these schemes should be useful to those interpreting the results of these schemes , particularly when there is concern about bias .in this section , we first provide a closed form of the distribution of the number of unique items and describe its connection to the family of occupancy distributions . with this information ,we are then able to derive the integer moments and pormal limit .finally , we describe the construction of a heuristic to decide when normal approximation is appropriate .some illustrations of the distribution are given in fig .[ onedim ] . where items are taken with replacement from a sample of size , the probability of obtaining unique original items is where , is the falling factorial , and is a stirling number of the second kind .equivalently , this is the probability of obtaining unique outcomes after categorical trials in a multinomial problem with equally likely outcomes . because each of the items is equally likely to be drawn at each of the sampling events , there are equally likely bootstrap samples that may be drawn if order is accounted for .the number of ways to make an ordered selection of unique items from the original to be present in the bootstrap sample is , the falling factorial .this is defined as follows : having already considered the ordering of the unique items to be included means that we may consider them unlabelled in the final step ; here , we consider the number of ways to take distinguishable draws from unlabelled unique items such that at least one draw is taken from each . in more generic terms, this is the number of ways to place labelled items into subsets such that none of them are empty .this quantity is written as , and is called a stirling number of the second kind ( * ? ? ?* chapter 6 ) . combining these three resultsgives us eq . .the number of unique items present may be viewed as the sum of the set of indicator functions .the indicator corresponds to original item , taking the value of one if this is present and zero if not .a simple consideration of the sampling process allows one to find the mean and covariance of the indicator functions .as all points are equally likely to be selected , the chance of an item not being selected at a given sampling trial is . in order for to be zero ,its corresponding point must not be selected at all events .this gives us .as the indicators are binary variables , this is sufficient to determine their mean and variance : & = { \text{p}}(d_i = 1 ) = 1 - { \text{p}}(d_i = 0 ) \\ & = 1 - \big ( 1 - \frac{1}{n } \big)^a , \\ \text{var}[d_i ] & = { \text{p}}(d_i = 0 ) \cdot { \text{p}}(d_i = 1 ) \\ & = \big ( 1 - \frac{1}{n } \big)^a - \big ( 1 - \frac{1}{n } \big)^{2a}. \end{split}\ ] ] a similar consideration gives us the probability of two items , and ( where ) both being excluded from a bootstrap sample .this is given , and is enough to give us the covariance of two indicator functions , & = { \text{p } } ( d_i = 1 , d_j = 1 ) - \big({\text{p}}(d_i = 1)\big)^2 \\ & = \big(1-\frac{2}{n}\big)^a - \big(1-\frac{1}{n}\big)^{2a}. 
\end{split}\ ] ] now that we know the mean , variance and covariance of the indicators , we have enough information to determine the same quantities for any linear combination of them , including their sum , .of particular interest to us is the case where the number of samples drawn is proportional to the number of original items .for this reason , along with the general formulae , we also provide limits for the mean and variance as and jointly approach infinity : & = n(1 - \big(1 - \frac{1}{n}\big)^a ) = \frac { n^a - ( n-1)^a } { n^{a-1 } } \\ & \to n(1 - e^{-\alpha } ) , \\ \text{var}[k ] & = n(n-1)\big(1-\frac{2}{n}\big)^a + n\big(1 - \frac{1}{n}\big)^a - n^2\big(1-\frac{1}{n}\big)^{2a } \\ & \to n(e^{-\alpha } - ( 1+\alpha)e^{-2\alpha } ) , \end{split}\ ] ] where the limits refer to the case and ., scaledwidth=55.0% ] while we came to the problem of item inclusion through machine learning , the study of this problem far predates our field of research .if , instead of the number of items included in the bootstrap sample , we were to consider the number of items _ excluded _ , we would be studying one of the family of occupancy distributions .these arise in the study of urn problems when balls are independently and randomly placed into urns in such a way that each ball is equally likely to be placed into any one of the urns .the occupancy distribution then details the probabilities of obtaining different values of , the number of urns containing exactly balls . in our problem ,the drawing events are the balls of the urn model , and the original items that may be selected are the urns with which the balls / draws can be associated .the number of empty urns is , and it has the same distribution as the number of excluded items in our sampling problem . as is just , it is easy to drawn upon useful results . while there exist a great many limit theorems for occupancy distributions ( ** chapter 6 ) , in our case we need only the earliest . proved that the limit distribution of was normal in the case that , and .as is distributed as , it too must be normally distributed .the raw moment of the distribution of is a proof for this is provided in appendix [ rawproof ] . provides a formula for the central moments of the occupancy distribution , which for us corresponds to the number of excluded items . by simplifying this and adaptingthe sign to the distribution of the number of items _ included _ , we can give the central moment as in both of these formulae the sum over and comprises unique locations .reassuringly , they provide the identity , as well as results for the mean and variance that match up to those derived by the considerations of [ indicators ] .the stirling numbers quickly overflow conventional precision , making the form of the distribution in eq . hard to work with at reasonable values of and .while it is known that the distribution does converge to normal , this is not practically useful unless one knows when a normal approximation is appropriate . to allow others to use a normal approximation with confidence, we decided to build a heuristic based on existing rules of thumb for normal approximation of the binomial distribution . where the number of bernoulli trials is and the probability of success is at any one , a normal approximation works best when the is high and is not too close to zero or one . 
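as noted above, the stirling numbers quickly overflow conventional floating-point precision, but the probability mass function and the moments can still be evaluated exactly with arbitrary-precision integer arithmetic. the sketch below is ours (function names and sanity checks are illustrative): it implements the closed form via the standard recurrence for stirling numbers of the second kind, together with the exact mean and variance from the indicator calculation.

```python
from fractions import Fraction
from math import factorial

def stirling2(a, k):
    """Stirling number of the second kind via the usual recurrence,
    kept as an exact Python integer so nothing overflows."""
    s = [[0] * (k + 1) for _ in range(a + 1)]
    s[0][0] = 1
    for i in range(1, a + 1):
        for j in range(1, min(i, k) + 1):
            s[i][j] = j * s[i - 1][j] + s[i - 1][j - 1]
    return s[a][k]

def p_unique(n, a, k):
    """P(K = k): exactly k distinct original items in a bootstrap sample of
    size a drawn from n items (falling factorial times Stirling number,
    divided by the n**a equally likely ordered samples)."""
    falling = factorial(n) // factorial(n - k)
    return Fraction(falling * stirling2(a, k), n ** a)

def mean_var(n, a):
    """Exact mean and variance of K from the indicator calculation."""
    p1, p2 = (1 - 1 / n) ** a, (1 - 2 / n) ** a
    return n * (1 - p1), n * (n - 1) * p2 + n * p1 - n ** 2 * p1 ** 2

n = a = 50
pmf = [p_unique(n, a, k) for k in range(1, n + 1)]
assert sum(pmf) == 1                               # the distribution is proper
mean_from_pmf = float(sum(k * q for k, q in zip(range(1, n + 1), pmf)))
print(mean_from_pmf, mean_var(n, a)[0])            # the two means agree
```

for a = n the mean is close to 0.632 n, and both moments approach the limits quoted above as n grows.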
while there is a degree of arbitrariness in deciding when two distributions are `` close enough '' , there have long existed conventions for doing so .two of the most common such rules from applied statistics are in order to build our own rule for approximation of , we decided first to measure the convergence of the true distribution and its normal approximation to measure the quality of the approximation of the binomial case under the rules of .we could then experiment with building rules to permit approximation of the distribution of such that convergence was always better than the worst case seen for the binomial approximation . to measure the quality of approximation , we chose a metric closely linked to the concept of convergence in distribution : the maximum absolute difference in cumulative distribution ( madcd ) between the true distribution and its continuity - corrected normal approximation on the set of all possible outcome values .the continuity correction was performed using a shift of .computation of the exact value of the true distribution was performed using the symbolic mathematics toolkit of the matlab software package . to find the worst permissible binomial approximation , we measured the madcd between the binomial distribution and its continuity - corrected normal approximation across the grid of parameters specified by and .we combined the two rules of eq . to produce a third , stricter rule , and found a maximum madcd of 0.0205 within the resulting acceptance region .as this value occurred far from the boundary of the allowed set , we think it unlikely that expanding the grid would produce higher values .we then mapped the madcd for and its continuity - corrected normal approximation across the grid of parameters and , so that we could begin constructing a rule .the results can be seen in fig .[ madcd_grid ] . after inspecting this map ,we decided to construct rules for the minimum permissible at a given ( and vice versa ) , of the form .while there are areas near and where the madcd value is low , this did not represent convergence to normal , rather , most of the probability mass has collapsed to a single value of .for this reason , we chose to focus on the central valley of good approximation near the axis . 
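for completeness, the madcd measure described above takes only a few lines; the snippet below applies it to the binomial benchmark with scipy, since that is the case against which the acceptance threshold of 0.0205 was calibrated. the grid of parameters shown is illustrative and not the one used in the study.

```python
import numpy as np
from scipy.stats import binom, norm

def madcd_binomial(m, p):
    """Maximum absolute difference between the binomial CDF and its
    continuity-corrected normal approximation, over all possible outcomes."""
    x = np.arange(m + 1)
    exact = binom.cdf(x, m, p)
    approx = norm.cdf(x + 0.5, loc=m * p, scale=np.sqrt(m * p * (1 - p)))
    return float(np.max(np.abs(exact - approx)))

# illustrative grid, not the one mapped in the paper
for m in (10, 30, 100, 300):
    for p in (0.1, 0.3, 0.5):
        print(m, p, round(madcd_binomial(m, p), 4))
```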
, scaledwidth=55.0% ] we were mainly concerned with producing a reliable rule that would not fail at parameter setting outside the mapped grid ; while we experimented with different fitting methods , we chose to select the rule parameters by hand so as to have greater control over features likely to make it more conservative .this also made it easier to restrict the rule parameters to those fully specified by two digits of decimal precision .we produced a rule such that * all madcd values in the acceptance region were below the specified level of 0.0205 , * the boundaries of the acceptance region did not cross the ridges of the distribution as and increase , unless it was to move towards the central valley , and * the values on the boundary had to appear to be decreasing with increasing and .we excluded the smallest values of and in order to better fit of the general pattern .the resulting rule is that a normal approximation is permissible when the acceptance region specified and the values of madcd at its boundaries can be seen in fig .[ madcd_grid ] and [ madcd_grid_bound ] respectively .in addition to the madcd , we also computed the jensen - shannon divergence between the true probability distribution and the normal approximation .as this is function of a probability distribution itself , rather than a cumulative distribution , it was necessary to discretise the normal approximation .this was done by defining the probability mass at a location as the difference in the continuity - corrected cumulative distribution between that location and the next .we found a maximum value of 0.0444 within the allowed region for the binomial distribution , and 0.0631 for the allowed region for the distribution of .all the highest divergence values for and its approximation occurred on the boundaries near the very lowest allowed values of and ., scaledwidth=55.0% ]in some machine learning tasks , notably in supervised classification , items in the sample belong to one of several categories .it is then of interest to those studying bootstrap techniques to know the distribution of the balance of the unique items drawn from each category .we consider the case where there are categories present , the of which contains items in the original sample of . when samples are drawn without replacement , the distribution can be found by marginalising over the number of draws taken from each category , the various .the probability of obtaining the vector detailing the number of items from each of category is an example of this distribution in the case of two categories is illustrated in fig .[ twodim ] . ,scaledwidth=55.0% ] from consideration of the mean and covariance of the individual items participations as in section [ indicators ] , it is straightforward to derive the mean , variance and covariance ; these are the following : & = n_i ( 1 - \big(1 - \frac{1}{n}\big)^a ) , \\ \text{var}[k_i ] & = n_i(n_i-1)\big(1-\frac{2}{n}\big)^a + n_i\big(1 - \frac{1}{n}\big)^a - n_i^2\big(1-\frac{1}{n}\big)^{2a } , \\ \text{cov}[k_i , k_j ] & = n_in_j\big[\big(1-\frac{2}{n}\big)^a - \big(1-\frac{1}{n}\big)^{2a } \big ] , \text { for . } \end{split}\ ] ] not only is the total number of unique items from the original sample asymptotically normally distributed under appropriate conditions , but so is the number from the subset belonging to each category . 
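the per-category mean, variance and covariance just quoted are easy to check against simulation; the following sketch (our own function names, a two-category example) computes both so they can be compared entry by entry.

```python
import numpy as np

rng = np.random.default_rng(2)

def category_moments(n_per_cat, a):
    """Exact mean vector and covariance matrix of the per-category numbers of
    unique items, using the indicator-based formulas quoted in the text."""
    n = sum(n_per_cat)
    p1, p2 = (1 - 1 / n) ** a, (1 - 2 / n) ** a
    ni = np.asarray(n_per_cat, dtype=float)
    mean = ni * (1 - p1)
    cov = np.outer(ni, ni) * (p2 - p1 ** 2)
    cov[np.diag_indices_from(cov)] = ni * (ni - 1) * p2 + ni * p1 - ni ** 2 * p1 ** 2
    return mean, cov

def category_simulation(n_per_cat, a, trials=20_000):
    """Monte-Carlo estimate of the same quantities."""
    n, m = sum(n_per_cat), len(n_per_cat)
    labels = np.repeat(np.arange(m), n_per_cat)   # category of each original item
    ks = np.empty((trials, m))
    for t in range(trials):
        drawn = np.unique(rng.integers(0, n, size=a))
        ks[t] = np.bincount(labels[drawn], minlength=m)
    return ks.mean(axis=0), np.cov(ks.T)

print(category_moments([80, 20], 100))
print(category_simulation([80, 20], 100))
```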
to show this asymptotic normality , we must turn to limit theorems for dependent variables rather than the study of urn problems . the covariance between the presence indicators of section [ indicators ] is always negative ( see eq . ) . as these are binary variables , this means that the joint distribution of any two indicators and ( ) must meet the condition for any values and . any collection of the indicators can then be termed a pairwise negative quadrant dependent ( pnqd ) sequence . weighted sums of pnqd sequences are asymptotically normally distributed as the number of items in the sequence goes to infinity . the number of unique items present from the subset of a category is a weighted sum of the indicators , and so its asymptotic distribution is normal as the number of items present and items drawn jointly approach infinity . the number of items in a category must approach infinity too , as the number of possible values that the sum can take must also go to infinity for its discrete distribution to converge to a continuous one . this is guaranteed in the case where each category contains a fixed fraction of the total number of items . to show that the limit of the joint distribution of the numbers of unique items from the categories is multivariate normal , we can consider the set of weighted sums that give equal weight to all items within a category . if the weight given to items in category is and we add the constraint , then the distributions of this set still include all possible projections of the multivariate distribution of . as all projections of this distribution are normal , it must itself be multivariate normal . we now consider when it is appropriate to approximate the distribution of the number of unique items from a subset of the data with a normal distribution . we consider the case where the original sample has items , of which belong to a particular subset of fixed size , and where the number of items drawn is .
in the case where , we have the heuristic developed in [ heuristic ] .as while remains fixed , the correlation between the indicator functions of the items of category goes to zero and the distribution of approaches a binomial distribution with trials and a probability of success .we argue that if both the distribution with and that with are well approximated by the binomial , it is highly likely that the intermediate stages are too .this suggests the following rule for normal approximation of : if the number of samples drawn is proportional to the number of items ( ) and the expected number drawn from the category is , then if both the rules for approximation of the binomial limit ( with and ) are met , and the rule for the single category case with and substituted for and respectively , then a normal approximation should be appropriate for for any value of .the resulting rule requires four inequalities to be met , which makes it a little unintuitive and tedious to apply .as the inequalities due to the consideration of the binomial limit only have a small effect on the acceptance region near the limit of lowest / highest , and eventually become redundant as and increase , it is possible to account for them with a simple offset of the rule for the single category case .a small strip of parameter space is then sacrificed for the sake of simplicity .we find a normal approximation to be permissible when this has no solutions with or .we have aimed to make a clear and concise answer to the titular question readily available to machine learning researchers .we have summarised the key properties of this distribution , and provided practical information about when a normal approximation is appropriate in the form of a heuristic , allowing others to justify its use . with particular consideration to classification problems, we have considered the generalisation of this distribution to the scenario where items come from multiple categories . in this case , we have provided a theoretical guarantee of asymptotic normality and a considered heuristic rule for approximation based its limit distributions .we hope our results rovide a useful resource to researchers interested in understanding or modifying the number of unique items to appear under random sampling with that replacement .this work is funded by ucl ( code elcx ) , a case studentship with the epsrc and ge healthcare , epsrc grants ( ep / h046410/1 , ep/ h046410/1 , ep / j020990/1 , ep / k005278 ) , the mrc ( mr / j01107x/1 ) , the eu - fp7 project vph - dare ( fp7-ict-2011 - 9 - 601055 ) , the nihr biomedical research unit ( dementia ) at ucl and the national institute for health research university college london hospitals biomedical research centre ( nihr brc uclh / ucl high impact initiative ) . 10 a. abadie and g. w. imbens . on the failure of the bootstrap for matching estimators . , 76(6):15371557 , 2008 .g. box , w. g. hunter , j. s. hunter , et al . .john wiley and sons new york , 1978 .l. breiman .out - of - bag estimation . technical report , citeseer . l. breiman . bagging predictors . , 24(2):123140 , 1996 . l. breiman . random forests ., 45(1):532 , 2001 .p. bchlmann and b. yu . analyzing bagging . ,pages 927961 , 2002 .b. efron .bootstrap methods : another look at the jackknife . , pages 126 , 1979 .b. efron .estimating the error rate of a prediction rule : improvement on cross - validation ., 78(382):316331 , 1983 .b. efron and r. 
tibshirani .improvements on cross - validation : the 632 + bootstrap method ., 92(438):548560 , 1997 . a. gut .springer , 2009 .n. l. johnson and s. kotz . .wiley , 1977 .estimating classification error rate : repeated cross - validation , repeated hold - out and bootstrap ., 53(11):3735 3745 , 2009 .d. e. knuth , r. graham , and o. patashnik . .addison , 1989 .li and j .- f .an application of stein s method to limit theorems for pairwise negative quadrant dependent random variables ., 67(1):110 , 2008 .m. w. mitchell .bias of the random forest out - of - bag ( oob ) error for certain input parameters ., 1:205 , 2011 .j. muoz - garca , r. pino - mejias , j. munoz - pichardo , and m. cubiles - de - la - vega .identification of outlier bootstrap samples ., 24(3):333342 , 1997 .r. pino - mejas , m .- d .jimnez - gamero , m .- d .cubiles - de - la vega , and a. pascual - acosta . reduced bootstrap aggregating of learning algorithms ., 29(3):265271 , 2008 . c. r. rao , p. pathak , and v. koltchinskii. bootstrap by sequential resampling ., 64(2):257 281 , 1997 .m. schader and f. schmid .two rules of thumb for the approximation of the binomial distribution by the normal distribution . , 43(1):2324 , 1989 .i. weiss . limiting distributions in some occupancy problems ., pages 878884 , 1958 .before proceeding , we shall need two intermediate results .* add one to the upper argument .* subtract one from the lower argument and multiply by minus one .* multiplication by offset between and the second argument of the stirling number ( i.e. , the initial plus the number of operations that have occurred to the number thus far ) . by multiplying by , we sequentially apply all these operations times . in a way similar to pascal s triangle, we can see this triplication as carrying a `` charge '' of stirling numbers down to the next level of a pyramid ( see _ a ) _ in fig .[ pyramid ] .locations in the pyramid can be described by the number of times each operator has been applied to all charge that reached them .we shall call these coordinates , and .any given selection of populates the pyramid down to the level described by . at any point in the pyramid ,the arguments of the stirling numbers are determined only by the coordinates .if these operators commuted , we could consider a flow of charge exactly the same as in pascal s triangle , and apply the effects of the operators on the coefficients of the stirling numbers separately at the end .as and do not commute with one another , we must consider their interaction and how they change the charge passing through them . commutes with the other operators and does not change the coefficient of the charge , so we can `` factorise '' the pyramid .that is , we need only consider the number of ways to select non- operations , and then to sum over all possible orders of choosing and operations of and respectively .the amount of charge at a location in the pyramid will be , where is the coefficient from the charge triangle of the operators and alone . to determine what this is, we use the fact that the in eq .is initially zero . see _b _ in fig . [ pyramid ] for illustration .this gives us the initial conditions and for .together with the recurrence , these are enough to uniquely specify .this then gives us eq . .we begin with the standard relation from eq . , it is clear that for values of higher than .consistent with their recurrence relation and combinatorial meaning , the stirling numbers can be defined as zero for positive values of outside . 
asone of these two factors is zero whenever exceeds either or , we can truncate the sum in [ standardsum ] to the smaller of those without effect .we can therefore write if we combine this with the knowledge that we can then produce eq . . if we take the definition of the moment as in the first line of eq ., apply eq ., reverse the order of summation , and then apply eq ., we are then able to derive the final form for the raw integer moments seen in the second line ( of eq . [ moments ] ) . | sampling with replacement occurs in many settings in machine learning , notably in the bagging ensemble technique and the .632 + validation scheme . the number of unique original items in a bootstrap sample can have an important role in the behaviour of prediction models learned on it . indeed , there are uncontrived examples where duplicate items have no effect . the purpose of this report is to present the distribution of the number of unique original items in a bootstrap sample clearly and concisely , with a view to enabling other machine learning researchers to understand and control this quantity in existing and future resampling techniques . we describe the key characteristics of this distribution along with the generalisation for the case where items come from distinct categories , as in classification . in both cases we discuss the normal limit , and conduct an empirical investigation to derive a heuristic for when a normal approximation is permissible . * keywords : * bootstrap , resampling , sampling with replacement , bagging |
one of the striking features of ordinary quantum mechanics is its linear structure . given two possible states of a system there is a notion of the sum of these two states .no such notion exists in classical mechanics .the phase space of classical systems does not have the structure of a vector space .in general we can not add the states of a classical system . since quantum mechanicsis the more fundamental theory the question arises of how this non - linear property of classical physics arises from the linear theory .this problem is usually called the measurement problem since it is in the process of a measurement on the system that the problem requires a solution . the solution to the problemis usually referred to as an interpretation . in this articlewe argue that no additional interpretation of ordinary quantum mechanics is required if only one carefully defines what is meant by a classical state .the relation of classicality to quantum mechanics is often blurred by the fact that we have constructed the quantum theory from the classical theory by a process of quantization .this means that the states of the classical system are also possible states of the quantum system .we regard this fact as one of the major reason for why the measurement problem exists .if the classical states are known from the outset the question arises of how the quantum system assumes one of these states . since the quantum system might be in a superposition of these classical states the measurement problem can not be escaped . in our presentationwe thus want to be more careful .we present quantum mechanics and its relation to classical mechanics in three steps . in the first step ( section [ sec : qm ] )we present the basic setup of quantum mechanics . herewe adhere to the usual linear structure .the state space is given by a hilbert space and the evolution is given by a unitary one - parameter group generated by a self - adjoint hamiltonian .we will not introduce any notion of classicality at this point and we do not talk about measurements yet . in the second step ( section [ sec : class ] ) we then introduce the notion of a classical property .the important point here is that it is a quantum system that has this classical property .no notion of classicality outside of quantum mechanics needs to be introduced . in the last step ( section [ sec : trans ] )we then look at the transition from quantum mechanical states to classical states .this is where one usually encounters the measurement problem .we will show how with our definition of classicality the measurement problem does not arise . in the concluding section [ sec : concl ] we comment on what these views of quantum mechanics mean for the search for a quantum theory of gravity .the basic setup we will use is that of standard quantum mechanics .we thus start with a hilbert space .an example of such a hilbert space could be the space of qbits : on this hilbert space the evolution of the system is described by a hamiltonian .it generates time evolution through the unitary this is the basic setup we will use .it is completely linear and deterministic .we will not add a collapse postulate that breaks the linearity .we will also not give a probabilistic interpretation of the states in or postulate the born rule . 
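the displayed formulas in this passage were lost in extraction; for a system of n qubits with hamiltonian h, the standard expressions presumably intended are the following (a hedged reconstruction, not a quotation of the original):

```latex
\mathcal{H} \;=\; \bigotimes_{i=1}^{N} \mathbb{C}^{2},
\qquad
U(t) \;=\; e^{-iHt/\hbar},
\qquad
|\psi(t)\rangle \;=\; U(t)\,|\psi(0)\rangle .
```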
the most important difference to the standard formulation of quantum mechanics is that we do not have an a priori notion of classicality .when setting up a quantum mechanical calculation the hilbert space is often of the form an example is , where represents the space of positions of a particle .we do not want to start with such an identification of what a classical state is .the reason is that such an identification forces the measurement problem on us .if one assumes it from the outset the question for how the classical world arises is fixed : how does the state of the system become given that the initial state of the system was a superposition of states of this form one encounters the measurement problem .the inclusion of a classical world into the basic formulation of the theory is also unsatisfactory in that a less fundamental theory is needed to formulate the more fundamental one .we want to avoid this mix of ontologies and start with a quantum theory that has no notion of classicality built in .given that we have no a priori notion of classicality we now have to define what we mean by classicality .the definition we will use is the following : a system has a _ classical property _ the free energy describing the system is of the form for some constant . if a system has a classical property we can introduce a generalized force in solid state physics this force is referred to as generalized elasticity or rigidity . a large class of examples of systems with classical properties is provided by systems that have undergone symmetry breaking .the order parameter that is non - zero in the symmetry broken phase is the classical property .the easiest example here is spin chain where the classical property is the magnetization .symmetry breaking does not exhaust the class of examples of classical properties .the formation of a surface in the gas to liquid transition is an example that does not fit neatly into the symmetry breaking paradigm . herethe classical property is the surface .the corresponding generalized force is the surface tension .defined this way we see that a classical property is a _ property of a large quantum system_. the presence of the generalized force makes the classical property of the system observation independent .the interaction with a system that has a classical property is effectively described using this classical property .to infer what the magnetization of a spin chain is one might probe it with another spin chain with magnetization . their interaction is best described by a term of the form .this probing does not destroy the classical property .it becomes in this sense an objective , observer and observation independent , property .with this new definition of classicality we can now ask the question what happens when a classical property emerges .the key observation is that this transition from quantum to classical is discontinuous and very sensitive to the environment .we claim that it is the statistic nature of the environment that enters here that is responsible for the probabilistic nature of quantum mechanics . of the systemdoes not depend on . in the transition it acquires a dependence on .this transition is naturally discontinuous and very sensitive to the state of the environment as is the motion of the ball in this classical image .it is here that the probabilistic nature of quantum mechanics emerges . 
]before the system acquires a classical property the free energy describing the system does not depend on .after the system has acquired the classical property it depends on .this change in behavior of the free energy is by necessity discontinuous .it is also very sensitive to the state of the environment .we see that the environment has three roles that are not usually acknowledged . because the system after the transition is in a state of lower entropy and energy the environment functions as a dump for the excess energy and entropy .the environment is further required to bring the system close to a transition point .this might be achieved by adjusting pressure or temperature to appropriate values .finally the environment is responsible for the stochastic character of the transition . to the last pointone might object that the environment should be symmetric and is thus unable to choose a particular classical property .the environment is indeed symmetric but it is only symmetric in an ergodic sense .if we denote by an element of the symmetry group of the environment and by the state of the environment at a particular time then in general we will have the state of the environment is non - symmetric because of the existence of non - symmetric fluctuations .if we define a time averaged state by then we will have indeed for large enough .this is because the non - symmetric fluctuations cancel each other .the presence of these non - symmetric fluctuations together with a transition that is very sensitive to the environment gives rise to the probabilistic appearance of classical properties .one can compare this situation to the situation in usual quantum mechanics . in the copenhagen interpretation of quantum mechanics the environment of the quantum systemis provided by a classical observer .it is the contact with this classical observer that leads to the collapse of the wavefunction and the introduction of probability . in our approachthe environment is a quantum ensemble .probability is introduced by the non - symmetric fluctuations of the environment .the obvious advantage of our approach is that no split between observer and system needs to be postulated .the many body system achieves classicality with a quantum environment .probabilistic behavior in the transition from quantum to classical can thus not be avoided .can one also calculate the corresponding probabilities ? 
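the displayed equations of this ergodic argument were likewise stripped; writing g for an element of the symmetry group of the environment and \rho(t) for the state of the environment at time t, a reconstruction consistent with the surrounding sentences would be (hedged, not the author's original notation):

```latex
g\,\rho(t) \neq \rho(t),
\qquad
\bar{\rho}_{T} \;=\; \frac{1}{T}\int_{0}^{T} \rho(t)\,dt,
\qquad
g\,\bar{\rho}_{T} \approx \bar{\rho}_{T}
\quad\text{for sufficiently large } T .
```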
given the structure of hilbert spaces and inner products that we have assumed from the outset this is indeed possible provided one makes some assumption about the nature of the interaction of the system and the environment . details can be found in . in this article we have argued that the measurement problem in quantum mechanics can be traced to the appearance of a classical world in the initial formulation of the theory . if instead a definition of classicality in quantum mechanical terms is adopted the measurement problem can be resolved in the linear setup . the resolution of the problem does not require the introduction of a split between the observer and the object . probabilities arise because the nature of the transition from quantum to classical amplifies the fluctuations in the environment . probabilities thus have the same origin in quantum mechanics as elsewhere in physics : the lack of knowledge . the view we are proposing here relies on reasoning common in statistical physics . the world is as it is because this is the most likely way for it to be . the situation in quantum mechanics can be compared to the situation in statistical mechanics at the beginning of the last century . there existed a conceptual tension between the second law of thermodynamics , which stated that the entropy of a system never decreases , and poincaré recurrence , which implied that a system always returns to its initial conditions . no function could be both monotonically increasing and periodic . the solution to this problem was given by the ehrenfests , who argued that it was overwhelmingly unlikely for the system to return to its initial state . we have proposed a similar statistical solution to the measurement problem . the occurrence of superpositions of classical objects is not impossible but just very unlikely . john bell called a solution of this kind a solution for all practical purposes and introduced the shorthand fapp for it . it is clear that he intended it as a derogatory term . he was after a more fundamental explanation . we have argued that there does not exist a more fundamental solution than the one we have proposed here . we will have to make do with a solution fapp . this view of the foundations of quantum mechanics also sheds light on the search for a quantum theory of gravity . one lesson is : do not quantize . if classicality arises as it is proposed here then the more fundamental quantum theory is not obtained by quantizing a classical limit . the search for quantum gravity should thus not start with the classical theory and try to quantize it . instead one should start with a purely quantum theory . the construction of quantum theories without a classical theory to start with has begun only recently . examples are the theories constructed by x .- g . wen and s .
lloyd . a second lesson is that certain concepts that make sense in a classical world might not have a meaning in the more fundamental quantum world . concepts like locality , position , and momentum might only be strictly applicable and meaningful in a classical context . the author would like to thank lee smolin , chris beetle , chris isham , fay dowker , fotini markopoulou , hans westman , joy christian , and lucien hardy for comments and discussions . part of the work presented in this article was carried out at the perimeter institute for theoretical physics . the author is grateful for the institute's hospitality . this work was presented at the dice 2006 conference in piombino . the author thanks thomas elze for organizing the workshop and the opportunity to speak . dreyer o 2006 emergent probabilities in quantum mechanics , preprint quant-ph/0603202 . ehrenfest p and ehrenfest t 2002 the conceptual foundations of the statistical approach in mechanics ( dover ) . bell j s 1990 phys . world 3 33 ; reprinted in bell j s 2004 speakable and unspeakable in quantum mechanics ( cambridge university press ) . wen x - g 2004 quantum field theory of many - body systems ( oxford university press ) . lloyd s 2005 the computational universe : quantum gravity from quantum computation , preprint quant-ph/0501135 . | in this article we propose a solution to the measurement problem in quantum mechanics . we point out that the measurement problem can be traced to an a priori notion of classicality in the formulation of quantum mechanics . if this notion of classicality is dropped and classicality is instead defined in purely quantum mechanical terms , the measurement problem can be avoided . we give such a definition of classicality . it identifies classicality as a property of a large quantum system . we show how the probabilistic nature of quantum mechanics is a result of this notion of classicality . we also comment on the implications of this view for the search for a quantum theory of gravity . |
neurons in the brain are connected with each other and send short electrical pulses ( action potentials or spikes ) along those connections . despite the fact that there are correlations between the type of connections and the type of neurons , it is fair to say that neurons fall essentially into two classes , excitatory and inhibitory , and that the connectivity in a local population of several thousand cortical neurons is close to random .networks with fixed random connectivity can , in principle , contain loops of varying size , which could sustain the flow of transient information signals over times that are long compared to the intrinsic time constants of the network elements , i.e. , the neurons . in neuroscience and related fields , elementary considerations on information flow in random networkshave inspired ideas as diverse as synfire chain activity , reverberations , liquid computing , echo state machines , or computing at the edge of chaos .on the other hand , random networks have also been studied intensively by the physics community , in the context of diluted spin glasses , formal neural networks or automata and limiting cases have been identified for which exact solutions are known . in particular , in the limit of asymmetric networks with low connectivity , mean - field dynamics becomes exact .more recently these approaches have been extended to the case of random networks of spiking neurons in continuous time . in this paper , we will compare simulations of a random network of excitatory and inhibitory neurons with the mean - field solutions valid in the low - connectivity limit and evaluate the performance of such networks on a simple information buffering task that can be seen as a minimal and necessary requirement for more complex computational tasks which a neural network might have to solve .more precisely , the task consists in reconstructing a time - dependent input by reading out the activity of the network at a later time .we will see that performance in the information buffering task is best at the phase transition that is marked by a rapid increase in both the _ macroscopic _ activity variable and the lyapunov exponent characterizing the _ microscopic _ state indicating transition to chaos .surprisingly , if the same time - dependent input is shared by all neurons in the network , an information readout based only on the macroscopic variable is as good as a readout that is based on the output pulses of all neurons .however , if the input is only given to a small group of neurons a detailed readout conveys more information than a macroscopic one suggesting that loops in the network connectivity might indeed play a role .we consider a network of leaky integrate - and - fire units ( neurons ) with fixed random connectivity .80 percent of the neurons are taken as excitatory and the remaining 20 percent inhibitory .independent of the network size ( ) , each neuron in our simulation receives input from excitatory and inhibitory units ( presynaptic neurons ) , which are chosen at random amongst the other neurons in the network .the ensemble of neurons that are presynaptic to neuron is denoted by and the efficacy of a connection from a presynaptic neuron to a postsynaptic neuron is if is excitatory , and if is inhibitory .each neuron is described by a linear equation combined with a threshold . 
in the subthreshold regimethe membrane potential follows the differential equation where is the effective membrane time constant and external input .the recurrent input neuron receives from the network is where is the time neuron fires its spike and is a short transmission delay . a spike from an excitatory ( inhibitory )neuron causes a jump in the membrane potential of neuron of ( ) .if the membrane potential reaches a threshold , a spike of neuron is recorded and its membrane potential is reset to .integration restarts after an absolute refractory period of .the external input can be separated into two components .first , we inject a time dependent test signal which is generated as follows : the total simulation time is broken into segments of duration . during each segment of length input is kept constant . at the transition to the next segment, a new value of is drawn from a uniform distribution over the interval ] .parameters were optimized using a first simulation ( learning set ) lasting 100 seconds ( 100000 time steps of simulation ) and were kept fixed afterwards .the performance measurements reported in this paper are then evaluated on a second simulation of 100 seconds ( test set ) .simulation results were obtained using the simulation software nest .the same time - dependent signal was injected into all neurons in the network and the performance evaluated in terms of the signal reconstruction error .the performance depends on the delay of information buffering which has to be compared with the membrane time constant ( ) and the autocorrelation of the input .overall the signal reconstruction error is relatively high .as expected , the signal reconstruction error increases if we increase the desired buffer duration from to or ( fig .[ fig - delays]a ) . as a function of the background firing rate in a network of 800 neurons for three different information buffer delays : ( solid ) , ( dashed ) , ( dotted ) . for sufficiently long delays ,optimal performance is located near the transition between the quiescent and the chaotic state ; cf fig [ fig - transition ] .deeper in the chaotic phase the error goes back to the chance level whereas in the almost quiescent regime we can see the effects of overfitting ( ) , because the number of action potentials is insufficient to build an accurate model of the past events .b. comparison of the errors for different network sizes : neurons ( resp .dotted , dashed and solid lines ) , for a delay .the solid line with filled circles corresponds to the _ macroscopic _ readout of the network of neurons .the location of minimal error is independent of the number of neurons and coincides with the phase transition .the vertical shift of the error curves is not significant but due to overfitting because of limited amount of data .vertical bars indicate the mean difference between errors on the data used for parameter optimisation ( training set ) and that on an independent test set for ( left bar ) , ( 2nd bar ) , ( 3rd bar ) and macroscopic readout ( right bar ) .a representative curve of errors on the training set for neurons is shown by the dot - dashed line . ] at the same time , the optimal background firing rate to achieve minimal signal reconstruction error shifts towards lower values and is for very close to the transition between regular and chaotic microscopic dynamics as shown in fig .[ fig - transition ] .this result is consistent with the idea of computing at the edge of chaos in cellular automata or threshold gates . 
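the membrane equation and the parameter values did not survive extraction; as a hedged illustration of the type of network simulated here (leaky integrate-and-fire units, fixed random excitatory and inhibitory connectivity, jump-and-reset dynamics with a one-step transmission delay), a minimal numpy sketch follows. all parameter values, the gaussian background-drive term and the function name are placeholders of ours, not the nest configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_lif_network(n=200, c_e=32, c_i=8, j_e=0.1, j_i=-0.5, tau=0.02,
                         theta=1.0, dt=0.001, steps=5000, mu=0.8, sigma=0.3):
    """Toy leaky integrate-and-fire network: each neuron receives input from
    c_e excitatory and c_i inhibitory presynaptic neurons chosen at random;
    a presynaptic spike makes the membrane potential jump by j_e or j_i one
    time step later; crossing theta emits a spike and resets the potential.
    mu and sigma stand in for the noisy background drive."""
    n_exc = int(0.8 * n)
    w = np.zeros((n, n))                   # w[i, j]: weight of the connection j -> i
    for i in range(n):
        w[i, rng.choice(n_exc, size=c_e, replace=False)] = j_e
        w[i, n_exc + rng.choice(n - n_exc, size=c_i, replace=False)] = j_i
    v = rng.uniform(0.0, theta, size=n)
    spikes = np.zeros(n, dtype=bool)
    raster = []
    for _ in range(steps):
        drive = mu + sigma * np.sqrt(tau / dt) * rng.normal(size=n)
        v += dt / tau * (drive - v) + w @ spikes   # leak + background + recurrent jumps
        spikes = v >= theta
        v[spikes] = 0.0                            # reset (refractoriness omitted here)
        raster.append(spikes.copy())
    return np.array(raster)

raster = simulate_lif_network()
print(raster.mean() / 0.001, "spikes per second per neuron (placeholder parameters)")
```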
also ,similar to the results in discrete - time spin networks , the information buffering performance does not significantly depend on the number of neurons in the network ; cf .[ fig - delays]b .differences are within the statistical variations caused by overfitting on finite data samples .given that networks states have been classified successfully by macroscopic mean - field equations , we wondered whether the performance in the above information buffering task can be completely understood in terms of _ macroscopic _ quantities . to answer this question, we compared the performance using the previous readout unit ( i.e. , a ` microscopic ' readout with free parameters , one per neuron plus an offset ) with that by a simplified readout with two free parameters and only , , i.e. , a readout that uses only the macroscopic population activity .surprisingly , for a stimulation paradigm where all neurons receive the same time - dependent signal the macroscopic readout performs as well as the microscopic one . in other words ,connectivity loops between _ specific _ subsets of neurons where signals could circulate for some time seem not to play a role in information buffering .this suggests that , for signals of sufficiently small amplitude , the information buffering capacity is directly related to the _ macroscopic _ linear response kernel of the network activity , that can , in principle , be calculated from the linearized mean - field equations of the population rate , i.e. , ; cf .the time constant of the kernel , and hence information buffering delays , become large in the vicinity of a phase transition .we hypothesized that signal transmission loops in our randomly connected network could manifest themselves more easily if only a small subset of neurons received the input signal .we therefore selected 20 percent of neurons at random ( group ) and injected an identical time dependent signal into all neurons .the remaining 80 percent of neurons ( group ) received no signal . in such a network consisting of two groups , signal buffering performance is indeed significantly better than in a single homogeneous group ( fig .[ fig - twogroups ] ) . on a macroscopic scale, a network consisting of two groups and can be described by two macroscopic variables , i.e. , the population activities in groups and . in order to evaluate the information contained in the macroscopic population rates, we used a linear readout unit with three free parameters , and , characterized by the differential equation and proceeded as before . as we can see from fig .[ fig - twogroups ] , a readout based on the macroscopic activity of the two groups performs significantly worse than the microscopic readout .this suggests , that for the case when only a small subset of units in a random network receive an input , signal transmission loops , and hence microscopic neuronal dynamics , indeed play a role in short - term information buffering . of the neurons only . a macroscopic readout assuming a single population ( dotted line ) performs well near the phase transition .however deeper in the chaotic phase it is outperformed by the _ microscopic _ readout ( solid line ) .a macroscopic readout based on a two - population assumption ( dashed line ) explains only part of the increased performance .the signal buffering delay for this figure is . ]mean - field methods neglect correlations in the input . 
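the readout units described above (one weight per neuron plus an offset in the microscopic case, a single weight on the population activity in the macroscopic case) can be sketched as a regularised least-squares fit of exponentially filtered spike trains to the delayed input signal. the filtering time constant, the ridge penalty and the function names below are our own simplifications of the leaky readout used in the paper.

```python
import numpy as np

def filtered_traces(raster, tau_r=0.02, dt=0.001):
    """Exponentially filter each neuron's spike train, mimicking a leaky
    readout unit driven by the spikes."""
    x = np.zeros(raster.shape, dtype=float)
    decay = np.exp(-dt / tau_r)
    for t in range(1, len(raster)):
        x[t] = decay * x[t - 1] + raster[t]
    return x

def train_readout(raster, signal, delay_steps, ridge=1e-3):
    """Least-squares weights (one per neuron plus an offset) such that the
    readout at time t reconstructs signal[t - delay_steps]; `signal` must
    contain one value per simulation step. Returns weights and RMS error."""
    x = filtered_traces(raster)[delay_steps:]
    x = np.hstack([x, np.ones((len(x), 1))])
    y = np.asarray(signal)[: len(signal) - delay_steps]
    w = np.linalg.solve(x.T @ x + ridge * np.eye(x.shape[1]), x.T @ y)
    return w, float(np.sqrt(np.mean((x @ w - y) ** 2)))

# a macroscopic readout is the same fit applied to raster.sum(axis=1, keepdims=True)
```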
in random networksmean - field theory becomes asymptotically correct only in the low - connectivity limit where the probability of closed loops tends to zero .however , it is exactly these loops which could give the network the power to buffer information for times significantly longer than the intrinsic time constants of the network elements .our network is formally not in the low - connectivity limit since the number of neurons is small .nevertheless , we found that mean - field results can qualitatively predict the rough location of the phase transition of the macroscopic population rate .moreover , if the input signal is shared between all neurons , a macroscopic readout is sufficient to explain the network performance in an information buffering task .microscopic properties do , however , play a role if the input is only given to a subset of the network units suggesting that in this case ultra - short term information buffering in connectivity loops is indeed possible . in additional simulations we checked that the maximum delay for which signal reconstruction is feasible is only in the range of 20 - 50ms and hence not significantly different from the intrinsic neuronal time constants .this suggests that , without slow processes such as synaptic plasticity or neuronal adaptation a purely random network of spiking neurons is not suitable as an information buffer beyond tens of milliseconds . | in randomly connected networks of pulse - coupled elements a time - dependent input signal can be buffered over a short time . we studied the signal buffering properties in simulated networks as a function of the networks state , characterized by both the lyapunov exponent of the microscopic dynamics and the macroscopic activity derived from mean - field theory . if all network elements receive the same signal , signal buffering over delays comparable to the intrinsic time constant of the network elements can be explained by macroscopic properties and works best at the phase transition to chaos . however , if only 20 percent of the network units receive a common time - dependent signal , signal buffering properties improve and can no longer be attributed to the macroscopic dynamics . |
collective decision making has been evidenced in many animal species and contexts including food collection , problem - solving , collective movement or nest site selection . in this later case ,social animals have to select a resting sites among severals potential options in a complex environment .this selection can be made either through individual decisions or complex decision - making processes involving the participation of all individuals and can be temporary or permanent according to the needs and living style of the considered species . while this process of collective decision has been studied for a long time in social animals that select a permanent home ( social insects for example ), only few studies address this problem in the case of nomad species that constantly explore their environment such as birds or fish .in particular , experiments on fish have been generally designed to observe preferences for particular environmental features or landmarks during a relatively short experimental time ( few seconds , 5 minutes , 10 minutes , up to 30 minutes per trial ) .these studies have shown for example that landmarks in a bare tank arouse interest and attract the fish and that variations of the shape of these landmarks can change territory features . while these studies provide numerous insights on the individual and collective preferences in fish groups , they generally rely on the comparison between two or more qualitatively different alternatives .thus , the selection of one option is often based on an intrinsic preference for a particular feature in comparison with the others .such asymmetric choices may hide the collective decision that results from the internal processes of decision - making of the group . here, our aim is to characterise the collective behaviour of groups of zebrafish swimming in an environment with identical landmarks .we observe the collective motion of small shoals of different group sizes ( 5 and 10 fish ) and of two different zebrafish strains ( wild type ab or tl ) .zebrafish are gregarious vertebrate model organisms to study the cohesion of the group and its decision making .originating from india where they live in small groups in shallow freshwaters , the zebrafish is a diurnal species that prefers staying in groups both in nature and in the laboratory . to be closer to their natural swimming habits, our experimental method relies on the observation of fish freely swimming in an open environment rather than in a constraining set - up ( i.e. maze as used in ) .we observe during one hour each group of fish swimming in a large experimental tank with two spots of interest ( landmarks ) .the landmarks consist of two striped yellow - green opaque plastic cylinders placed in the water column or two blue transparent floating perspex disks providing shadow .we choose these colours in the visible spectrum of the zebrafish according to the results of .we expect that these landmarks placed in a homogeneous environment could induce a choice of one prefered option by the zebrafish as evidenced for other species faced with identical ressources . on the one hand , we test with cylinders the effect of visual and physical cues in the water column on collective choices .since zebrafish are known to swim along the walls of experimental tanks , cylinders could act as such walls in the water column . 
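the quantities extracted from the tracking data (probability of presence, attraction to the landmarks) reduce to simple operations on the position arrays; the sketch below is a minimal version of ours, with placeholder tank dimensions and the 25 cm radius criterion used later in the text.

```python
import numpy as np

def presence_map(xy, tank=(1.0, 1.0), bins=50):
    """xy: array of shape (n_detections, 2) of tracked positions in metres
    (all fish and all frames pooled). Returns a normalised 2-D histogram,
    i.e. an estimate of the probability of presence over the tank; the tank
    dimensions are placeholders, not the experimental ones."""
    h, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins,
                             range=[[0, tank[0]], [0, tank[1]]])
    return h / h.sum()

def fraction_near(xy, centres, radius=0.25):
    """Fraction of detections within `radius` metres of any landmark centre."""
    d = np.linalg.norm(xy[:, None, :] - np.asarray(centres)[None, :, :], axis=-1)
    return float(np.mean(d.min(axis=1) <= radius))
```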
on the other hand , we test with floating disks the impact of visual and physical cues above water , on collective choices .we placed disks and cylinders landmarks to see whether and how the two strains of zebrafish will adapt their group behaviour and their preferences for landmarks .we quantified behavioural properties of groups of zebrafish from high resolution videos with the help of a tracking system and an automated analysis . following the method of , we track in real time the positions of the fish and automatically compute their probabilities of presence in the tank , their interindividual distances , the number of individual present near the landmarks as well as the duration of their visits .then , we compare the distributions obtained for each group size and landmark to highlight differences between strains of zebrafish at the collective level .this methodology based on massive data gathering has now become standard in studies on animal collective behaviour with flies , _ drosophila melanogaster _ , birds , _ sturnus vulgaris _ and fish , _ notemigonus crysoleucas _ .in the presence of two striped yellow and green cylinders ( see methods ) , the groups of 10 ab zebrafish are mainly present along the wall of the tank and around the cylinders , as shown by their average probability of presence computed for the one hour observation period and 10 replicates ( figure [ fig : all_pdfcylinders ] ) . on the contrary , groups of 5 tl , 10 tl and 5 ab zebrafishare less observed near the cylinders but are still present along the walls of the tank ( figure [ fig : all_pdfcylinders ] ) .the probabilities of presence computed for each experiment are shown in the annexe ( figure [ fig : figures1 ] , figure [ fig : figures2 ] , figure [ fig : figures3 ] and figure [ fig : figures4 ] ) .= 10 measures for 5 ab , 10 ab , 5 tl and 10 tl .* = , * * = , * * * = , ns = non significant.,scaledwidth=50.0% ] to highlight the differential effect of the cylinders on the tested groups , we measured the proportion of positions that were detected at 25 cm from the centre of the cylinders ( figure [ fig : anova_cylinders_strains_size ] ) .this measure confirms that groups of 10 ab zebrafish were more present near the cylinders than groups of 5 ab .in contrast , while groups of 5 tl responded similarly to groups of 5 ab , groups of 10 tl zebrafish were less detected near the cylinders .a two - way anova ( group size , strain , for each experimental conditions ) indicated a non - significative effect of the group size ( , f = 3.59 , df = 1 ) but a significant effect of the strain ( , f = 43.17 , df = 1 ) and a significant interaction strain / group size ( , f = 35.38 , df = 1 ) on the attractivity of the cylinders .the interaction effect indicates here a strain - specific effect of the group size on the time spent near the landmarks : groups of 10 ab are more attracted by the cylinders than groups of 5 ab but on the contrary , 10 tl are less detected near the cylinders than groups of 5 tl .then , we studied in more details the dynamics of the presence near the cylinders of the zebrafish swimming in group of 10 individuals . 
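the two-way anova just reported (factors: strain and group size; response: proportion of detections within 25 cm of the cylinders, one value per replicate, e.g. the output of a function like fraction_near above) can be reproduced with standard tooling; the sketch below assumes a long-format table with column names of our own choosing and uses statsmodels.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def two_way_anova(df: pd.DataFrame) -> pd.DataFrame:
    """df: one row per replicate with columns 'strain' (e.g. 'AB'/'TL'),
    'group_size' (5 or 10) and 'near_landmark' (proportion of detections
    within 25 cm of a cylinder centre). Returns the type-II ANOVA table
    with both main effects and the strain x group-size interaction."""
    model = ols("near_landmark ~ C(strain) * C(group_size)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)
```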
in the ab strain , the fish form a cohesive group that regularly transits from one landmark to the other at the beginning of the trial .then , the group starts to split in multiple subgroups and the periodicity of the visits becomes less regular ( figure [ fig : cylindersexploit_temporal]a ) .these oscillations are also observed for groups of the tl strain but contrarily to ab zebrafish , this phenomenon is observed for the whole experimental time ( figure [ fig : cylindersexploit_temporal]b ) . to quantify the dynamics of these transitions , we computed the number of majority events detected near one of the two landmarks or outside of them .a majority event was counted when 7 or more individuals were simultaneously present in the same zone independently of the duration of this majority event .figure [ fig : cylindersexploit_symbdyn]a shows that the median and mean number of majority event is always smaller in the tl strain , but this difference is only significant for the majorities detected near one of the cylinder ( cylinder 1 , mann - whitney u test , u = 26 , ; cylinder 2 , mann - whitney u test , u = 48 , ; outside , mann - whitney u test , u = 38 , ) .then , we characterised the transitions of the fish from one landmark to the other by analysing the succession of majority events .in particular , we counted the number of `` collective transitions '' ( two majority events nearby different cylinders separated by a majority outside ) , the number of `` one - by - one transitions '' ( succession of a majority event in one cylinder and a majority event in the other cylinder ) and finally the number of `` collective u - turns '' ( two majority events in the same cylinder separated by a majority outside ) .this reveals that the main transitions occurring for ab zebrafish are the collective ones while some collective u - turns and almost no individual transitions were detected ( figure [ fig : cylindersexploit_symbdyn]b red ) .similarly , almost no individual transition were observed for the tl zebrafish that perform mainly collective transitions ( figure [ fig : cylindersexploit_symbdyn]b green ) .tl zebrafish performed also numerous collective u - turns .the absence of individual transitions reveals that both strains are mostly swimming in groups but with different collective dynamics .we compared the results with mann - whitney u tests : `` one - by - one transitions '' ( u = 3.0 , ) and `` collective u - turns '' ( u = 27.5 , ) between ab and tl are significantly different when `` collective transitions '' between ab and tl are not ( u = 40 , ) . 
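the classification of successive majority events into collective transitions, one-by-one transitions and collective u-turns can be written as a short scan over the event sequence; this is one possible implementation of the definitions given above, not the authors' code.

```python
def classify_transitions(events):
    """events: chronological list of majority locations, each one of
    'cyl1', 'cyl2' or 'out' (a majority = at least 7 of 10 fish in one zone).
    Returns counts of the three transition types described in the text."""
    counts = {"collective": 0, "one_by_one": 0, "u_turn": 0}
    for prev, cur, nxt in zip(events, events[1:], events[2:]):
        if cur == "out" and {prev, nxt} == {"cyl1", "cyl2"}:
            counts["collective"] += 1      # cylinder -> outside -> other cylinder
        elif cur == "out" and prev == nxt and prev != "out":
            counts["u_turn"] += 1          # cylinder -> outside -> same cylinder
    for prev, cur in zip(events, events[1:]):
        if {prev, cur} == {"cyl1", "cyl2"}:
            counts["one_by_one"] += 1      # direct switch with no outside majority
    return counts

print(classify_transitions(["cyl1", "out", "cyl2", "out", "cyl2", "cyl1"]))
# {'collective': 1, 'one_by_one': 1, 'u_turn': 1}
```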
finally , we computed the interindividual distances between all fish to characterize the cohesion of the group for both strains and group sizes .the distribution of all interindividual distances ( figure [ fig : all_interdistdistricylinders ] ) shows that groups of tl zebrafish have a stronger cohesion ( 5 tl : median = 0.12 m and 10 tl : median = 0.14 m ) than the groups of ab zebrafish ( 5 ab : median = 0.27 m and 10 ab : median = 0.23 m ) .the intra - strain comparison for the two group sizes shows that the distribution significantly differs from each other ( kolmogorov - smirnov test , 5 ab vs 10 ab , d = 0.102 , ; 5 tl vs 10 tl , d = 0.080 , ) .the inter - strain comparison for similar group sizes also reveals a statistical difference between the distributions ( kolmogorov - smirnov test , 10 ab vs 10 tl , d = 0.184 , ; 5 ab vs 5 tl , d = 0.161 , ) .the distributions of the average interindividual distance measured at each time step confirms these results ( figure [ fig : all_averageinterdistdistibutioncylinders ] , kolmogorov - smirnov test , 5 ab vs 10 ab , d = 0.433 , ; 5 tl vs 10 tl , d = 0.051 , ; 10 ab vs 10 tl , d = 0.333 , ; 5 ab vs 5 tl , d = 0.464 , . in addition , the analysis of the evolution of the average interindividual distance revealed that the cohesion of the fish decreases for both strains and both group sizes through the time ( figure [ fig : figures11 ] of the annexe ) .we also placed symmetrically two floating identical perspex transparent blue disks , at 25 cm from two opposite corners along the diagonal .acting as roofs on the water , they create shades that could attract zebrafish .we did 10 trials on groups of 10 ab and 10 tl zebrafish .we observed similar behaviours than in the previous experiments with cylinders .the maximum probability of presence under disks with ab zebrafish reaches 4.5 when for tl zebrafish it reaches only 1 ( figure [ fig : all_pdfdisks ] ) .again , tl zebrafish spent the majority of their time near the borders of the tank ( the probabilities of presence for each experiment are shown in the figure [ fig : figures5 ] and figure [ fig : figures6 ] of the annexe ) .we compared the probability to be near the spots for both strains and both types of landmarks with a two - way anova ( figure [ fig : anova_cylinders_disks_strains_size ] ) .it revealed that the type of landmarks ( , f = 11.37 , df = 1 ) and the strain of zebrafish affect the attraction ( , f = 102.95 , df = 1 ) while there is also an evidence of an interaction effect between type of landmarks and strains ( , f = 5.67 , df = 1 ) .thus , while groups of 10 ab are attracted by cylinders as much as by disks , groups of 10 tl are more attracted by cylinders than disks . 
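( for completeness , the cohesion measures compared above , i.e. the pooled interindividual distances and the average interindividual distance at each time step , can be sketched as follows ; the random trajectories at the end are placeholders that only make the example run , the real distributions being computed from the tracked positions . )

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import ks_2samp

def interindividual_distances(positions):
    """all pairwise fish-fish distances pooled over frames, and the per-frame average.

    positions : (n_frames, n_fish, 2) array of x-y coordinates in metres."""
    per_frame = np.array([pdist(frame) for frame in positions])   # (n_frames, n_pairs)
    return per_frame.ravel(), per_frame.mean(axis=1)

# illustrative two-sample kolmogorov-smirnov comparison, as used above
rng = np.random.default_rng(1)
pos_ab = rng.uniform(0, 1.2, size=(3600, 10, 2))   # placeholder trajectories (1 fps, 1 h)
pos_tl = rng.uniform(0, 1.2, size=(3600, 10, 2))
d_ab, _ = interindividual_distances(pos_ab)
d_tl, _ = interindividual_distances(pos_tl)
print(ks_2samp(d_ab, d_tl))                        # D statistic and p-value
```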
eventually , tl zebrafish were also significantly more cohesive than the ab zebrafish in the presence of the floating disks , as shown by the distribution of the interindividual distances ( medians for 10 ab : 0.35 m , for 10 tl : 0.23 m , figure [ fig : all_interdistdistridisks ] , kolmogorov - smirnov test , d = 0.135 and ) ., * * = , * * * = , ns = non significant.,scaledwidth=50.0% ]we investigated whether collective motion and collective choice can differ in groups of 5 and 10 individuals of the same species ( zebrafish _ danio rerio _ ) but of different strains ( ab versus tl ) .both strains were laboratory wild types , had the same age and were raised under the same conditions .we observed zebrafish swimming for one hour in the presence of two identical landmarks ( immersed cylinders or floating disks ) that were put symmetrically in the tank .although one hour observations show that the zebrafish do not select one of the two landmarks , the fish were mainly swimming together and oscillate from one landmark to the other with short resting times .thus , while all individuals can be punctually present at the same landmark ( figure [ fig : cylindersexploit_temporal ] , figure [ fig : figures7 ] and figure [ fig : figures8 ] of the annexe ) , the probability of presence computed for the entire experimental time shows that the fish were equally present at both stimuli .therefore , no collective choice emerged on the long time for both strains of zebrafish and group size .hence , long time and short time range analyses reveal opposite results and conclusions on collective motions .our methodology is complementary to typical y - maze experiments .we extend and compare their conclusions to our observations with repeated interactions between the fish and their environment . during a hour ,the collective behaviour of zebrafish contrasts with other collective species in which spatial fidelity emerges from the interactions between the individuals that take place in the resting sites ( in cockroaches , in hymenoptera ) .these oscillations from one site to the other could originate from individual differences among group members : _ bold _ and _ shy _ behavioural profiles have been evidenced in zebrafish according to the intrinsic propensity of each fish to explore new environments .it also has been identified in other fish species . 
in this context , the presence of bolder fish in the group could favour the transition from one spot to the other while groups composed only by shy individuals could show less frequent departures .a more detailed analysis show quantitative differences among the two studied strains and group sizes .concerning the response of the fish to the landmarks , we highlight that groups of 5 ab and tl zebrafish show the same attraction for the cylinders by computing the probability for the fish to be observed near these landmarks .this attraction increases for groups of 10 ab zebrafish but decreases for groups of 10 tl zebrafish .this strain difference is also observed in the experiments with floating disks .in addition , the type of the landmarks seems to be determinant for tl zebrafish as they prefer objects immersed in the water column than objects lying on the surface of the water .hence , it exists a difference of collective behaviour between the two tested strains of zebrafish .this different response to heterogeneities may be based on the intrinsic preference of the fish of a particular strain for congeners or for landmarks .such difference has already been shown in shoaling tendency between several strains of guppies and zebrafish , . in their natural environment ,fish have to balance the costs of risks and benefits of moving in groups or staying near landmarks . moving in groups first of allprevents the fish to be static preys , and second allow spatial recognition and easier food and predators detection .the drawback is that the takeover of the fish on the territory is punctual and they have less chance to find areas where they can hide .staying around landmarks gives the fish a feeling of control of the territory and the possibility to hide from predators . in that casethe drawback is that the preys will rarely cross the territory of the fish . 
regarding the structure of the group , we notice that whatever the group size of tl zebrafish , the cohesion of the group does not change and is always stronger than those of groups of ab zebrafish .also , the bigger the group of ab zebrafish , the stronger the cohesion .these differences of group cohesion may be based on physical features differences between ab and tl zebrafish .cohesion differences could be explained by phenotype differences between the two strains .some studies demonstrated that a large variability exists in the individual motion and shoaling tendency of the zebrafish according to their age or strain .for example , adults ab and casper zebrafish swim longer distance than abstrg , ek , tu or wik zebrafish .likewise , the interindividual distance between shoal members decreases from 16 body length to 3.5 body length between day 7 and 5 months after fertilization .also , it has been demonstrated that the fin size has an impact on the swimming performance and behaviour of the zebrafish .starting from the assumptions that ab and tl zebrafish have the same visual acuity and the same lateral line performance , the only source of separation between both strains are their fin lengths and the patterns on the skin : tl zebrafish are homozygous for leo and lof , where leo is a recessive mutation causing spotting in adult zebrafish and lof is a dominant homozygous viable mutation causing long fins , .thus , ab zebrafish have short fins and tl zebrafish show long fins .it may suggest that tl zebrafish move a higher quantity of water with their long fins when swimming and thus emit a stronger signal of presence ( hydrodynamical signal ) .thus , it may be easier for conspecifics in the moving shoal to perceive the signal through their lateral line and realign themselves according to their conspecifics . if the realignment becomes easier, it is simpler for tl zebrafish to keep their position in the shoal , which increases its cohesion . following a similar hypothesis ,the signal of presence is weaker for ab zebrafish due to their shorter fins .thus , realignment in the moving shoal is less performant and their cohesion decreases .each of the two hypotheses could explain the collective behaviours observed during the experiments and nothing prevents merging both of them . 
in conclusion, this study demonstrates that behavioural differences exist at the individual and collective levels in the same species of animal .the analysis of the dynamics reveals that ab and tl zebrafish mainly oscillate in groups between landmarks .in addition , increasing the size of the group leads to opposite results for the two strains : groups of 10 ab zebrafish are proportionally more detected near the landmarks than groups of 5 ab while groups of 10 tl zebrafish are less attracted by the landmarks than groups of 5 tl .finally , the two tested zebrafish strains show differences at the structural level : ( 1 ) groups of tl zebrafish are more cohesive than groups of ab zebrafish and ( 2 ) ab zebrafish collective responses to landmarks show that they are generally more present near the cylinders and floating disks than tl zebrafish .thus , this study provides evidences that zebrafish do not select resting site on the midterm and highlights behavioural differences at the individual and collective levels among the two tested strains of zebrafish .future studies of collective behaviour should consider the tested strains , the intra - strain composition of the shoals and the duration of each trial .the experiments performed in this study were conducted under the authorisation of the buffon ethical committee ( registered to the french national ethical committee for animal experiments # 40 ) after submission to the state ethical board for animal experiments .we acquired 500 adult wild - type zebrafish ( 200 ab strain and 300 tl strain ) from institut curie ( paris ) and raised them in tanks of 60l by groups of 50 .the zebrafish ab line show a zebra skin , short tail and fin .the zebrafish tl line show a spotted skin , long tail and fin and barbel .both strains are 3.5 cm long .the zebrafish used for the experiments are adult fish between 5 months and 18 months of age . during this period ,zebrafish show a shoaling tendency allowing study of their collective behaviours .we kept fish under laboratory condition , , 500 salinity with a 9:15 day : night light cycle .the fish were reared in 55 litres tanks and were fed two times per day ( special diets services sds-400 scientic fish food ) .water ph is maintained at 7.5 and nitrites ( no ) are below 0.3 mg / l .we measured the size of the caudal fins of 10 ab ( about 0.4 cm ) and 10 tl ( about 1.1 cm ) zebrafish .the experimental tank consists in a 1.2 m x 1.2 m tank confined in a 2 m x 2 m x 2.35 m experimental area surrounded by a white sheet , in order to isolate the experiments and homogenise luminosity .the wall of the experimental tank were covered with white tape and the water column is 6 cm .water ph is maintained at 7.5 and nitrites ( no ) are below 0.3 mg / l . the experiments with disks were performed in the experimental tank while those with cylinders were performed in a white square arena ( 1 m x 1 m x 0.15 m ) placed in the experimental tank .groups of zebrafish were randomly formed at the beginning of the experiments .the experiments were recorded by a high resolution camera ( 2048 x 2048 px , basler scout aca2040 - 25gm ) placed above the experimental tank and recording at 15 fps ( frame per second ) .luminosity is ensured by 4 fluorescents lamps of 80w placed on each side of the tank , on the floor and directed towards the walls to provide indirect lightning . 
to trigger interest of fish, we placed symmetrically in the set - up either two floating disks ( = 20 cm ) or two cylinders ( = 10 cm , height = 15 cm ) surrounded by yellow and green striped tape . to avoid the presence of a blind zone ,the cylinders were slightly tilted toward the centre of the tank .the center of both disks and cylinders are at 25 cm from two opposite corners along the diagonal of the tank ( figure [ fig : set - up ] ) .we recorded the behaviour of zebrafish swimming in the experimental tank during one hour . before the trials ,the attractive landmarks are put in the setup and fish are placed with a hand net in a cylindrical arena ( 20 cm diameter ) made of plexiglas placed in the centre of our tank . following a five minutes acclimatisation period ,this arena is removed and the fish are able to freely swim in the experimental arena .we performed 10 trials for each strain with the floating disks and 10 trials for each combination of parameters ( number of fish x strain ) with the cylinders for a total of 60 experiments .each fish was never tested twice in the same experimental condition .the experiments with cylinders were recorded at 15 fps and tracked online by a custom made tracking system based on blob detection .we call a batch a group of 10 experiments . for these batchs ,each experiment consists of 540000 positions ( 10 zebrafish x 54000 frames ) and 270000 positions ( 5 zebrafish x 54000 frames ) . for experiments with disks , we faced tracking troubles .since the fish below the floating disks were difficult to distinguish by the program due to a lack of sufficient contrast , experiments with floating disk were tracked offline by two custom matlab scripts .a first script automatically identifies the positions of the fish swimming outside of the floating disks by blob detection .since this method did not allow a perfect detection of all the individuals , we developed a second script that was run after the first one and that plotted the frame where a fish ( or more ) was undetected for the user to manually identify the missing individual(s ) .it allowed us to identify the fish that were partially hidden during a collision / superposition with another fish or the fish that were situated under the floating disks . since this analysis tool is time - costly , we only analysed 1 fps for all experiments with disks . for these batchs ,each experiment consists of 36000 positions ( 10 zebrafish x 3600 positions ) .since our tracking system did not solve collision with accuracy , we did not calculate individual measures but characterised the aggregation level of the group .the probability of presence of the fish was calculated by the cumulated positions of all individuals along the entire experiment .we also calculated the distance between the fish and the attractive landmarks as well as the inter - individual distances between the fish and the average inter - individual distance .finally , we computed the time of shelter occupancy as the time that is spent by the fish at less than 25 cm of the attractive landmarks .these time sequences were calculated according to the number of fish present near the landmarks .all scripts were coded in python using scientific and statistic libraries ( numpy , pylab , scilab and matplotlib ) . 
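( as an illustration of these computations , the sketch below builds the probability of presence as a normalised two - dimensional histogram of the cumulated positions and measures the occupancy near one landmark ; the arena size , the number of bins and the 25 cm radius are illustrative parameters rather than the exact values used in our scripts . )

```python
import numpy as np
import matplotlib.pyplot as plt

def presence_map(positions, arena=1.2, bins=60):
    """probability of presence: normalised 2-d histogram of all cumulated positions.

    positions : (n_frames, n_fish, 2) array of x-y coordinates in metres."""
    xy = positions.reshape(-1, 2)
    h, xe, ye = np.histogram2d(xy[:, 0], xy[:, 1],
                               bins=bins, range=[[0, arena], [0, arena]])
    return h / h.sum(), xe, ye

def occupancy_times(positions, centre, radius=0.25, fps=15):
    """per-frame number of fish within `radius` of a landmark, and the total time
    (in seconds) during which at least one fish is present there."""
    d = np.linalg.norm(positions - np.asarray(centre)[None, None, :], axis=-1)
    n_near = (d <= radius).sum(axis=1)               # fish near the landmark, per frame
    return n_near, (n_near > 0).sum() / fps

# illustrative usage on random positions
rng = np.random.default_rng(2)
pos = rng.uniform(0, 1.2, size=(54000, 10, 2))
p, xe, ye = presence_map(pos)
n_near, t_occ = occupancy_times(pos, centre=[0.25, 0.25])
plt.imshow(p.T, origin="lower", extent=[0, 1.2, 0, 1.2])
plt.colorbar()
plt.show()
```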
to compute the number of majority events ,the number of fish was average over the 15 frames of every second .this operation garanties that a majority event is ended by the departure of a fish and not by an error of detection during one frame by the tracking system .figure [ fig : figures9 ] and figure [ fig : figures10 ] of the annexe show the proportions of the durations of the the majority events before and after this interpolation ..*means of percentages of tracking efficiencies*. [ cols="^,^",options="header " , ] for the figures [ fig : anova_cylinders_strains_size ] and [ fig : anova_cylinders_disks_strains_size ] , 10 measures of means of the probability for different groups of zebrafish to be near the landmarks are ploted .they have been tested using a two - way anova .we then compared the data between each group using a one - way anova and finally used a tukey s honest significant difference criterion post - hoc test .we did these tests on matlab and chose 0.001 as significance level . in the table [fig : efficiency ] , we show the number of majority events after interpolation of the data at 1 fps . this table is related to the figure [ fig : cylindersexploit_symbdyn ] .we used mann - whitney u tests to compare the number of events between strains , areas and transition types .these tests are performed on 10 values of majority events for each strain , area and transition type .these tests were made with the python package scipy .we chose 0.001 ( * * * ) , 0.01 ( * * ) and 0.05 ( * ) as significance levels .for the figure [ fig : all_interdistdistricylinders ] as well as the figures [ fig : all_averageinterdistdistibutioncylinders ] and [ fig : all_interdistdistridisks ] , we compared the distribution with kolmogorov - smirnov tests .these tests were made with the python package scipy .we chose 0.001 as significance level .the authors thank filippo del bene ( institut curie , paris , france ) who provided us the fish observed in the experiments reported in this paper .this work was supported by european union information and communication technologies project assisibf , fp7-ict - fet-601074 .the funders had no role in study design , data collection and analysis , decision to publish , or preparation of the manuscript .supplementary figures of `` strains differences in the collective behaviour of zebrafish ( _ danio rerio _ ) in heterogeneous environment '' .m. ballerini , n. cabibbo , r. candelier , a. cavagna , e. cisbani , i. giardina , v. lecomte , a. orlandi , g. parisi , a. procaccini , m. viale , and v. zdravkovic .interaction ruling animal collective behavior depends on topological rather than metric distance : evidence from a field study ., 105:12321237 , 2007 .a. strandburg - peshkin , c.r .twomey , n.w.f .bode , a.b .kao , y. katz , c.c ioannou , s.b .rosenthal , c.j .torney , h.s .levin , and i.d .visual sensory networks and effective information transfer in animal groups ., 23:r709 , 2013 .s. millot , m. -l .bgout and b. chatain risk - taking behaviour variation over time in sea bass _ dicentrarchus labrax _ : effects of day night alternation , fish phenotypic characteristics and selection for growth . 75:17331749 , 2009 . | recent studies show differences in individual motion and shoaling tendency between strains of the same species . here , we analyse the collective motion and the response to visual stimuli in two morphologically different strains ( tl and ab ) of zebrafish . 
for both strains , we observe 10 groups of 5 and 10 zebrafish swimming freely for one hour in a large experimental tank containing two identical attractive landmarks ( cylinders or disks ) . we track the positions of the fish with an automated tracking method and compute several metrics at the group level . first , the probability of presence shows that both strains avoid free space and are more likely to swim in the vicinity of the walls of the tank and of the attractive landmarks . second , the analysis of landmark occupancy shows that ab zebrafish are more present in their vicinity than tl zebrafish , and that both strains regularly transit from one landmark to the other with no preference emerging over the long term . finally , tl zebrafish show a higher cohesion than ab zebrafish . thus , the landmarks and the duration of the replicates allow us to reveal collective behavioural variability among different strains of zebrafish . these results highlight the need to take the individual variability of zebrafish strains into account when studying collective behaviour . |
he internet traffic data , which is usually modeled by the superposition of diverse components , is of critical importance to network management .the baseline traffic represents the most prominent traffic patterns , and is helpful to anomaly detection , capacity planning , and green computing .most traditional studies focused on baselining the single - link traffic , however , as the network - wide traffic analysis is becoming increasingly popular , baselining the network - wide traffic turns into a most urgent problem .the traffic matrix is an important kind of network - wide traffic data and has complex characteristics .the baseline of a traffic matrix should capture the common patterns among traffic flows , and be stable against the disturbance of large anomalies .principal component analysis ( pca ) was used for traffic matrix analysis in , and showed the low - rank nature of the baseline ( i.e. the deterministic ) component , but it performed poorly in the presence of contamination by large anomalies .recently , the robust principal component analysis ( rpca ) theory has attracted wide attentions .supposing a low - rank matrix is contaminated by a sparse matrix whose non - zeros entries may have large magnitudes , candes et al . presented a robust matrix decomposition method named principal component pursuit ( pcp ) , and proved that under surprising board conditions , pcp could recover the low - rank matrix with high probability . following this exact `` low - rank and sparsity '' assumption , bandara et al . proposed the robust baseline ( rbl ) method , and argued that rbl performs better than several existing traffic baseline methods . intuitively , the baseline traffic time - series of each flow should be `` smooth '' enough , i.e. it records the long - term and steady traffic trends ( such as the diurnal pattern ) and neglects the short - term fluctuations .however , to the best of our knowledge , this problem has not been adequately studied when baselining the traffic matrix .actually , the resulting baseline traffic time - series for rbl are not smooth enough , and it can be explained that the empirical traffic matrix does not exactly meet the `` low - rank and sparsity '' assumption , on the contrary , it also contains non - smooth noisy fluctuations which should be decomposed specially .furthermore , the evaluation of traffic baseline methods was not sufficient in previous studies .a key point is that obtaining the ground - truth baseline of real - world traffic data is usually impossible , consequently , one could neither measure the accuracy of a baseline method , nor compare the performances of different baseline methods . therefore, baselining network - wide traffic is still a challenging task . in this letter , we first establish a refinement of the traffic matrix decomposition model in , which extends the assumption of deterministic traffic matrix to characterize the smoothness of baseline traffic .secondly , we propose a novel baseline method for the traffic matrix data , which solves a constrained convex program named stable principal component pursuit with time - frequency constraints ( spcp - tfc ) . as an extension of the stable principal component pursuit ( spcp ) method , spcp - tfc takes distinctive time - frequency constraints for our refined model .thirdly , we design the accelerate proximal gradient ( apg ) algorithm for spcp - tfc which has a fast convergence rate . 
atlast , we evaluate our traffic baseline method by abundant simulations , and show it has a superior performance than rbl and pca .suppose is a traffic matrix , and each column ( ) is a traffic flow in time intervals .a traffic flow is the traffic of a original - destination ( od ) pair , or the traffic traversing a unidirectional backbone link . in , we proposed the simple traffic matrix decomposition model ( tmdm ) , assuming is the sum of a low - rank matrix , a sparse matrix , and a noise matrix .this model is equivalent to the data model of the generalized rpca problem , and the low - rank deterministic traffic matrix corresponds to the baseline traffic .however , tmdm gives no temporal characteristics of the baseline traffic .since each baseline traffic flow records the long - term and steady traffic trends , it should display a smooth curve .plenty of mathematical tools , such as wavelets and splines , can formulate smoothness .as the most salient baseline traffic patterns are slow oscillation behaviors , this letter formulates the baseline traffic time - series as the sum of harmonics with low frequencies , and establishes a refined traffic matrix decomposition model ( r - tmdm ) : * definition 1 * ( r - tmdm ) _ the traffic matrix is the superposition of the deterministic ( baseline ) traffic matrix , the anomaly traffic matrix , and the noise traffic matrix . is a low - rank matrix , and for each column time - series in , the fourier spectra whose frequencies exceed a critical value are zeros ; is a sparse matrix , but the non - zeros entries may have large magnitudes ; is a random noise matrix , and each column time - series is a zero - mean stationary random process . _ for simplicity , we assume each column ( ) is the white gaussian noise with variance in this letter. stable principal component pursuit ( spcp ) for the generalized rpca problem solves this convex program : where , , denote the nuclear norm , the norm , and the frobenius norm , respectively ; is the balance parameter , and is the constraint parameter . to extract the baseline traffic more powerfully , we extend spcp by preserving the objective function and redesigning the constraint functions .firstly , considering the r - tmdm model , it is necessary to add a constraint for the baseline traffic matrix based on its frequency - domain assumption .{t \times t} ] and it satisifies (i , j ) \triangleq \mathrm{sign}(x(i , j))\max\{|x(i , j)|-\epsilon,0\}.\ ] ] * input * : traffic matrix .* normalization : * .* initialization : * ; ; . ; ; . 
; .* while * not converged * do * ; ; ; {k}^{a}+y_{k}^{e}+y_{k}^{n}-x\right) ] ; // singular value decomposition .^{\top} ] ; ] , and the amplitudes of the _ sin _ functions decay quickly as increases by = .( 2 ) the non - zero entries are randomly distributed in positions of the anomaly traffic matrix , and for each flow , the volume of each anomaly is .( 3 ) for each flow , the noise traffic consists of independent gaussian variables , where , controls the noise rate .we simulate baseline traffic matrices simultaneously , and generate one anomaly traffic matrix and two noise traffic matrices for each of them ( is chosen as and , which means a low - level noise and a high - level noise , respectively ) .therefore , simulated traffic matrices are used in experiment .it can be verified that these matrices satisfy the r - tmdm model , and the critical frequency = .spcp - tfc is compared with other two network - wide traffic baseline methods : rbl and pca .we apply these methods to the simulation dataset , and evaluate them by three metrics . [ 0.5 ] = ; right group : = ., title="fig : " ] the first metric is the _normalized root mean squared error _ ( nrmse ) between the ground - truth baseline traffic matrix and its estimation : fig .[ nrmse ] illustrates the nrmses of different baseline methods under two noise levels . for spcp - tfc, we test different values of parameter .it can be observed that , spcp - tfc archives significant lower nrmses than rbl and pca , and this result is stable in a large range of values . and our traffic baseline method shows the best performance when = : under the low ( high ) noise level , the median of nrmses for spcp - tfc is as as that for rbl , and is as as that for pca .hence we only consider spcp - tfc with this fixed value in the following discussions .[ 0.43 ] = ; right panel : = .,title="fig : " ] the second metric characterizes the temporal correlation between each pair of ground - truth and resulting baseline traffic flows using their _ pearson correlation coefficient_. as a traffic matrix contains 100 flows , under each noise level , we compute = correlation coefficients for each baseline method .we display boxplots of these coefficients in fig .[ correlation ] .it is shown that the correlation coefficients for spcp - tfc are very close to , and have an obviously greater median than rbl and pca .thus the spcp - tfc method extracts the temporal characteristics of the baseline traffic more precisely .[ 0.43 ] [ 0.43 ] .a comparison between different baseline methods on the normalized mean smoothness values [ cols="<,<,<,<,<",options="header " , ] the third metric characterizes baseline traffic on smoothness , which adds up the _ total variations _ of the traffic flows in baseline traffic matrix .a small value of this metric indicates the traffic is sufficient smooth . applying this metric to the baseline results aforementioned, we display the curve of 100 smoothness values ( arranged by traffic matrix i d ) for each baseline method , and for the 100 ground - truth values , in fig .[ smoothness ] .we then compute the mean smoothness of each baseline method , and normalize it by dividing the mean smoothness of the ground - truth baseline traffic matrices . 
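( as a concrete reference for these metrics , they can be written compactly as below ; this schematic python sketch assumes a frobenius - norm normalisation for the nrmse and takes the total variation of a flow as the sum of absolute differences between consecutive time steps , and the toy baseline at the end is only there to make the example run . )

```python
import numpy as np

def nrmse(a_hat, a):
    """normalised error between estimated and ground-truth baseline traffic matrices
    (normalised here by the frobenius norm of the ground truth)."""
    return np.linalg.norm(a_hat - a) / np.linalg.norm(a)

def flow_correlations(a_hat, a):
    """pearson correlation coefficient between each pair of estimated and
    ground-truth baseline flows (columns)."""
    return np.array([np.corrcoef(a_hat[:, j], a[:, j])[0, 1]
                     for j in range(a.shape[1])])

def smoothness(a):
    """total variation summed over all flows: sum_j sum_t |a[t+1, j] - a[t, j]|."""
    return np.abs(np.diff(a, axis=0)).sum()

# illustrative usage with a toy low-frequency baseline and a noisy estimate
t = np.arange(168)                                    # e.g. one week of hourly samples
a = np.outer(np.sin(2 * np.pi * t / 24), np.ones(5)) + 2.0
a_hat = a + 0.05 * np.random.default_rng(3).standard_normal(a.shape)
print(nrmse(a_hat, a),
      flow_correlations(a_hat, a).mean(),
      smoothness(a_hat) / smoothness(a))              # normalised mean smoothness
```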
these normalized mean smoothness values( under two noise levels ) are summarized in tab .[ meansmoothness ] .the estimated baseline traffic matrices by rbl and pca are significantly more coarse than the ground - truth baseline traffic , and their mean smoothness values are larger than three times and eight times of the ground - truth value , respectively .instead , the spcp - tfc method leads to more accurate baseline estimations on smoothness .this is because the high - frequency traffic , which has larger total variation , can be successfully eliminated from the baseline by spcp - tfc . from fig .[ nrmse]-[smoothness ] and tab .[ meansmoothness ] , we can also observe that when the noise rate grows from to , the performance of our baseline method shows a decline : the nrmses raise ,the correlation coefficients drop down , and the smoothness values of resulting baseline traffic diverge from the ground - truth more evidently .this is mainly because the low - frequency component of the noise traffic , which could not be directly distinguished from the baseline traffic , is proportional to .in this letter , we propose a refined traffic matrix decomposition model , and introduce a novel baseline method for traffic matrix named spcp - tfc , which extends spcp by using new constraint functions for the baseline traffic and the noise traffic , respectively .we design an apg algorithm for spcp - tfc , whose convergence rate is .our baseline method is demonstrated by abundant simulations . using three distinct metrics to evaluate the baseline results , we show spcp - tfc has a superior performance than rbl and pca . 1 a. lakhina , k. papagiannaki , m. crovella , c. diot , e. d. kolaczyk , and n. taft . structural analysis of network traffic flows .sigmetrics perform .61 - 72 , june 2004 .v. w. bandara and a. p. jayasumana .extracting baseline patterns in internet traffic using robust principal components .ieee lcn , 2011 .z. wang , k. hu , k. xu , b. yin and x. dong .structural analysis of network traffic matrix via relaxed principal component pursuit .computer networks , vol .7 , pp . 2049 - 2067 , 2012 .h. hajji . baselining network traffic and online faults detection .ieee icc , 2003 .b. rubinstein , b. nelson , l. huang , a. joseph , s. lau , s. rao , n. taft and j. tygar .antidote : understanding and defending against poisoning of anomaly detectors .acm imc , 2009 .e. candes , x. li , y. ma , and j. wright .robust principal component analysis ?journal of the acm . vol .3 , pp . 1 - 37 , 2011 .z. zhou , x. li , j. wright , e. candes , and y. ma .stable principal component pursuit .ieee isit , 2010 .p. kokoszka and t. mikosch .the periodogram at the fourier frequencies .stochastic processes and their applications , vol .49 - 79 , 2000 .z. lin , a. ganesh , j. wright , l. wu , m. chen and y. ma .fast convex optimization algorithms for exact recovery of a corrupted low - rank matrix .ieee camsap , 2009 .a. beck and m. teboulle .a fast iterative shrinkage - thresholding algorithm for linear inverse problems .siam journal on imaging sciences , vol .183 - 202 , 2009 .p. l. 
combettes and j .- c .proximal splitting methods in signal processing .fixed - point algorithms for inverse problems in science and engineering , pp .185 - 212 .springer , new york , 2011 .proof : for any two points and in , we have (a_{1}-a_{2 } ) + ( e_{1}-e_{2 } ) + ( n_{1}-n_{2})\right\|_{f}^{2 } \\ & + 2\left\|(a_{1}-a_{2 } ) + ( e_{1}-e_{2 } ) + ( n_{1}-n_{2})\right\|_{f}^{2 } \\\leq & \big { ( } \beta\left\|w_{\mathrm{h}}(w_{\mathrm{h}})^{\top}(a_{1}-a_{2})\right\|_{f}^{2 } + \|a_{1}-a_{2}\|_{f } + \|e_{1}-e_{2}\|_{f } \\ & \ + \|n_{1}-n_{2}\|_{f } \big{)}^{2 } + \\ & 2 \big{(}\|a_{1}-a_{2}\|_{f } + \|e_{1}-e_{2}\|_{f } + \|n_{1}-n_{2}\|_{f } \big{)}^{2}\\ = & \beta^{2}\left\|w_{\mathrm{h}}(w_{\mathrm{h}})^{\top}(a_{1}-a_{2})\right\|_{f}^{2 } + \\ & 2\beta\big { ( } \left\|w_{\mathrm{h}}(w_{\mathrm{h}})^{\top}(a_{1}-a_{2})\right\|_{f}^{2}\|a_{1}-a_{2}\|_{f } + \\ & \quad \ \\left\|w_{\mathrm{h}}(w_{\mathrm{h}})^{\top}(a_{1}-a_{2})\right\|_{f}^{2}\|e_{1}-e_{2}\|_{f } + \\ & \quad \ \\left\|w_{\mathrm{h}}(w_{\mathrm{h}})^{\top}(a_{1}-a_{2})\right\|_{f}^{2}\|n_{1}-n_{2}\|_{f } \big { ) } + \\ & 3 \big{(}\|a_{1}-a_{2}\|_{f}^{2 } + \|e_{1}-e_{2}\|_{f}^{2 } + \|n_{1}-n_{2}\|_{f}^{2 } \big { ) } + \\ & 6 \big{(}\|a_{1}-a_{2}\|_{f}\|e_{1}-e_{2}\|_{f } + \|a_{1}-a_{2}\|_{f}\|n_{1}-n_{2}\|_{f}\\ & \ \ \ + \|e_{1}-e_{2}\|_{f}\|n_{1}-n_{2}\|_{f } \big { ) } \\ \leq & ( 3\beta + \beta^{2})\left\|w_{\mathrm{h}}(w_{\mathrm{h}})^{\top}(a_{1}-a_{2})\right\|_{f}^{2 } + \\ & ( 9+\beta)\big{(}\|a_{1}-a_{2}\|_{f}^{2 } + \|e_{1}-e_{2}\|_{f}^{2 } + \|n_{1}-n_{2}\|_{f}^{2 } \big{)}.\\ \end{split}\ ] ] as is a projection operator in , for each , \right\| \leq \left\|a_{1}(p)-a_{2}(p)\right\|.\ ] ] therefore , we have \right\|^{2 } \\\leq & \sum_{p=1}^{p}\left\|a_{1}(p)-a_{2}(p)\right\|^{2 } = \|a_{1}-a_{2}\|_{f}^{2}. \end{split}\ ] ] add ( [ projection_property ] ) to ( ) and let , we have this completes the proof . | in this letter , we develop a novel network - wide traffic analysis methodology , named stable principal component pursuit with time - frequency constraints ( spcp - tfc ) , for extracting the baseline of a traffic matrix . following a refined traffic matrix decomposition model , we present new time - frequency constraints to extend stable principal component pursuit ( spcp ) , and design an efficient numerical algorithm for spcp - tfc . at last , we evaluate the spcp - tfc method by abundant simulations , and show it has superior performance than other traffic baseline methods such as rbl and pca . baseline of traffic matrix , robust pca , time - frequency constraint , numerical algorithm , simulation . |
the mathematical modeling of biological networks has focused on the influence of the network structure on the functional properties of the system .insights provided by these studies have shown , for example , that structural modules and hierarchical organization in the network are often related to compartmentalization of functional processes .this is important since intracellular processes are rarely carried out by individual elements , and often involve the coordinated activity of multiple genes , proteins , and biochemical transformations . because different components may be recruited for different processes , the most fundamental aspect of the dynamics of complex intracellular networks concerns precisely the characterization of the specific parts of the network that are active under given conditions .recent research focused on the modeling of metabolic networks has shown that typical metabolic states tend to recruit a much larger number of reactions than states that maximize growth rate .this counterintuitive property is important in multiple contexts .for example , it provides a partial explanation for the apparent dispensability of a large fraction of genes in single - cell organisms , since the genes associated with reactions that become inactive in growth - maximizing states are expected not to be essential .this also explains why genetic and environmental perturbations that cause growth defect are accompanied by a burst in reaction activity .these bursts can be attributed to the transient activation of otherwise inactive reactions that are recruited by the suboptimal states that follow the perturbation , since such states tend to have a larger number of active reactions .another important implication concerns the possibility of synthetic rescues , where the inactivation of one gene can be compensated by the targeted inactivation of other genes .the inactivation of such rescue genes can thus allow the recovery of lost biological function .this is possible in part because the rescue genes correspond to genes that would be inactive in an optimal state , so that disabling them helps bring the state of the system closer to the desired optimal state .understanding the root causes of the reduced reaction activity in optimal metabolic states is then of significant interest in the characterization and study of cellular metabolism .focusing on steady - state dynamics , here we use flux - balance based analysis and linear programming techniques to establish rigorous results on the number of reactions that can be active in a given metabolic network .we derive conditions for a specific reaction to be inactive in all ( sec .[ sec3 ] ) or active in almost all ( sec .[ sec4 ] ) feasible metabolic states .we also establish bounds for the number of reactions that can be active in states that optimize an arbitrary linear function of reaction fluxes ( sec .[ sec5 ] ) , which are derived based on the duality principle in linear programming , and we study the uniqueness of the optimal solution for typical linear objective functions ( sec . [ sec6 ] ) .finally , we implement numerical simulations in reconstructed _ escherichia coli _ and human metabolic networks , both to compare with the rigorous bounds and to consider nonlinear objective functions ( sec .[ sec7 ] ) . 
taken together ,our results show that the reduced number of active reactions in the optimal states of a linear objective function , including growth rate , are mainly determined by the presence of irreversible reactions in the network .the irreversibility constraints are shown to play a role also in nonlinear objective functions , such as the aggregated flux and mass flow activity , whose optimal solutions are shown to have a number of active reactions comparable to the corresponding number for linear functions .we consider time - independent metabolic states , which serve as an appropriate representation of the state of single cells at time scales much shorter than the lifetime of the cells , as well as of the average behavior of a large population of cells at arbitrary time scales in time - invariant conditions . under this_ steady - state _ assumption , a cellular metabolic state is a solution of a homogeneous linear equation that accounts for all stoichiometric constraints , where is the stoichiometric matrix and is the vector of metabolic fluxes . the components of include the fluxes of internal and transport reactions as well as exchange fluxes , which model the transport of metabolic species across the system boundary. constraints of the form imposed on the exchange fluxes are used to limit the maximum uptake rates of substrates in the medium .additional constraints of the form arise for the reactions that are irreversible .assuming that the cell s operation is mainly limited by the availability of substrates in the medium , we impose no other constraints on the internal reaction fluxes , except for the atp maintenance flux , which is set to a fixed positive value .these additional constraints can be organized in the form the set of all flux vectors satisfying eqs .( [ eqn : mb ] ) and ( [ eqn : const ] ) defines the _ feasible solution space _ , representing the capability of the metabolic network as a system . because the number of fluxes is larger than the number of metabolic species , system ( [ eqn : mb ] ) is under - determined and is generally high dimensional .our study is formulated in the context of flux balance analysis , which is based on the maximization of a metabolic objective function within the feasible solution space ( the superscript is used to denote transpose ) .this reduces to a linear programming problem where we set if is not bounded from below and if is not bounded from above . for a given objective function, we can numerically determine an optimal flux distribution for this problem .this formulation is also appropriate for the derivation of the rigorous results presented below . 
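( as a concrete illustration of this formulation , the sketch below sets up and solves the linear program for a small toy network with scipy ; the stoichiometry , bounds and objective are invented for the example , whereas the reconstructed networks studied in this paper are handled with the cobra toolbox and cplex , as described in the methods . )

```python
import numpy as np
from scipy.optimize import linprog

# toy stoichiometric matrix: 2 metabolites (rows), 5 fluxes (columns), S v = 0
S = np.array([[1.0, -1.0,  0.0, -1.0,  0.0],
              [0.0,  1.0, -1.0,  0.0, -1.0]])

# bounds alpha_i <= v_i <= beta_i: v1 is a fixed uptake, v2, v3, v5 irreversible,
# v4 reversible (all values are illustrative)
bounds = [(1.0, 1.0), (0.0, None), (0.0, None), (None, None), (0.0, None)]

c = np.array([0.0, 0.0, 0.0, 1.0, 0.0])       # maximise c^T v = v4, i.e. minimise -c^T v

res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print(res.x, -res.fun)                        # optimal flux vector and objective value
```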
in the particular case of growth maximization, the objective vector is taken to be parallel to the biomass flux , which is modeled as an effective reaction that converts metabolic species into biomass .the first question of interest is to determine the conditions under which a reaction will be inactive for any solution of eq .( [ eqn : mb ] ) .let us define the stoichiometric coefficient vector of reaction to be the column of the stoichiometric matrix .we similarly define the stoichiometric coefficient vector of an exchange flux .if the stoichiometric vector of reaction can be written as a linear combination of the stoichiometric vector of reactions / exchange fluxes , we say that is a linear combination of .we use this linear relationship to completely characterize the set of all reactions that are always inactive due to the stoichiometric constraints , regardless of any additionally imposed constraints , such as the availability of substrates in the medium , reaction irreversibility , cell maintenance requirements , and optimum growth condition .[ thm1 ] reaction is inactive for all satisfying if and only if it is not a linear combination of the other reactions and exchange fluxes .we denote the stoichiometric coefficient vectors of reactions and exchange fluxes by .the theorem is equivalent to saying that there exists satisfying both and if and only if is a linear combination of , , . to prove the forward direction in this statement ,suppose that in a state satisfying . by writing out the components of the equation and rearranging , we get since , we can divide this equation by to see that is a linear combination of , with coefficients . to prove the backward direction , suppose that .if we choose so that for and , then for each , we have so satisfies .theorem [ thm1 ] holds true independently of other constraints because the proof does not involve eq .( [ eqn : const ] ) .in particular , the sufficient condition for inactivity applies to any nutrient medium condition and does not depend on the reversibility of the reactions under consideration . in the case of the reconstructed _e. coli _ ( human ) metabolic network considered in this study ( described in sec .[ sec7 ] ) , which includes 922 ( 3328 ) unique internal and transport reactions , a total of 141 ( 475 ) reactions are always inactive in steady states as a result of the condition in theorem [ thm1 ] .the next question of interest concerns the number of reactions that will be active with probability one in typical metabolic states . the stoichiometric constraints define the linear subspace ( the null space of ) , which contains the feasible solution space .however , the set can possibly be smaller than because of the additional constraints arising from environmental and physiochemical properties ( availability of substrates in the medium , reaction irreversibility , and cell maintenance requirements ) . 
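( the condition in theorem [ thm1 ] can be checked numerically by a rank computation : a column is a linear combination of the remaining columns if and only if deleting it leaves the rank of the stoichiometric matrix unchanged , so the always - inactive reactions are exactly those whose deletion lowers the rank . a minimal sketch , reusing the toy matrix from the previous example and an illustrative numerical tolerance , is given below ; its last line also returns the dimension of the null space , which is the quantity discussed next . )

```python
import numpy as np

def always_inactive(S, tol=1e-10):
    """indices of reactions whose stoichiometric column is *not* a linear combination
    of the remaining columns (theorem 1): removing such a column lowers rank(S)."""
    r = np.linalg.matrix_rank(S, tol=tol)
    flags = [np.linalg.matrix_rank(np.delete(S, i, axis=1), tol=tol) < r
             for i in range(S.shape[1])]
    return np.flatnonzero(flags)

S = np.array([[1.0, -1.0,  0.0, -1.0,  0.0],
              [0.0,  1.0, -1.0,  0.0, -1.0]])
print(always_inactive(S))                      # empty: every column is in the span of the others
print(S.shape[1] - np.linalg.matrix_rank(S))   # dimension of the null space of S
```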
therefore , may have smaller dimension than .if we denote the dimension of by , there exists a unique -dimensional linear submanifold of that contains , which we denote by .we can then use the lebesgue measure naturally defined on to make probabilistic statements , since we can define the probability of a subset as the lebesgue measure of normalized by the lebesgue measure of .in particular , we say that for _ almost all _ if the set has lebesgue measure zero on .an interpretation of this is that with probability one for an organism in a random state under given environmental conditions , which can be used to prove the following theorem .[ thm2 ] if for some , then for almost all .suppose that for some .the set is a linear submanifold of , so we have . if , then we have , implying that we have for any , which violates the assumption .thus , we must have , implying that has zero lebesgue measure on .since , we have , and thus also has lebesgue measure zero .therefore , we have for almost all .theorem [ thm2 ] implies that we can group the reactions and exchange fluxes into two categories : 1 ._ always inactive _ : for all , and 2 ._ almost always active _: for almost all .consequently , the number of active reactions satisfies where is the number of inactive reactions due to the stoichiometric constraints ( characterized by theorem [ thm1 ] ) and is the number of additional reactions in the category 1 above , which are due to the environmental and irreversibility conditions . combining this result with the finding that optimal states have fewer active reactions ( next section ), it follows that a typical state is non - optimal .equation ( [ eq : env ] ) will lead to a different number of active reactions for different nutrient medium conditions [ determined by eq .( [ eqn : const ] ) ] , with the general trend that this number will be larger in richer medium conditions . in the case of the _ e. coli _ ( human ) reconstructed network simulated in glucose minimal medium , as considered here ( see sec . [ sec7 ] ) , the number of inactive internal and transport reactions is 182 ( 1274 ) , of which 158 ( 563 ) are due to environmental limitations and 24 ( 711 ) are due to reaction irreversibility .the latter includes the cascading - induced inactivation of some reactions due to the inactivation of different , irreversible reactions .we now turn to the central part of our study , which concerns the number of reactions that can be active in steady states that optimize a linear function of the metabolic fluxes .the linear programming problem for finding the flux distribution maximizing a linear objective function can be written in the matrix form : where and are defined as follows . if the constraint is , the row of consists of all zeros except for the entry that is , and . if the constraint is , the row of consists of all zeros except for the entry that is , and .a constraint of the type is broken into two separate constraints and represented in and as above .the inequality between vectors is interpreted as inequalities between the corresponding components , so if the rows of are denoted by , the inequality represents the set of constraints , . 
by defining the feasible solution space the problemcan be compactly expressed as maximizing in .the duality principle expresses that any linear programming problem ( primal problem ) is associated with a complementary linear programming problem ( dual problem ) , and the solutions of the two problems are intimately related .the dual problem associated with problem is where is the dual variable .a consequence of the strong duality theorem is that the primal and dual solutions are related via a well - known optimality condition : is optimal for problem if and only if there exists such that note that each component of can be positive or zero , and we can use this information to find a set of reactions that are forced to be inactive under optimization , as follows . for any given optimal solution , eq .is equivalent to , where is the component of .thus , if for a given , we have , and we say that the constraint is _ binding _ at . in particular , if an irreversible reaction ( ) is associated with a positive dual variable ( ) , then the irreversibility constraint is binding , and the reaction is inactive ( ) at .in fact , we can say much more : we prove the following theorem stating that such a reaction is actually _ required to be inactive for all possible optimal solutions _ for a given objective function .[ thm : optimal ] suppose is a dual solution corresponding to an optimal solution of problem .then , the set of all optimal solutions of problem can be written as and hence every reaction associated with a positive dual component is binding for all optimal solutions in .let be the optimal solution associated with and let denote the right hand side of .any is an optimal solution of problem , since straightforward verification shows that it satisfies ( [ opt1]-[opt3 ] ) with the same dual solution .thus , we have .conversely , suppose that is an optimal solution of problem .then , can be shown to belong to , which we define to be the hyperplane that is orthogonal to and contains , i.e. , this , together with the fact that satisfies and , from , can be used to show that . therefore , any optimal solution must belong to . putting both directions together, we have . as an example , consider the five - reaction network shown in fig .[ fig : example](a ) where the flux is maximized .the problem can be written in the form of problem with 1 & 0 & 0 & \,\,\,\ , 0 & 0\\ -1 & 0 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & 0\\ 0 & 0 & 0 & 0 & -1 \end{pmatrix*}\!,\,\,\ , { \mathbf{b}}= \begin{pmatrix*}[r ] 1\\ -1\\ 0\\ 0\\ 0 \end{pmatrix*}\!,\,\text{and } { \mathbf{s}}=\begin{pmatrix*}[r ] 1 & -1 & 0 & -1 & 0\\ 0 & 1 & -1 & 0 & -1 \end{pmatrix*}\!. \nonumber\ ] ] note that the equality constraint is split into two inequality constraints and for convenience and corresponds to the first two rows of and .the optimal solution space consists of the single point and a possible choice of corresponding dual solution is given by for this dual solution , we have ( corresponding to the constraint ) and ( corresponding to the constraint ) , and eq .in theorem [ thm : optimal ] becomes note that the constraint can be omitted since it is satisfied by any in this example .once we solve problem numerically and obtain a _single _ pair of primal and dual solutions ( and ) , we can use the characterization of given in eq . to identify all reactions that are required to be inactive ( or active ) for any optimal solutions . 
to do thiswe solve the following auxiliary linear optimization problems for each : if the maximum and minimum of are both zero , then the corresponding reaction is required to be inactive for all . if the minimum is positive or maximum is negative , then the reaction is required to be active .otherwise , the reaction may be active or inactive , depending on the choice of an optimal solution .thus , we obtain the numbers and of internal and transport reactions that are required to be active and inactive , respectively , for all .the number of active reactions for any is then bounded as the distribution of within the bounds is singular : the upper bound in eq .is attained for almost all . to see this , we apply theorem [ thm2 ] with replaced by .this is justified since we can obtain from by simply imposing additional equality constraints .therefore , if we set aside the reactions that are required to be inactive ( including and reactions that are inactive for all ) , all the other reactions are active for almost all . consequently , we can also use theorem [ thm : optimal ] to further classify those inactive reactions caused by the optimization as due to two specific mechanisms : 1 . _ irreversibility . _the irreversibility constraint ( ) on a reaction can be binding ( ) , which directly forces the reaction to be inactive for all optimal solutions .such inactive reactions are identified by checking the positivity of dual components ( ) ._ cascading . _all other reactions that are required to be inactive for all are due to a cascade of inactivity triggered by the first mechanism , which propagates over the metabolic network via the stoichiometric constraints .these inactive reactions occur in addition to the irreversibility / cascading - induced reaction inactivation identified in sec .[ sec4 ] for typical ( in fact all ) steady states . in general , a given solution of problem can be associated with multiple dual solutions .the set and the number of positive components in can depend on the choice of a dual solution , and therefore the categorization according to these specific mechanisms is generally not unique . 
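( as an illustration , this procedure can be run on the five - reaction example of fig . [ fig : example ] , assuming , as the stated matrices and the three required - inactive reactions suggest , that the maximised flux is the reversible flux v4 : after solving the primal problem we fix the objective at its optimal value and solve the auxiliary minimisation and maximisation for every flux . the sketch below does this with scipy and is a schematic stand - in for the cplex - based implementation used for the reconstructed networks ; the thresholds are illustrative . )

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0,  0.0, -1.0,  0.0],
              [0.0,  1.0, -1.0,  0.0, -1.0]])
bounds = [(1.0, 1.0), (0.0, None), (0.0, None), (None, None), (0.0, None)]
c = np.array([0.0, 0.0, 0.0, 1.0, 0.0])                   # maximise v4

opt = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
v_star = -opt.fun                                         # optimal objective value

# auxiliary problems: min and max of each flux over the optimal solution space,
# obtained by adding the constraint c^T v = v_star
A_eq = np.vstack([S, c])
b_eq = np.r_[np.zeros(2), v_star]
for j in range(S.shape[1]):
    e = np.zeros(S.shape[1]); e[j] = 1.0
    lo = linprog(e,  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
    hi = -linprog(-e, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
    status = ("required inactive" if max(abs(lo), abs(hi)) < 1e-9 else
              "required active" if lo > 1e-9 or hi < -1e-9 else
              "optional")
    print(f"v{j + 1}: [{lo:+.3f}, {hi:+.3f}]  {status}")
```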
for the example problem of fig .[ fig : example ] , it is clear from that three reactions , , and are required to be inactive in the optimal state .the dual solution given above categorizes under irreversibility , and and under cascading .this reflects the fact that making under the stoichiometric constraint , along with the irreversibility constraints and , forces [ fig .[ fig : example](c ) ] .another possible dual solution is given by which leads to an alternative characterization of the same : categorizing and under irreversibility , and under cascading .one can indeed see graphically in the projection onto -coordinates in fig .[ fig : example](b ) that the maximization of under the stoichiometric constraint ( which follows from and ) forces the irreversible fluxes and to be zero , which in turn forces [ fig .[ fig : example](c ) ] .clearly , one can also characterize the optimal solution space as which corresponds for example to the choice of dual solution given by this leads to the categorization of all three inactive reactions under irreversibility .thus , we can interpret the non - uniqueness of the categorization as the fact that different sets of triggering inactive reactions can create the same cascading effect on the reaction activity .the results above are important both because they can be applied to any linear objective function and because a significant fraction of real metabolic reactions are irreversible . in the case of the _ e. coli _ ( human ) reconstructed network ,a total of 73.4% ( 65.6% ) of all internal and transport reactions are irreversible .moreover , a number of other reactions are effectively irreversible because the irreversibility of different reactions in the same pathway constrains them not to run in one of the two directions ; this leads to a total of 94.3% ( 74.9% ) of the reactions whose fluxes are either necessarily nonnegative or necessarily nonpositive in all steady - state solutions . in the case of growth - maximizing states for the conditions considered in our numerical experiments , out of all 922 ( 3328 ) internal and transport reactions in the reconstructed network , a total of 146 ( 106 ) reactions are inactive due to irreversibility constraints , and a total of 114 ( 293 ) other reactions are inactive due to a cascade of reaction inactivation ; some of the irreversible reactions can be assigned to either the first or the second of these two groups .the bounds provided by eq .( [ naopt - s ] ) depend on the objective function . 
in the case of growth - maximizing states ,the lower and upper bounds are 273 ( 113 ) and 339 ( 1180 ) , respectively .these numbers should be compared with the number 599 ( 1579 ) of reactions that are active in typical , suboptimal states .this clearly shows that optimal states are necessarily constrained to have a smaller number of active reactions , and that this is due to the presence of irreversible reactions in the network .another problem of interest concerns the uniqueness of the optimal solutions .while a number of necessary and/or sufficient conditions for this uniqueness are known , we are not aware of any probabilistic statements in the literature addressing this issue .since the feasible solution space is convex , its `` corners '' can be mathematically formulated as _extreme points _ , defined as points that can not be written as with , and such that .intuition from the two - dimensional case ( fig .[ fig : extreme ] ) suggests that for a typical choice of the objective vector such that problem has a solution , the solution is unique and located at an extreme point of .we prove here that this is indeed true in general , as long as the objective function is bounded on , and hence an optimal solution exists .[ thm : corner ] suppose that the set of objective vectors has positive lebesgue measure .then , for almost all in , there is a unique solution of problem , and it is located at an extreme point of . for a given ,the function is bounded on , so the solution set of problem consists of either a single point or multiple points .suppose consists of a single point and it is not an extreme point . by definition , it can be written as with , and such that .since is the only solution of problem , and must be suboptimal , and hence we have and .then , and we have a contradiction with the fact that is optimal . therefore ,if consists of a single point , it must be an extreme point of .we are left to show that the set of for which consists of multiple points has lebesgue measure zero . by theorem [ thm : optimal ] , for a given , there exists a set of indices such that , so where the union is taken over all for which contains multiple points .if is in one of the sets in the union in eq . , the set , being the set of all optimal solutions , is orthogonal to .hence , is in , the orthogonal complement of defined as the set of all vectors orthogonal to .therefore , because is convex , it contains multiple points if and only if its dimension is at least one , implying that each in the union in eq .has dimension at most , and hence has zero lebesgue measure in .since there are only a finite number of possible choices for , the right hand side of eq .is a finite union of sets of lebesgue measure zero .therefore , the left hand side also has lebesgue measure zero .note that growth rate is not a typical objective function .because this objective function has nonrandom coefficients and involves only a fraction of all metabolic fluxes , the objective vector * c * is generally perpendicular to a surface limiting the space of feasible solutions .for this reason , the growth - maximizing states are generally not unique . in the case of the _ e. coli _ reconstructed network simulated in glucose minimal medium , our numerical calculations indicate that the growth - maximizing solutions form a space that is -dimensional . in the case of the human reconstructed network ,the corresponding dimension is 494 .two questions follow from the results above . 
first , given the _ a priori _ surprising finding that metabolic activity as measured by the number of active reactions decreases in optimal states , what happens if we use other measures of metabolic activity such as total reaction flux in the network ?second , given that these results were derived for linear objective functions , to what extent does the observed reduction in the number of active reactions manifest itself in nonlinear objective functions of biological significance ? these two questions are best examined using numerical experiments .both are addressed below by considering the following objective functions : 1 ._ total flux in the network _: defined as , where runs through all internal and transport reactions ( excluding the biomass flux ) , it measures the overall metabolic activity while accounting for the differences in the fluxes of different reactions .this objective function is nonlinear because of the absolute value used to properly measure the flux of the reversible reactions .2 . _ total mass flow in the network _ : defined as , where is the mass of the reactants ( or products ) involved in reaction , it measures the overall metabolic activity weighted by the mass flow of each reaction .the sum is over the same reaction set considered in the definition of . for other nonlinear objective functions of biological significance in cellular metabolism ,we refer to ref . .figure [ fig : numerical ] shows the results of our numerical experiments for and on the _ e. coli _ reconstructed network . in both cases , the maximum ( minimum ) of the objective function for a given growth rate decreases ( increases ) as the growth rate increases , and it converges to essentially a single intermediate value for all states that maximize growth rate [ fig .[ fig : numerical](a , b ) ] .the average activity , which we determined by randomly sampling the solution space , is essentially constant ( it decreases very slowly as the growth rate increases ) .the sampling of the solution space was performed using the hit - and - run method , which is an efficient algorithm to sample high - dimensional convex regions .our implementation of this method is as described in our previous study and involves artificial centering .the observed behavior of the total flux activity and total mass flow activity should be contrasted with metabolic activity as measured in terms of the number of active reactions , which is significantly smaller at growth - maximizing states .the average number of active reactions in the -maximizing states across different growth rates is just 302 out of a total of 571 that would be active in typical states ( the latter too is an average over different growth rates , and is smaller than the number 599 anticipated in sec . [ sec5 ] because of additional constraints set to the diverging cycles throughout this section see below ) .a similar result holds true for -maximizing states ( table [ tabl ] ) .therefore , the maximizations of total flux and total mass flow in the network also lead to a reduced ( rather than increased ) number of active reactions compared to typical states [ such as those determined by the hit - and - run method in fig .[ fig : numerical](a , b ) ] .this number varies very little with growth rate and is essentially undistinguishable from the number of active reactions in growth - maximizing states ( see standard deviations in table [ tabl ] ) . while these results concern _e. 
coli _ , we note that similar trends are observed for the human metabolic network .if the irreversible reactions are made reversible [ fig .[ fig : numerical](c , d ) ] , then the number of active reactions increases . for states that minimize and , the number of active reactions jumps to a large number when of the irreversible reactions is assigned to be just slightly negative , and then decreases as the irreversibility constraints are further relaxed ( table [ tabl ] ) . for states that maximize and , the increase in the number of active reactions is by a factor of nearly ( table [ tabl ] ) .this number is comparable to the number of active reactions in typical suboptimal states of the original network .therefore , like in the case of linear objective functions , the reduced number of active reactions found in states that maximize the total flux or the total mass flow is due to the presence of irreversible reactions in the metabolic network ..number of active reactions in states maximizing or minimizing the total flux , , and the total mass flow , .the relaxation of the irreversibility constraints is implemented by allowing to be negative for all irreversible reactions , as indicated in the leftmost column .each column shows the average and standard deviation calculated over the growth rates considered in fig .[ fig : numerical ] . [ cols="<,^,^,^,^",options="header " , ] [ tabl ]all simulations presented in this paper are based on a reconstructed metabolic network of _e. coli _k-12 , which represents a further curated version of the ijr904 model in which duplicated reactions have been removed , and on the most complete reconstructed human metabolic network , generated by applying the same curation to the _ homo sapiens _ recon 1 model .the _ e. coli _ ( human ) network used consists of 922 ( 3328 ) reactions , 901 ( 1491 ) enzyme- and transport protein - coding genes , 618 ( 2766 ) metabolites , 143 ( 404 ) exchange fluxes , and the biomass flux . for the _ e. coli _network , the simulated medium had limited amount of glucose ( 10 mmol / g dw - h ) and oxygen ( 20 mmol / g dw - h ) , and unlimited amount of sodium , potassium , carbon dioxide , iron ( ii ) , protons , water , ammonia , phosphate , and sulfate ; the flux through the atp maintenance reaction was set to 7.6 mmol / g dw - h . for the human network ,we used a medium with limited amount of glucose ( 1 mmol / g dw - h ) and unlimited amount of oxygen , sodium , potassium , calcium , iron ( ii and iii ) , protons , water , ammonia , chlorine , phosphate , and sulfate ; for the biomass composition , we followed ref . . in our simulations , mmol / g dw - h was used as a flux threshold to define the set of reactions considered active .a few cycles whose flux or mass flow would diverge in the optimization of the corresponding objective function were assigned the minimum feasible flux in the optimization of and the minimum feasible mass flow in the optimization of ( under the constraint of not altering the fluxes of the other reactions ) .these minimum flux values were also adopted as bounds in our hit - and - run sampling .all numerical calculations were implemented using the cobra toolbox and the cplex optimization software .this study was supported by the national science foundation under grant dms-1057128 , the national cancer institute under grant 1u54ca143869 - 01 , and a sloan research fellowship to a.e.m .fong , a.r .joyce and b. 
.palsson , _ parallel adaptive evolution cultures of _ escherichia coli _ lead to convergent growth phenotypes with different gene expression states _ , genome . res . *15 * ( 2005 ) , 13651372 .fong , a. nanchen , b. .palsson and u. sauer , _ latent pathway activation and increased pathway capacity enable _ escherichia coli _ adaptation to loss of key metabolic enzymes _ , j. biol .* 281 * ( 2006 ) , 80248033 .t. shlomi , t. benyamini , e. gottlieb , r. sharan and e. ruppin , _ genome - scale metabolic modeling elucidates the role of proliferative adaptation in causing the warburg effect _ , plos comput . biol .* 7 * ( 2011 ) , e1002018 . | the metabolic network of a living cell involves several hundreds or thousands of interconnected biochemical reactions . previous research has shown that under realistic conditions only a fraction of these reactions is concurrently active in any given cell . this is partially determined by nutrient availability , but is also strongly dependent on the metabolic function and network structure . here , we establish rigorous bounds showing that the fraction of active reactions is smaller ( rather than larger ) in metabolic networks evolved or engineered to optimize a specific metabolic task , and we show that this is largely determined by the presence of thermodynamically irreversible reactions in the network . we also show that the inactivation of a certain number of reactions determined by irreversibility can generate a cascade of secondary reaction inactivations that propagates through the network . the mathematical results are complemented with numerical simulations of the metabolic networks of the bacterium _ escherichia coli _ and of human cells , which show , counterintuitively , that even the maximization of the total reaction flux in the network leads to a reduced number of active reactions . joo sang lee takashi nishikawa adilson e. motter |
numerical simulation of the nonlinear evolution of collisionless dark matter, usually by following a set of point masses as they move under their mutual gravitational influence, has over the last few decades played an important role in increasing our understanding of cosmological structure formation. rather than attempt to summarize the extensive literature on this subject here, we direct the reader to the recent reviews by and . a search of the internet will reveal a diverse set of n-body codes now available to be downloaded by those interested in carrying out cosmological simulations. one of the first to be developed is the direct code of . the operation count for implementing any such particle-particle code scales as , where is the number of particles. one can reach a larger particle number, and thus higher mass resolution (though with spatial resolution limited by the cell size of the grid), in the same amount of cpu time using the pm (particle mesh) method, for example the code by . the operation count here scales as , with a small prefactor; pm is implemented on regular grids, where efficient fast fourier transform (fft) algorithms are available. to attain higher, subgrid resolution one can add to a pm code particle-particle interactions and refined subgrids, such as in the hydra code . a gridless approach is used in tree codes, for example the zeno package of . the operation count in the barnes-hut tree algorithm also scales as , but the prefactor is much larger, by a factor of 10 to 50, than for a pm code. the tree code of has been made parallel . a tree code with individual particle time steps called gadget has recently been released in both serial and distributed memory parallel versions . one promising new development is the use of multigrid methods involving mesh refinements of arbitrary shape, such as in mlapm . another cell-based approach has recently been developed which shows improved performance over standard tree codes and is included as part of the nemo package . for a given choice of algorithm, a related problem is making it run efficiently on parallel computers. gravity is long-range, so to compute the force on a given particle one needs some information from every other particle, and communication is thus required if particles are distributed among many processors. xu (1995) presented a new algorithm, called tree particle mesh, based on using domain decomposition in a manner designed to overcome this difficulty. tpm uses the efficient pm method for long-range forces and a tree code for sub-grid resolution. isolated, overdense regions are each treated with a separate tree, thus ensuring coarse-grained parallelism. bode, ostriker & xu (2000), hereafter box, made several improvements in the code of xu (1995). in this paper we present further refinements which improve the accuracy and efficiency of the algorithm. these include the following: allowing individual particle time steps within trees, an improved treatment of tidal forces, new criteria for higher force resolution and for the choice of time step, and parallel treatment of large trees.
in [ sec : overview ] we present an overview of latest version of tpm , including time stepping ; [ sec : implementation ] discusses implementation of domain decomposition and the particle time step criterion , with an overview of how the code works in practice .the accuracy of the code is explored in [ sec : perfandacc ] by comparing results with two other algorithms : p m and the gadget tree code .concluding remarks and details of the public release of the tpm code are given in [ sec : discussion ] . discuss the distinction between verifying a simulation ( that is , being sure that the equations are being solved correctly ) and validating it ( having confidence that the equations and their solution actually resemble a real world problem ) .obviously , in this paper only the former is done .various aspects of the validity of cosmological simulations are addressed in , , , , , , , and references therein .when using an n - body code , the simulator will need to be cognizant of these issues .tpm begins with a standard pm code ( in fact , if there are no particles selected to be in trees , then tpm defaults to a pm code ) ; this portion is similar to the pm code described by .the density on a regular grid is found by interpolating particle positions using the cloud in cell ( cic ) method , and poisson s equation is solved on this grid using fast fourier transforms . as noted in the introduction ,the fft technique is highly efficient and scales as . in order to obtain subgrid resolution a tree codeone possible manner of doing this is to compute shorter range forces for every particle with a tree code , in a manner analogous to the p m algorithm .instead , tpm uses domain decomposition , taking advantage of the fact that potentials due to matter outside and inside a given domain can be added linearly , and it uses different solvers to find these two potentials .this section begins with a very broad description of the tpm algorithm , and then considers various aspects in more detail .a basic outline of tpm can be summarized as follows : 1 .identify tree regions and the particles in each such region ; particles outside of tree regions are evolved with pm only .push pm particles to midstep 3 .find the tidal potential in each tree region the potential due to all mass outside of that region .4 . integrate each tree region forward to the middle of the pm step .5 . 
find the pm potential and update the acceleration for pm particles .integrate each tree region forward to the end of the pm step .update pm particle velocities and positions to the end of the time step .the motivation behind this algorithm is that the motions of particles within a given tree region can be integrated independently using high temporal and spatial resolution : knowledge of the rest of the simulation is not required , except for the time averaged , externally produced tidal forces .the stepping in time of positions and velocities is accomplished with a standard second order leapfrog integration in comoving coordinates , the cosmological model determining the scale factor : [ eqn : leapfrog ] here , , and the gravitational potential are determined at time .one advantage of the tpm approach is that it allows the use of multiple time steps .tree particles are required to take at least two steps per pm step , but each particle has an individual time step so that finer time resolution can be used if required .particles are arranged in an hierarchy of time bins differing by a factor of two , in the manner of : where is the pm time step and the integer . in a sense tpm operates along the same lines as a tree code , except that the particles in the time step bin are handled differently from the rest .a diagrammatic representation of the time stepping is shown in fig[fig : tstep ] for the case when all tree particles are in the longest time step bin , .beginning at time , the pm particle positions are moved forward to midstep ( eq . [ [ eqn : leapfrog]a ] with ) .the tidal potential is then found , and the tree particles are evolved forward one full step with .the pm potential is updated ( eq .[ [ eqn : leapfrog]b ] ) , and for a second time the tree particles are evolved for one full step , to time .finally , the pm particle velocities and positions are updated to the end of the step ( eq .[ [ eqn : leapfrog]c - d ] ) . to repeat the tpm algorithm in more detail : [ [ section ] ] 1 .+ + identify tree regions and the particles in each region .this is done by identifying pm cells above a given density threshold , described in [ sec : domdec ] .adjoining cells are then grouped into isolated tree regions , as described in box .[ [ section-1 ] ] 2 .+ + push pm particles to midstep ( eq . [ [ eqn : leapfrog]a ] ) .[ [ section-2 ] ] 3 .+ + find the tidal potential for each tree region .first the total potential is computed on the grid in the standard pm manner .then for each of the tree regions , a small portion of the grid containing the pm potential is saved , and the contribution to this potential made by the tree itself is subtracted out , leaving the tidal potential due to all external mass .see box for details .since can only be calculated once per pm step , it will be calculated at midstep .pm particles are thus already in the proper location for this , but tree particles , with positions at the beginning of the pm time step , are not . 
however , since tree regions are spatially separated and the halo density profiles are evolving on slower timescales than any individual particle s orbital period , an approximate position is sufficiently accurate .( note that since a given tree s own contribution to is exactly subtracted back out , only the effect on other trees needs to be considered ) .thus , an approximation of the tree particles positions at the middle of the pm step is taken to be the position one full particle time step ahead ; since tree particle time steps are half the pm time step or less , this is no more than the pm midstep . with these advanced positions , the potential on the gridis found in the standard pm manner . for each treethe pm potential in a cubical subvolume is saved .this volume is slightly larger than the active cell region ( in order to allow for finding the gradient of the potential through finite differencing ) plus one extra cell on a side in case particles migrate out of the active cell region during the time integration .since the pm time step is limited by a courant condition ( see [ sec : timestep ] ) such that pm particles can not move more than a fraction of a cell per step , one extra cell is sufficient particles near the edge of a tree region are at only a slightly higher overdensity than the densest pm regions , and hence have similar velocities . at the beginning of a tree step the portion of the pm potential due to the tree particles themselves is subtracted out , leaving the tidal potential . with saved in this manner , every time a particle s acceleration is updated , the tidal force is calculated from the grid in the same manner that pm forces are found .this is an improvement over the method used in box .[ [ section-3 ] ] 4 .+ + integrate each tree region forward to the middle of the pm step .this is done with a tree code , adding in the tidal forces .the tree code we use was written by lars hernquist .any other potential solver could be used , but a tree code is well suited for this type of problem , in that it can efficiently handle a wide variety of particle distributions , can include individual particle time steps , and scales as .note that for each tree this portion of the code is self contained ; that is , once the particle data and potential mesh for a given tree have been collected together no further information is required to evolve that tree forward .this makes tpm well suited for parallel processing on distributed memory systems .once the tree data is received by a given processor this step which is the most computationally expensive part of the algorithm can be performed without any further communication . given this coarse grained parallelism ,one can use widely distributed and heterogeneously configured processors , with more capable processors ( or groups of processors ) reserved for the largest trees .[ [ section-4 ] ] 5 .+ + find the density and potential on the pm grid again , and update the acceleration for pm particles ( eq . [ [ eqn : leapfrog]b ] ) .all tree particles are at the pm midstep , the proper location for computing the force on pm particles .[ [ section-5 ] ] 6 .+ + integrate each tree region forward to the end of the pm step .[ [ section-6 ] ] 7. 
+ + update pm particle velocities and positions to the end of the time step ( eqs .[ [ eqn : leapfrog]c - d ] ) .tpm exploits the fact that gravitational instability in an expanding universe creates spatially isolated high density peaks ; for typical cdm models these peaks will contain a significant fraction of the mass ( ) but occupy a tiny fraction of the volume ( ; see for example fig of ) . as described in detail in box , this is accomplished by tagging active mesh cells based on the cell density ( which has already been calculated in order to perform the pm step ) .adjacent cells are linked together , dividing the cells into groups in a friends of friends manner .thus isolated high density regions of space are identified , separated by at least one cell width . in box ,the criterion used to decide if a cell was active was if its density exceeded a global threshold density where is the mean density and is the dispersion of cell densities . while temporally adaptive ( in the sense that the threshold is lower at early times when peaks are rare ) , this criterion has two main drawbacks .first , small peaks in isolated regions or voids will not be picked up , especially when is large , because the total mass in such a halo would not put a pm cell above .secondly , in the case of a region with peak density near , whether or not a given pm cell is above can depend on the offset of the grid with respect to the particles the halo mass may be divided into two or more cells .we have developed an improved criterion for selecting tree regions involving the contrast of a cell with its surroundings ( rather than a single global threshold value ) found by comparing the density smoothed on two scales .for each cell three densities are computed .first , found by boxcar smoothing over a length of 5 cells along each dimension .( another type of smoothing , _ e.g. _ with a gaussian , would be possible , but the boxcar can be done with minimal interprocessor communication ) .second , , the mean density in the eight cell cube of which the cell under consideration is the lower octant ( when using cic , a particle in this cell would also assign mass to the other cells in this cube ) .third , , a measure of the density surrounding this cube , defined as all eight cells used to find are marked as active if the reasons for such a choice of form can be seen by examining the behavior of eq .[ [ eqn : rhocrit ] ] in various limits . at early times in a cosmological simulation and , so .thus with a choice of those regions which are only slightly overdense will be selected . at late times . in void regions , which means ; thus an isolated cell will be chosen if it is a fixed multiple above its surroundings .if on the other hand , then , meaning that at each step there is a given density above which all cells will be chosen ; this limit will increase with time as increases .keep in mind that the distribution of cell densities will be approximately lognormal ( and references therein ) when interpreting the value of . after cell selection ,the domain decomposition proceeds in the manner described in box .cells are linked by friends of friends , yielding regions of space separated by at least one cell length . any particle which contributes some mass to an active cell when finding the pm densityis assigned to the corresponding tree . 
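to make the selection step concrete, here is a minimal python sketch of the two-scale criterion and the cell linking described above. the exact form of eq. (rhocrit) is not reproduced; marking a cell active when its small-scale density exceeds a factor a times the larger of its boxcar-smoothed surroundings and the cosmic mean is an assumption standing in for it, and the sketch checks only each particle's host cell rather than all eight cells touched by the cic assignment.

import numpy as np
from scipy import ndimage

def select_tree_regions(rho_cells, A=1.5):
    """rho_cells: cic cell densities in units of the mean (periodic box).
    returns a grid of integer labels for isolated active regions (0 = pm only)."""
    # density smoothed with a 5-cell boxcar along each dimension
    rho_smooth = ndimage.uniform_filter(rho_cells, size=5, mode="wrap")
    # small-scale density: mean over a 2x2x2 cube of cells
    rho_cube = ndimage.uniform_filter(rho_cells, size=2, mode="wrap")
    active = rho_cube > A * np.maximum(rho_smooth, 1.0)
    # link adjacent active cells into isolated regions (friends of friends on
    # cells); periodic wrapping across the box edge is omitted in this sketch
    labels, n_regions = ndimage.label(active)
    return labels, n_regions

def assign_particles(labels, pos, boxsize):
    """map each particle to the label of its host cell; 0 means pm only."""
    nx = labels.shape[0]
    cells = np.floor(pos / boxsize * nx).astype(int) % nx
    return labels[cells[:, 0], cells[:, 1], cells[:, 2]]

in the actual code a particle contributing any mass to an active cell is placed in the corresponding tree, and the threshold is adjusted on the fly as described next.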
because the volume of all tree regions is less than one percent of the total volume , the amount of memory allocated to hold the tidal potential data is generally not significant compared to that already needed for the pm mesh . however, one complication of this scheme is that at early times , when is small , the resulting trees can be very long filaments . while not containing a large amount of matter , being not very overdense and nearly one dimensional, the size of a cubical volume enclosing such a tree will sometimes be a significant fraction of the entire simulation volume .this can cause difficulties because of the amount of computer memory required to compute and save the tidal potential in such subvolumes .the value of in eq .[ [ eqn : rhocrit ] ] is thus allowed to increase at early times when the spatial extent of trees tends to become larger , and decrease at later times when trees are more compact .we have settled on the following method , which has worked well in a variety of simulations .space is allocated for a maximum subvolume length one quarter of the pm mesh size .the size of the largest sub - box used is checked at the end of each pm step . at early times ( when ) is increased by 0.5% if this box has a length more than a third of the allocated value .once exceeds 1.6 in a typical simulation , the spatial extent of trees has stopped growing , so in this case is decreased by 0.25% if the largest sub - box has a length less than half the maximum allocated .a minimum value is set for to prevent too many tree particles being chosen ; a choice of will place roughly half of the particles in trees at .an example of this in practice is discussed in [ sec : overv ] .the number of pm grid cells can be more or less than the number of particles , though of course the finer the grid , the greater the accuracy of the pm and tidal forces . generally ,the number of cells should at least equal the number of particles ; we have found that eight grid cells per particle works well .a variety of methods for determining the particle time step have been proposed .this diversity of criteria reflects the fact that the appropriate time step can depend on a number of quite different considerations .these include the local density or the local dynamical time , nearby substructure , the nearest neighbor , and the softening length .in addition , for any single criterion , one can imagine a special circumstance where it breaks down .this suggests the possibility of basing on more than one criterion .thus in the tpm code we have decided to combine a number of possible criteria , drawn from variables which do not require any significant computation to find , in the following manner : [ eqn : deltat ] here is an adjustable dimensionless parameter , is the spline kernel softening length , and is the distance to the nearest neighboring particle .the latter is found as a byproduct of the force calculation ; if at the beginning of a tree step has not been calculated , one can as a proxy find the minimum that is consistent with the particle s current .the time derivative of is approximated by .this criterion for is quite conservative , and is similar in principle to that adopted by . 
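for illustration, the following sketch combines several of the quantities named above into an individual particle time step and then quantizes it onto the power-of-two hierarchy of [ sec : overview ]; the particular combination, the guard values, and the cap on the bin depth are assumptions, since the exact form of eq. (deltat) is not reproduced in the text here.

import numpy as np

def particle_timestep(v, a, r_nn, eps, dt_pm, eta=0.3, n_max=20):
    """v, a: particle speed and acceleration magnitudes; r_nn: distance to the
    nearest neighbour; eps: spline softening length; dt_pm: the pm time step."""
    tiny = 1e-30
    candidates = [0.5 * dt_pm,                        # at least two steps per pm step
                  eta * np.sqrt(eps / max(a, tiny))]  # softening/acceleration limit
    if v > 0.0:
        candidates.append(eta * max(r_nn, eps) / v)   # do not overshoot the nearest neighbour
        candidates.append(eta * v / max(a, tiny))     # limit the fractional change in speed
    dt = min(candidates)
    # quantize onto the hierarchy dt_pm / 2**n used for the time bins
    n = int(np.clip(np.ceil(np.log2(dt_pm / dt)), 1, n_max))
    return dt_pm / 2**n

the returned step would then be used in the kick-drift-kick updates of eq. (leapfrog), with particles moving between bins as their local conditions change.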
a comparison of the time step returned by eq .[ [ eqn : deltat ] ] and the often used criterion is given in fig[fig : dtcomp ] , which shows how these two criteria compare at for particles in the largest tree of the simulation discussed in [ sec : overv ] and [ sec : codecomp ] .this tree is made up of over particles , and contains two large halos plus a number of smaller satellites and infalling matter . each contour level in fig[fig : dtcomp ] encloses a tenth of the particles , and the remaining tenth are plotted as points .it can be seen that eq .[ [ eqn : deltat ] ] tends to yield a smaller timestep than ; this is the case for 74% of the particles .generally the differences are not large less than a factor of two for 60% of the particles .another trend seen in the figure is that as becomes smaller it is more likely for eq . [[ eqn : deltat ] ] to give an even smaller value , and conversely eq .[ [ eqn : deltat ] ] tends to give a longer time step than as the latter becomes larger . in most cases the value of set by the factor .in other words , the distance to the nearest neighbor is not allowed to change by a large factor , unless this distance is small compared to the softening length . even in caseswhen this factor is larger than , it is still smaller than and ; since the equations being integrated have the form and this limits integration errors .the pm time step is limited by a courant condition , such that no pm particle moves more than one quarter of a cell size per step .also , is not allowed to change by more than 1% per step .initially this latter criterion is the most restrictive , but over time it allows a longer and becomes unimportant .the allowed by the courant condition also tends to increase over time , although more slowly .this is because particles falling into dense regions are placed into trees , and the velocities of the remaining particles are redshifted ( eq . [ [ eqn : leapfrog]c ] ) .the largest time step for any particle , , is kept constant until it can safely be increased by a factor of 2 , and it is then doubled .the time step for tree particles is unchanged , which means they take twice as many steps per pm step as before ; during tree evolution particles will move into a lower time step bin if allowed by eq . [ [ eqn : deltat ] ] . whenever any particle changes time step , its position is updated as described in to preserve second order accuracy .various aspects of a typical tpm run can be demonstrated with a standard cosmological simulation .this test case contains =128 particles in a box =40 on a side .the number of grid points for the pm and domain decomposition portions of the code is eight times the number of particles , or .the initial conditions were generated using the publicly available codes grafic1 and lingers .a spatially flat lcdm model was chosen , with cosmological parameters close to the concordance model of : , and .the particle mass is thus . within the tree portion of the code , the opening parameter in the standard barnes hut algorithm is set to , and the time step parameter =0.3 ( see eq . 
[ [ eqn : deltat ] ] ) .the cubic spline softening length was chosen to be 3.2 so that the spatial dynamic range .with such a small softening length , halos with fewer than 150 particles are likely to undergo some 2-body relaxation in their cores over a hubble time ( assuming the particle distribution follows an nfw profile with concentration =12 ) .fig[fig : overv ] displays the evolution of several important quantities during this run as a function of expansion parameter .the top panel shows the dispersion of pm cell densities ( where the mean cell density is unity ) , and the second panel shows the value of ; both of these factor in eq . [ [ eqn : rhocrit ] ] , which in turn plays a major part in determining the tree distribution .various aspects of this distribution are shown in the remaining curves .the third panel shows the size of the largest cubical subvolume required ( in units of pm grid cells ) .initially small , this grows extremely rapidly as rises from its initial value of 0.3 to ; in response is increased .when this size is at its greatest ( at ) , the percentages of total volume and mass in trees ( shown in the next two panels ) are still quite small .the trees at this time tend to follow caustics they are only slightly overdense and not very massive , but because of their filamentary nature they can have a large spatial extent .these caustics then fragment and collapse , so even while the total mass in trees and the number of trees ( shown in the third panel from the bottom ) increase the maximum subvolume size decreases .the changes in affect mainly the maximum subvolume size ; the number of trees and volume contained in tree regions both increase at a steady rate up to =1 .the number of tree particles increases monotonically throughout the simulation .after ( by far the bulk of the computational time ) , the characteristics of the simulation change much more slowly .roughly half the particles are in trees ranging from a few to particles but occupying only 0.4% of the simulation volume .the penultimate panel shows the number of particles in the first and third largest trees ; by the growth in mass of these objects has slowed to a fairly constant , low rate .trees are distributed in mass roughly as a power law , with 67% of the trees having fewer than 100 particles and 95% less than 1000 .most tree particles take two or four steps per pm step , but some are taking up to 64 ; there were 1075 pm steps total in this run . as increases ,so does the limiting density above which all cells are treated at full resolution .it is important to keep track of this limit when interpreting the results of a tpm run .the bottom panel shows the highest value of the density among those cells evolved only with pm . by density is 120 times the mean , corresponding to 15 particles inside a cell . by the end of the computation , the densest pm only cell contains 26 particles ; so trying to make statements about objects smaller than this will be complicated by the varying spatial resolution of tpm .note that most objects with will in fact be followed at full resolution substructures inside larger objects because they are in regions of higher density , and small isolated halos because they present a density contrast to their surroundings . to be cautiouswe will limit our analysis to objects 50% larger than this limit , _ i.e. 
_ 40 particles or .as seen in the previous section , during later epochs which take up most of the computational time needed for a run the mass distribution of trees is generally well fit by a power law ranging from a few particles up to of order 0.01 .the actual mass of the largest tree will depend on the ratio of the largest non - linear scale in the box to the box size ; as this ratio becomes larger so does the mass of the largest tree .furthermore , the amount of computation required for a tree with particles will scale as .thus , there can be a substantial variation in the computational time required between different trees , and evolving the largest tree can comprise a significant fraction of the total computational load .an efficient parallel code must handle this situation well when dividing work up among ncpu processors .load balancing of trees is achieved in three ways .first , trees are sorted by and then divided into `` copses '' of roughly equal amounts of work using the `` greedy '' algorithm .that is , starting with the largest tree , each one in turn is given to the copse which up to that point has been assigned the least amount of total work .usually there is one copse per cpu , but there can be two or more per cpu if required by space constraints .there is a communication step , when all data associated with a copse is sent to one cpu .a cpu then evolves each tree in its local copse in turn , starting with the most massive .additionally , the largest trees can be done in parallel .if a few large trees dominate the work load , then it is impossible for all copses to contain equal amounts of work ideally , one would want ncpu copses , each comprising 1/ncpu of the total amount of work , but this clearly ca nt happen if the largest tree takes a larger fraction just by itself . in this case , the force calculation for these large trees is done in parallel by a small number of cpus .this is currently done quite crudely , with only the tree walk and force calculation actually carried out in parallel ; in theory a fully parallel tree code could be used for every tree , allocating more processors to those copses containing the most work . only a few nodes are usually required to reduce the time spent on the largest tree to the level required for load balancing . as a final means of balancing the load ,when a cpu finishes evolving all the trees in its local copse , it then sends a signal to the other cpus .a cpu with work still left will send an unevolved tree to the idle cpu for it to carry out the evolution .the scaling of the current implementation of tpm with number of processors is shown in fig[fig : scale ] .the test case for these timing runs is a standard lcdm model at redshift =0.16 with =256 particles in a 320 cube .this particular set of runs was carried out on an ia-64 linux cluster at ncsa named `` titan '' .this machine consists of 128 nodes , each with dual intel 800mhz itanium processors , and a myrinet network interconnect . while the total time required depends on processor performance , similar scaling with ncpu has been found on a number of machines with various types of processors and interconnects .the topmost line in fig[fig : scale ] is total time , calculated as the number of seconds wallclock time per pm step multiplied by ncpu ; perfect scaling would be a horizontal line .tpm performs quite well at ncpu=64 the efficiency is still 91% as compared to ncpu=4 . 
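as a concrete illustration of the "greedy" copse assignment described earlier in this section, the sketch below sorts trees by an estimated cost (taken here to scale as n log n, an assumption) and hands each in turn to the least-loaded copse; the work-stealing step and the parallel treatment of the largest trees are not sketched.

import heapq
import math

def assign_copses(tree_sizes, ncpu):
    """return a list of ncpu copses, each a list of tree indices."""
    cost = lambda n: n * math.log(max(n, 2))
    order = sorted(range(len(tree_sizes)), key=lambda i: cost(tree_sizes[i]),
                   reverse=True)
    heap = [(0.0, c) for c in range(ncpu)]   # (accumulated work, copse index)
    heapq.heapify(heap)
    copses = [[] for _ in range(ncpu)]
    for i in order:
        work, c = heapq.heappop(heap)
        copses[c].append(i)
        heapq.heappush(heap, (work + cost(tree_sizes[i]), c))
    return copses

when a few large trees dominate the cost, even this assignment cannot keep all processors equally busy, which is one reason the scaling discussed here eventually degrades.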
beyond this point scaling begins to degrade , with the efficiency dropping to 75% for ncpu=128 .the reason tpm scales well can be seen in the second line from the top , which shows the total time spent in tree evolution .this part of the code takes up most of the cpu time , but it requires no communication and there are enough trees so that just coarse grained parallelization works reasonably well .the next two curves shown indicate the amount of time in the pm portion of the code and overhead related to trees ( identifying tree regions and particles , etc . ) . these take a small fraction of the total time and scale well since the grid is distributed across all processors .the final curve in fig[fig : scale ] shows the time related to communication and imbalance in the tree part of the code .as ncpu increases it becomes more difficult to divide the tree work evenly and processors spend more time either recruiting work from others or waiting for them to finish .this overhead could likely be reduced by incorporating a fully parallel tree code and also by computing the nonperiodic fft ( required to find the tidal potential see box ) in parallel .a run with a larger number of particles and grid points , and thus a larger number of trees , would show efficient scaling beyond ncpu=128 .as a test of new code we have carried out a simulation of the secondary infall and accretion onto an initially uniform overdensity . this can be compared both to other codes and to the analytic , self - similar solution of . to create the initial conditions , particles were placed on a uniform grid with zero velocity .the eight particles on the grid corners were then removed and placed at the center of the volume with a spacing one half that of the regular grid .thus , this is actually more of a cubic overdensity than spherical , but it quickly collapses and the subsequent infall is independent of the details of the initial state . also , there is a void located half the box distance away from the overdensity , but the evolution is not carried out for a long enough time for this to be significant .the initial condition is integrated from expansion factor to .the values and were used for the constants in eq .[ [ eqn : rhocrit ] ] . sinceonly one halo is forming in the box , remains small , rising to only 0.5 by the end of the run . at the end , there are roughly 500 particles within the turnaround radius .this test can be seen as exploring how well the initial collapse of small objects is followed , which is the beginning stage of halo formation in an hierarchical scenario .the final state of this run is shown in fig[fig : sphod ] .the top panel shows density as a function of radius .filled points are the tpm model ; error bars are the square root of the number of particles in a given radial bin .the inner edge of the innermost bin is twice the softening length .also shown is the same run carried out with a p m code ( as open circles ; details of this code are discussed in [ sec : codecomp ] ) , and the solution of .the agreement is quite good the main limitation of this run is likely the mass resolution . as a measure of the phase space density, the lower panel shows , where is the velocity dispersion of the particles in each radial bin .following , we compare this to the solution for a gas . 
againthe agreement is quite reasonable .differences between tpm and p m arise from different types of softening ( p m uses plummer softening ; here was set to half the tpm value ) and from time stepping ( for p m , for all particles was set by the minimum ) . as a test of tpm in a less idealized situation, the initial conditions described in [ sec : overv ] were again evolved , but using two other n - body codes .the first is the p m code of in a version made parallel by this code was also used in [ sec : sphodtest ] . as in the tpm run ,a mesh of grid points was used .this code uses plummer softening ; the softening length was set to half the value used in the tpm run .the time variable in this code is , defined by ; all particles have the same time step , set by the minimum .the other n - body code used is the tree code named gadget of . in this code ,periodic boundary conditions are handled with ewald summation , the time variable is the expansion factor , and each particle has an individual time step . following , the conservative tree node opening criterion flagged by -dbmax was used , and the time step was set by .in addition , the maximum allowed time step was set to 1% of the initial expansion factor ; whenever doubled this maximum was also increased by a factor of two , by checkpointing the run and restarting with the new value .the softening length was set to be the same as the tpm run ( allowing for different notational conventions ) . as a first comparison of the codes, the two point correlation function , found by counting the number of particle pairs in bins of separation , was calculated . to compute ,51 logarithmically spaced bins were used , with the minimum pair separation considered being and the maximum one third of the box size .the results at various redshifts are shown for all three runs in fig[fig : cfcmp ] .clearly there is little difference to be seen in the three codes . at larger scalesthis is to be expected ; at smaller scales force approximations , smoothing , and relaxation may become important .the fact that p m uses plummer rather than spline softening explains why it shows a lower at scales less than a few times the softening length .if high values of provide a measure of accuracy , the tpm and gadget are slightly more accurate than p m .the statistics of mass peaks the dark halos surrounding galaxies , clusters , and so on are an important product of n - body codes .one commonly used halo finder is friends of friends , or fof .the cumulative mass function , found with fof using a linking length of 0.2 times the mean interparticle separation , is shown in fig[fig : fofcmp ] . at higher redshiftsthere is little discernible difference between the three codes . at later times, tpm seems to have fewer halos containing particles .it is unclear how to interpret this , because halos with less than 150 particles ( i.e. mass ) are affected by two - body relaxation . found that an adaptive p m and gadget both produced more small fof halos than the adaptive multigrid mlapm and art codes , so in this case tpm may agree more closely with the latter . to investigatefurther a different halo finder was used , namely the bound density maxima ( bdm ) algorithm .this algorithm is significantly different from fof , so the resulting mass function is probing different qualities of the dark matter distribution. 
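for concreteness, here is a minimal sketch of friends-of-friends grouping with a linking length of 0.2 times the mean interparticle separation, as used for the mass functions above; the bound density maxima procedure described next instead grows halos around density peaks and removes unbound particles. the use of scipy's periodic kd-tree and the union-find bookkeeping are implementation choices made for this sketch, not part of the codes being compared.

import numpy as np
from scipy.spatial import cKDTree

def fof_groups(pos, boxsize, b=0.2):
    """pos: (n, 3) comoving positions in [0, boxsize); returns group ids."""
    n = len(pos)
    link = b * boxsize / n ** (1.0 / 3.0)       # 0.2 of the mean interparticle spacing
    tree = cKDTree(pos, boxsize=boxsize)        # periodic boundaries
    pairs = tree.query_pairs(link, output_type="ndarray")
    parent = np.arange(n)                       # union-find over linked pairs
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
    return np.array([find(i) for i in range(n)])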
density maxima inside spheres of radius 100 were found , and halo centers were then calculated using spheres of radius 40 .the mass for each halo is taken to be that inside a sphere containing the overdensity expected for a virialized halo just collapsed from a spherical top hat ; to calculate this overdensity the fit of was used . in bdm ( unlike fof )particles moving faster than the escape velocity are removed from the halo .the bdm cumulative mass functions for the three n - body codes are shown in fig[fig : mfcmp ] .as with fof , the agreement between codes is quite good . in this case it appears to be tpm which produces more small halos , but again this effect is for low mass halos likely affected by relaxation . using the bdm halos with more than 40 particles , the halo halo correlation function is shown in fig[fig : cfhal ] .the number of pairs of halos in 21 logarithmically spaced radial bins , with the smallest separation being 200 and the largest one third of the box size , were tallied and compared with the expectation for a random distribution .all three codes give the same result . from the results presented so far in this sectionit is clear that tpm yields a quite similar evolved matter distribution as compared to other codes .finally we turn to the internal properties of halos .one probe of the mass distribution in a halo is the circular velocity . for each simulationwe divide halos up into six mass bins . to avoid relaxation effects ,the lowest mass considered is .the width of each bin is a factor of , so the largest mass bin contains two halos with mass above . for each set of halosthe average circular velocity as a function of radius was calculated ; these average velocity curves are shown in fig[fig : vcirc ] .it can be seen that these curves are quite similar for all three codes .two main trends are noticeable , the main one being that p m shows lower circular velocities at small radii .this is likely due to the use of plummer softening and less strict time step criterion in the p m run , both of which would lead to lower densities in the innermost parts of halos .the other main difference is that for the lowest masses it appears that tpm yields higher circular velocities than the other two codes . instead of averaging over a number of halos of similar mass , it is also possible to compare halos on a one - by - one basis .to find the appropriate pairs of halos , the list of halos with more than 150 particles found by bdm was sorted by mass , and for each target gadget or p halo a tpm halo was selected as a match if its mass was within 25% and its position within 1 of the target halo ; if more than one halo passed this test then the nearest in position was selected .the selected tpm halo was removed from further consideration , and the process repeated for next target halo . in this manner a match was found for 588 out of 609 gadget halos and 573 out of 582 p m halos . for various measured halo properties , the percentage difference of the tpm halo from the gadget or p m halo was calculated . since the dispersion of these differences increases as less massive halos are considered , halo pairsare split into two groups , with the higher mass group containing all target halos with 394 or more particles ( i.e. 
mass ) .results are shown in table [ tab : hdiff ] , which for each property gives the mean percentage difference and one standard deviation , as well as the first quartile , median percentage difference , and third quartile .the first property shown is halo mass .the difference here is constrained to be less than 25% , but in most cases the tpm value is within 10% of the other code s .it appears that p m yields slightly lower masses than the other two codes , but the latter two agree quite well .the agreement in masses shows that the simple scheme used to find matching halo pairs works well , as does the fact that the difference in position is less than 165 in 95% of the cases .this is reinforced by the agreement between the halo center of mass velocities shown in table [ tab : hdiff ] ; the velocity vectors are closely aligned , differing by less than in 95% of the pairs .thus only a few percent of the halo pairs are mismatches ; since these pairs still have similar masses and are likely in similar environments , they are kept in the comparisons . in terms of the 3-d root mean square velocity and maximum circular velocity , the more massive halos are very similar in all three codes , with no offset between codes . however , at the lower mass end tpm gives values that tend to be systematically higher by a few percentthis can also be seen for the average circular velocity in the bottom curve of fig[fig : vcirc ] ; this is curve is the average for halos below roughly .the difference between tpm and the other two codes is most pronounced near halo centers .the last property shown in table [ tab : hdiff ] is a comparison of central density , which is computed by measuring the amount of mass within =12.8 of the halo center . as would be expected from fig[fig : vcirc ] , tpm yields higher central densities than gadget , with p m giving lower than either of these .one possible source of the differences seen in tpm halos as compared to a pure tree code could simply be the choice of time step .the p m run took steps and the gadget run ; tpm on the other hand took , or 60% more than gadget .of course this comparison is not entirely straightforward because different particles determine the smallest time step at different times .part of the difference between gadget and tpm may be due to the fact that tpm bins time steps by factors of two , whereas gadget allows time steps to vary more gradually .a second tpm run was carried out to separate other code differences from the time step criterion .this run was identical to the first except that the time step was set not by eq .[ [ eqn : deltat ] ] , but rather by ( similar to the gadget and p m codes ) ; as a result it took fewer steps than any of the other runs .there is no significant difference in the mass functions and halo halo correlation functions between this run and the original tpm run .the particle - particle correlation function is different , however .this can be seen in fig[fig : xicmp ] , which shows the ratio of the original tpm run s to that of the new run , at redshift =0 .the new tpm run has a lower for , roughly 5 - 10% lower than the original tpm and similar to the p m run .this indicates the internal structure of halos has been affected by the longer time steps , which is confirmed by repeating the comparison of individual halos .the difference between halos in the new tpm run and the gadget run are given in table [ tab : hdiff ] .for the higher mass halos the new run , unlike the original one , tends to give lower maximum circular 
velocities and central densities than the tree code .this indicates that the longer time step is in fact leading to inaccuracies .the differences between tpm and the tree code for low mass halos seen in the original run persist in the new run , though they are not quite as pronounced .another point of comparison between codes is efficiency with which they use computing resources .fig[fig : times ] shows the wall - clock time consumed by each of the codes , as a function of expansion parameter , in carrying out the test simulation .all the runs used four 300 mhz ip27 processors of an sgi origin 2000 ; the codes were compiled with the sgi compilers and mpi library .all three codes used roughly the same amount of memory ( tpm requires at least 20 reals per particle plus 3 reals per mesh point , divided evenly among processors ) . at the earliest times both p m and tpm spend most of their time in the pm fft , and so they behave similarly . however , as objects begin to collapse p m begins to consume more time computing particle - particle interactions .the fact that the accelerations of all particles are updated every step also makes p m use more time than do the two multiple time step codes .the tree code takes more time when the particle distribution is nearly homogeneous , demonstrating that a pm code ( which is what the gridded codes basically are in this situation ) is very efficient .however , the tree code timing is roughly independent of the particle distribution , and once inhomogeneity develops it does not require more time per update , whereas the other codes do .tpm does well compared to the tree code for a couple of reasons .by =1 roughly half the particles in the tpm run are still being handled solely by pm .also , imagine breaking particles up into trees each with particles ; the time required to solve all the trees will scale as , lower by a logarithmic factor than using the same tree solver on all the particles .overall , despite taking more timesteps , in this test case tpm required less cpu time than the other codes , by a factor of relative to the tree code and relative to p m .this paper has presented a parallel implementation of the tpm algorithm .several improvements over the implementation of box have been made .particles in tree regions each have an individual time step , half of the pm time step or less , making the tree integration more efficient .the treatment of tidal forces on trees is also improved by saving the tidal potential on a grid and evaluating the force as a function of particle position at each smaller particle time step ; thus a greater ratio of tree to pm time steps is allowed . 
a new , more stringent , time step criterion has been implemented .a new criterion for locating regions for treatment with higher resolution is given ; by finding cells with higher density than their surroundings , small halos in lower density regions are located and followed at full resolution .these changes significantly increase the speed and accuracy of tpm : a computation of the test case discussed in [ sec : codecomp ] using the code described in box required more cpu time ( by a factor of over three ) than the current version , but was only accurate for halos with more than 315 particles .comparisons with other widely used algorithms were made for a typical cosmological structure formation simulation .these show excellent agreement .the particle - particle correlation functions , the halo mass functions , and the halo - halo correlation functions from tpm agree quite well with those from p m and tree codes .the internal properties of halos also agree ; the main difference being that , for lower mass halos , tpm yields higher and maximum circular velocities ( by a few percent over a tree code ) .tpm halos also show higher central densities than those of the other two codes , though the mean difference is smaller than the dispersion .this difference disappeared , at least among the more massive halos , when the time step criterion of eq .[ [ eqn : deltat ] ] was replaced with one similar to that employed by the other two codes .thus we conclude that a choice of a relatively conservative time step criterion contributed to a slightly improved accuracy .tpm yielded results of similar accuracy to the other codes used here while using significantly less computational time ( about a quarter of that needed by a tree code and an eighth of that needed by p m ) .it also scales well on distributed memory parallel machines , such as networked pcs , because this parallelism is built in as part of the design of tpm .however , in committing to an algorithm which accentuates the coarse - grained parallelism inherent in a typical cosmological simulation , a large degree of flexibility is sacrificed .a basic presumption of tpm is that the largest nonlinear structure inside the simulation box is a small fraction of the total mass and volume . to simulate a situation where this is not the case ( e.g. two colliding galaxies ) another code would be preferred .the tpm source code can be obtained at http://astro.princeton.edu/bode/tpm or by contacting the authors .the code is written in fortran 77 and uses mpi for message passing ; thus it is very portable and can be used on clustered pc s or other distributed memory systems .many thanks are due to lars hernquist for generously supplying a copy of his tree code ; also edmund bertschinger for use of his p m code , joe henawi for help with fof , and scott tremaine for useful discussions .this research was supported by the national computational science alliance under nsf cooperative agreement asc97 - 40300 , paci subaward 766 .computer time was provided by ncsa and the pittsburgh supercomputing center .aarseth , s. 1999 , , 111 , 1333 baertschiger , t. , joyce , m. & labini , f.s .2002 , , 581 , 63 ( astro - ph/0203087 ) bagla , j.s . 1999 ,preprint ( astro - ph/9911025 ) barnes , j.e .1998 , galaxies : interactions and induced star formation , r.c .kennicutt jr ., f. schweizer and j.e .barnes , berlin : springer , 275 barnes , j. & hut , p. 1986 , nature , 324 , 446 becciani , u. & antonuccio - delogu , v. 2001 , comp .comm . , 136 , 54 bertschinger , e. 
1985 , , 58 , 39 bertschinger , e. 1998 , , 36 , 599 bertschinger , e. 2001 , , 137 , 1 bryan , g.l . & norman , m.l .1998 , , 495 , 80 bode , p. , ostriker , j.p ., & xu , g. 2000 , , 128 , 561 ( box ) calder , a.c .2002 , , in press couchman , h.m.p . ,thomas , p.a . & pearce , f.r .1995 , , 452 , 797 davis , m. , efstathiou g. , frenk , c. & white , s.d.m .1985 , , 292 , 371 dehnen , w. 2000 , , 536 , l39 dehnen , w. 2002 , j. comp ., 179 , 27 dorband , e.n . , hemsendorf , m. & merritt , d. 2002 , j. comp ., in press ( astro - ph/0112092 ) efstathiou g. , davis , m. , frenk , c. & white , s. 1985 , , 57 , 241 ferrell , r. & bertschinger , e. 1994 , int .c , 5 , 933 frederic , j.j .1997 , ph.d .thesis , mit hamana , t. , yoshida , n. & suto , y. 2002 , , 568 , 455 hernquist , l. 1987 , , 64 , 715 hernquist , l. 1990 , j. comp .phys . , 87 , 137 hernquist , l. & katz , n. 1989 , , 70 , 419 hockney , r.w . & eastwood , j.w .1981 , computer simulation using particles , new york : mcgraw hill kayo , i. , taruya , a. & suto , y. 2001 , , 561 , 22 jing , y.p .& suto , y. 2002 , , 574 , in press klypin , a. 2000 , preprint ( astro - ph/0005502 ) klypin , a. , gottlber , s. , kravtsov , a.v . & khokhlov , a.m. 1999 , , 516 , 530 klypin , a. , & holtzman , j. 1997 , preprint ( astro - ph/9712217 ) knebe , a. 2002 , , submitted ( astro - ph/0201490 ) knebe , a. , green , a. & binney , j. 2001 , , 325 , 845 knebe , a. , kravtsov , a.v ., gottlber , s. & klypin , a.a .2000 , , 317 , 630 kravtsov , a.v ., klypin , a.a . & khokhlov a.m. 1997 , , 111 , 73 lia , c. & carraro , g. 2001 , , 276 , 1049 ma , c .- p . , & bertschinger , e. 1995 , , 455 , 7 miocchi , p. & capuzzo - dolcetta , r. 2002 , , 382 , 758 mo , h.j . & white , s.d.m .2002 , , 336 , 112 ( astro - ph/0202393 ) ostriker , j.p . & steinhardt , p.j .1995 , nature , 377 , 600 power , c. , navarro , j.f . ,jenkins , a. , frenk , c.s . , white , s.d.m ., springel , v. stadel , j. & quinn , t. 2002 , , submitted ( astro - ph/0201544 ) ricker , p.m. , dodelson , s. & lamb , d.q .2000 , , 536 , 122 springel , v. , yoshida , n. & white , s.d.m .2001 , new astronomy , 6 , 79 stadel , j.g .2002 , ph.d .thesis , university of washington , seattle taylor , j.e . &navarro , j.f .2001 , , 563 , 483 teuben , p. 1995, astronomical data analysis software and systems iv , r.a .shaw , h.e .payne & j.j.e .hayes , san francisco : astronomical society of the pacific , 398 van kampen , e. 2000 , , submitted ( astro - ph/0002027 ) viturro , h.r .& carpintero , d.d .2000 , a&as , 142 , 157 xu , g. 1995 , , 98 , 355 white , m. 2002 , , in press yahagi , h. , mori , m. & yoshii , y. 1999 , , 124 , 1 yahagi , h. & yoshii , y. 2001 , , 558 , 463 lccccccccccc & & & + & mean & s.d . & & & & & mean & s.d . 
( each row : mean , s.d . and three further statistics for the two comparison column groups indicated in the header above )
-0.03 , 8.21 , -5.07 , 0.08 , 4.80 ; -0.37 , 5.33 , -2.33 , -0.12 , 2.08
0.06 , 9.38 , -1.08 , 0.52 , 1.74 ; 0.49 , 4.28 , -0.54 , 0.59 , 1.74
1.59 , 7.94 , -2.20 , 1.99 , 5.97 ; -0.05 , 4.38 , -2.12 , 0.18 , 2.23
2.69 , 6.98 , -1.14 , 2.89 , 6.68 ; 0.35 , 3.85 , -1.48 , 0.25 , 2.01
16.2 , 38.1 , -2.22 , 8.33 , 26.0 ; 4.03 , 22.1 , -8.82 , 2.45 , 13.7

1.84 , 8.65 , -3.40 , 1.75 , 6.74 ; 0.38 , 5.14 , -1.51 , 0.39 , 2.37
0.22 , 7.44 , -1.19 , -0.10 , 1.20 ; -0.38 , 4.26 , -0.93 , -0.07 , 0.84
3.84 , 7.13 , -0.80 , 3.59 , 8.94 ; 0.23 , 3.82 , -1.83 , -0.03 , 2.16
6.20 , 8.93 , 1.02 , 5.11 , 10.7 ; 0.32 , 4.06 , -1.94 , -0.24 , 2.00
29.4 , 39.2 , 3.45 , 20.0 , 47.2 ; 13.7 , 35.0 , -2.46 , 6.82 , 19.0

-0.25 , 8.11 , -4.70 , -0.08 , 4.26 ; -0.63 , 5.82 , -2.73 , -0.33 , 2.18
-0.01 , 8.19 , -0.97 , 0.34 , 1.65 ; 0.76 , 7.10 , -0.68 , 0.45 , 1.76
1.55 , 7.83 , -3.07 , 1.99 , 5.36 ; -1.00 , 3.86 , -2.65 , -0.62 , 1.28
1.74 , 6.80 , -2.61 , 1.50 , 5.69 ; -0.77 , 3.85 , -2.44 , -0.49 , 1.03
12.7 , 30.5 , -6.24 , 6.89 , 26.7 ; -0.75 , 24.3 , -12.9 , -4.16 , 6.49
| an improved implementation of an n - body code for simulating collisionless cosmological dynamics is presented . tpm ( tree particle mesh ) combines the pm method on large scales with a tree code to handle particle - particle interactions at small separations . after the global pm forces are calculated , spatially distinct regions above a given density contrast are located ; the tree code calculates the gravitational interactions inside these denser objects at higher spatial and temporal resolution . the new implementation includes individual particle time steps within trees , an improved treatment of tidal forces on trees , new criteria for higher force resolution and choice of time step , and parallel treatment of large trees . tpm is compared to p m and a tree code ( gadget ) and is found to give equivalent results in significantly less time . the implementation is highly portable ( requiring a fortran compiler and mpi ) and efficient on parallel machines . the source code can be found at http://astro.princeton.edu/bode/tpm . |
given an undirected graph with vertex set and edge set , a k - coloring of is a function that assigns to each vertex a color , where . a k - coloring is considered legal if each pair of vertices connected by an edge receive different colors . the minimum sum coloring problem ( mscp ) is to find a legal k - coloring such that the total sum of colors over all the vertices is minimized .the minimum value of this sum is called the chromatic sum of and denoted by .the number of colors related to the chromatic sum is called the strength of the graph and denoted by .the mscp is np - hard for general graphs and provides applications mainly including vlsi design , scheduling and resource allocation .given the theoretical and practical significance of the mscp , effective approximation algorithms and polynomial algorithms have been presented for some special cases of graphs , such as trees , interval graphs and bipartite graphs . for the purpose of practical solving of the general mscp ,a variety of heuristics have been proposed in recent years , comprising a parallel genetic algorithm , a greedy algorithm , a tabu search algorithm , a hybrid local search algorithm , an independent set extraction based algorithm and a local search algorithm . on the other hand ,binary quadratic programming ( bqp ) has emerged during the past decade as a unified model for a wide range of combinatorial optimization problems , such as set packing , set partitioning , generalized independent set , maximum edge weight clique and maxcut . a review concerning the additional applications and the reformulation procedures can be found in .this bqp approach has the advantage of directly applying an algorithm designed for bqp to solve other classes of problems rather than resorting to a specialized solution method .moreover , this approach proves to be competitive or even better than the special algorithms proposed for several problems . in this paper , we investigate for the first time the application of this bqp approach to solve the mscp problem .we propose a binary quadratic formulation for the mscp which is solved by our path relinking algorithm previously designed for the general bqp . 
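for concreteness, a standard integer formulation of the problem just described is sketched below; the binary variables x_{v,k} (equal to 1 exactly when vertex v receives color k) anticipate the linear model recalled in the next section, and K denotes an upper bound on the number of colors:

% assignment formulation of the minimum sum coloring problem
\begin{align*}
  \min\ & \sum_{v \in V} \sum_{k=1}^{K} k\, x_{v,k} \\
  \text{s.t.}\ & \sum_{k=1}^{K} x_{v,k} = 1 , & & \forall\, v \in V , \\
  & x_{u,k} + x_{v,k} \le 1 , & & \forall\, \{u,v\} \in E ,\ k = 1,\dots,K , \\
  & x_{v,k} \in \{0,1\} , & & \forall\, v \in V ,\ k = 1,\dots,K .
\end{align*}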
to assess the performance of the proposed approach , we present computational results on a set of 23 benchmark instances from the literature and contrast these results with those of several reference algorithms specifically dedicated to the mscp .the rest of this paper is organized as follows .section [ sec_trans ] illustrates how to transform the mscp into the bqp formulation .section [ sec_pra ] presents an overview of our path relinking algorithm for the general bqp .section [ sec_results ] is dedicated to computational results and comparisons with other reference algorithms in the literature .the paper is concluded in section [ sec_conclusion ] .given an undirected graph with vertex set ( ) and edge set .let be 1 if vertex is assigned color , and 0 otherwise .the linear programming model for the mscp can be formulated as follows : the linear model of the mscp can be recast into the form of the bqp according to the following steps : for the constraints , we represent these linear equations by a matrix and incorporate the following penalty transformation : \ \ \ \ \ \ \ \ \ \ \ \ \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = p[x^{t}(a^{t}a)x - x^{t}(a^{t}b)-(b^{t}a)x]+pb^{t}b \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = xd_{1}x+c \ \ \ \ \ \ \ \end{split}\ ] ] for the constraints , we utilize the quadratic penalty function to replace each inequality in and add them up as follows : where if and 0 otherwise . to construct the nonlinear bqp formulation , we first inverse the minimum objective of the mscp to be in accordance with the general bqp model under a maximum objective , which becomes the first component of . then we add the penalty function into such that if all the linear equations in are satisfied and otherwise is a penalty term with large negative values .in the same way , we add the penalty function into .hence , the resulting bqp formulation for the mscp can be expressed as follows : once the optimal objective value for this bqp formulation is obtained , the minimum sum coloring value can be readily obtained by taking its inverse value .further , a penalty scalar is considered to be suitable as long as its absolute value is larger than half of the maximum color ( ) . consider that penalty functions should be negative under the case of a maximal objective , we select for the benchmark instances experimented in this paper . the optimized solution obtained by solving the nonlinear bqp formulation indicates that such selection ensures both and equal to 0 .in other words , each variable with the assignment of 1 in the optimized solution forms a feasible k - coloring in which vertex gets the color . 
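the penalty construction described above can be sketched in code as follows; this is only an illustrative minimisation-form version (the paper works with the equivalent maximisation form and a negative penalty scalar), and the function name, the flat indexing of x_{v,k} and the small example graph at the end are assumptions made here rather than the paper's choices.

import itertools
import numpy as np

def mscp_to_bqp(num_vertices, edges, num_colors, penalty):
    """Build (Q, c) so that, for a binary vector x indexed by (vertex, color),
    x @ Q @ x + c equals the sum-of-colors objective plus `penalty` times the
    violation of the coloring constraints; minimising it over binary x then
    solves the MSCP once `penalty` is large enough (illustrative sketch)."""
    idx = lambda v, k: v * num_colors + k          # flat index of x_{v,k}
    n = num_vertices * num_colors
    Q = np.zeros((n, n))
    c = 0.0
    # Objective: vertex v assigned color k (k = 1..K) contributes k.
    for v in range(num_vertices):
        for k in range(num_colors):
            Q[idx(v, k), idx(v, k)] += k + 1
    # Penalty for "each vertex gets exactly one color":
    # penalty * (sum_k x_{v,k} - 1)^2, expanded using x^2 = x for binary x.
    for v in range(num_vertices):
        c += penalty
        for k in range(num_colors):
            Q[idx(v, k), idx(v, k)] -= penalty
        for k, l in itertools.combinations(range(num_colors), 2):
            Q[idx(v, k), idx(v, l)] += penalty
            Q[idx(v, l), idx(v, k)] += penalty
    # Penalty for "adjacent vertices never share a color":
    # penalty * x_{u,k} * x_{v,k} for every edge {u, v} and every color k.
    for (u, v) in edges:
        for k in range(num_colors):
            Q[idx(u, k), idx(v, k)] += penalty / 2.0
            Q[idx(v, k), idx(u, k)] += penalty / 2.0
    return Q, c

# hypothetical 4-vertex path graph with 3 colors; the penalty exceeds half the
# maximum color, in line with the rule of thumb discussed above.
Q, c = mscp_to_bqp(4, [(0, 1), (1, 2), (2, 3)], 3, penalty=10.0)

any solver for the general bqp, such as the path relinking algorithm described next, can then be applied to the resulting matrix.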
to illustrate the transformation from the mscp to the bqp formulation , we consider the following graph with and expect to find a legal -coloring with .+ its linear formulation according to equation ( [ lp ] ) is : choosing the scalar penalty , we obtain the following bqp model : where is written as : solving this bqp model yields ( all other variables equal zero ) and the optimal objective function value .reversing this objective function value leads to the optimum ( the minimum sum coloring ) of 6 for this graph .path relinking is a general search strategy closely associated with tabu search and its underlying ideas share a significant intersection with the tabu search perspective , with applications in a variety of contexts where it has proved to be very effective in solving difficult problems .our previous path relinking algorithm ( pr ) is directly utilized to solve the mscp in its nonlinear bqp form as expressed in eq .the proposed path relinking algorithm is mainly composed of the following components : _ refset initialization _ , _ solution improvement _ , _ relinking _ , _ solution selection _ , _ refset updating _ and _ refset rebuilding_. below we overview the general scheme of our path relinking algorithm .more details can be found in .firstly , the _ refset initialization_ method is used to create an initial set of elite solutions _ refset _ where each solution is obtained by applying a tabu search procedure to a randomly generated solution .secondly , for each pair of solutions ( , ) in _ refset _ we undertake the following operations : ( 1 ) apply a _ relinking _ method to generate two paths ( from to and from to ) by exploring trajectories ( strictly confined to the neighborhood space ) that connect high - quality solutions .the first and last solutions of the path are respectively called the initiating and guiding solutions . at each step of building the path from the initiating solution to the guiding solution , we randomly select a variable from the set of variables for which and ) have different values ; ( 2 ) apply a _ solution selection _ method to select one solution from those generated on the path by reference both to its quality and to the hamming distance of this solution to the initiating solution and the guiding solution . this selected solution is then submitted to the tabu search based _ improvement method _ ; ( 3 ) apply a _ refset updating _ method to decide if the newly improved solution is inserted in _refset _ to replace the worst solution .the above procedure is called a round of our path relinking procedure . finally , once a round of path relinking procedure is completed , a _ refset rebuilding_ method is launched to rebuild _ refset _ that is used by the next round of the path relinking procedure .the path relinking algorithm terminates as soon as its stop condition ( e.g. a fixed computing time ) is satisfied .to assess this bqp approach for the mscp , we carry out experiments on a set of 23 graphs , which are the most used benchmark instances in the literature .our experiments are conducted on a pc with pentium 2.83ghz cpu and 8 gb ram . 
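a schematic version of the relinking step described above, with illustrative names; the reference-set management, the tabu-search improvement and the distance/quality selection rule of the full algorithm are deliberately left out.

import random

def relink(initiating, guiding, evaluate):
    """Walk from the initiating solution to the guiding solution by flipping,
    one at a time and in random order, the variables on which the two binary
    solutions differ; `evaluate` scores a solution (higher is better).
    Returns the intermediate solutions on the path with their scores."""
    current = list(initiating)
    differing = [i for i in range(len(current)) if initiating[i] != guiding[i]]
    random.shuffle(differing)
    path = []
    for i in differing:
        current[i] = guiding[i]        # move one step closer to the guide
        path.append((list(current), evaluate(current)))
    return path

one of the intermediate solutions returned here would then be selected, by balancing its objective value against its hamming distance to the two endpoints, and submitted to the tabu search based improvement method, exactly as described above.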
the time limit for a single run of our path relinking algorithm is set as follows : 1 hour for the first 16 instances in table [ table_results ] ; 10 hours for dsjc125.1 , dsjc125.5 , dsjc125.9 , dsjc250.1 and dsjc250.5 ; 20 hours for dsjc250.9 and dsjc500.1 . given the stochastic nature of our pr algorithm , each problem instance is independently solved 20 times . the tabu tenure and the improvement cutoff are two parameters in the tabu search based improvement method , a key component of the pr algorithm . according to preliminary experiments , we set ( where denotes the number of variables in the resulting bqp model and rand(50 ) receives a random integer ranging from 1 to 50 ) . in addition , we set for the improvement of the initial solutions in and for the improvement of the solutions on the path , respectively . table [ table_results ] presents the computational statistics of the bqp model for the mscp . columns 1 to 3 give the instance names _ instances _ along with the vertex number and edge number of the graphs . columns 4 and 5 show the number of colors to be used and the number of variables in the bqp formulation . column 6 summarizes the best known results _ bkr _ from the previous literature . the columns under the heading of bqp - pr report our results of the bqp model solved by the pr algorithm : the best objective values , the average objective values , the standard deviation , the average time ( in seconds ) to reach the best objective value over 20 runs , and the average time ( in seconds ) consumed to reach the best objective value obtained in each run . the last row shows the average performance over a total of 23 tested instances in terms of the deviations of the best and average solution values from the _ bkr _ . notice that the results marked in bold in the column indicate that bqp - pr reaches the _ bkr _ on these instances . we also applied cplex v12.2 to the linear model ( [ lp ] ) ( section [ linear model for the mscp ] ) to solve these mscp instances . cplex was successful in finding an optimal solution for 15 instances ( marked with an asterisk ) , but terminated abnormally for the remaining 8 instances due to excess requirements of memory . table [ table_results ] : computational statistics of the bqp - pr approach for the mscp . as we can observe in table [ table_cmp ] , the proposed bqp - pr approach outperforms hls , mrlf , pga and ts in terms of the best solution values . specifically , bqp - pr finds better solutions than hls , mrlf , pga and ts for 3 , 14 , 8 and 7 instances , respectively . finally , bqp - pr performs less well compared with the most effective mscp heuristics exscol and mds(5 ) . from this experiment , we conclude that our bqp approach combined with the pr algorithm for tackling the mscp constitutes an interesting alternative to specific algorithms tailored to this problem .
in this section , we discuss some limitations of the proposed bqp - pr approach for solving the mscp . first , the proposed method may require considerable computing time to reach its best solutions for large graph instances ( see column in table [ table_results ] ) . this can be partially explained by the fact that the number of the bqp variables ( equaling where is the number of vertices and is the number of colors ) sharply increases with the growth of and . additionally , at present our approach is not able to solve graph instances with bqp variables surpassing the threshold value of 20,000 because of the memory limitation . these obstacles could be overcome by designing more effective data structures used by the bqp algorithms . we have investigated the possibility of solving the np - hard minimum sum coloring problem ( mscp ) via binary quadratic programming ( bqp ) . we have shown how the mscp can be recast into the bqp model and explained the key ideas of the path - relinking algorithm designed for the general bqp . experiments on a set of benchmark instances demonstrate that this general bqp approach is able to reach competitive solutions compared with several special purpose mscp algorithms even though considerable computing time may be required . the work is partially supported by the " pays de la loire " region ( france ) within the radapop ( 2009 - 2013 ) and ligero ( 2010 - 2013 ) projects . | in recent years , binary quadratic programming ( bqp ) has been successively applied to solve several combinatorial optimization problems . we consider in this paper a study of using the bqp model to solve the minimum sum coloring problem ( mscp ) . for this purpose , we recast the mscp with a quadratic model which is then solved via a recently proposed path relinking ( pr ) algorithm designed for the general bqp . based on a set of mscp benchmark instances , we investigate the performance of this solution approach compared with existing methods .
the skin effect problem is one of the most important problems in plasma kinetic theory ( see , for example , in refs. ) .the skin effect in plasma is a response of electron gas to external transverse electromagnetic field .the problem also has great practical importance .the solution of the skin effect problem with specular reflection boundary conditions is well known , .the analytic solution of the problem with diffuse reflection boundary conditions has been obtained in the middle of the previous century ( see , for example , ref. ) . the skin effect problem with general specular diffuse reflection boundary conditions is not solved till now .it s well known that specularity coefficient is a very important factor in the kinetic skin effect theory .the limiting cases ( diffuse surface scattering of electrons ) and ( specular surface scattering of electrons ) are only very special cases .actually the specularity coefficient equals neither to zero , nor unit , and takes some intermediate values on .so , for example , in the work it is shown , that the specularity coefficient is equal to in wire . in this connectionthe skin effect problem with specular diffuse boundary conditions has exclusively fundamental significance .its value is great for theories , and for practical applications .so it s obvious that the solution of the skin effect problem with general specular diffuse reflection boundary conditions is a very important task .the method of solution of this problem for degenerate plasma in metal has been developed in ref .this method is based on the use of von neumann series .authors in ref . have demonstrated high efficiency of the method developed for computation of the skin effect characteristics .the goal of this work is generalization of the method developed in ref . in the case of gaseous plasma . by method of decomposition of the solution by eigenfunctions of the corresponding characteristic equation exact solutions of the skin effect problem in metal for diffuse and specular boundary conditionsare received in refs . , . in the last years interest to skin effect problems continues to grow ( see , for example , refs . ) . in particular , in ref . in limiting anomalous skin effect conditions , the oblique electromagnetic wave reflection from the sharp plasma boundary in an assumption of mixed ( specular and diffuse ) electron reflection from the boundary is considered .let s gaseous ( nondegenerate ) plasma occupy a half space .the distribution function of electrons is normalized by electron numerical density ( concentration of electrons ) : where is the electron momentum , is the electron mass , is the electron charge , .we consider electromagnetic wave which propagates in direction orthogonal to the plasma surface .then the external field has only one .the internal field inside plasma has only too , where is the field frequency . to describe the electron distribution function we will use the vlasov boltzmann kinetic equation . the collision integral will be represented in the form of where is the time between two electrons collisions , , is the effective electron collision frequency , is the equilibrium maxwellian distribution function , for weak fields this equation may be linearized : here where , , is the dimensionless electron velocity , is the boltzmann costant , is the plasma temperature . 
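written out in generic notation (the symbols below are illustrative and not necessarily those of the paper), the relaxation-type collision model and the weak-field linearisation referred to above take the standard form:

% relaxation (BGK) collision model and weak-field linearisation, generic notation
\[
  \left(\frac{\partial f}{\partial t}\right)_{\!\mathrm{coll}}
  = \nu \,\bigl(f_{\mathrm{eq}} - f\bigr), \qquad \nu = \frac{1}{\tau},
  \qquad
  f = f_{\mathrm{eq}}\,\bigl(1 + \varphi\bigr), \quad |\varphi| \ll 1 ,
\]
% f_eq is the equilibrium maxwellian; terms quadratic in the small perturbation
% (or in the weak field) are dropped to obtain the linearised kinetic equation.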
for function we have the following kinetic equation : where in the equation ( 1.2 ) is the dimensionless coordinate , is the dimensionless time , is mean electron free path and is the dimensionless electric field : we will neglect displacement current . then the equation for electric field may be written in the form : where is the electric current , \,d^3v .\eqno{(1.4)}\ ] ] let s extend the electric field and the electric distribution function on the `` negative '' half space in symmetric manner . for the functions and we will then write : we may rewrite the equation ( 1.4 ) with use of dimensional parameter : here is the electron thermal velocity , is the penetration length of external electric field for normal skin effect , is the plasma conductivity . by extension procedure ( 1.5 ) on the half space we may include the surface conditions in the equation for skin effect problem .specular diffuse boundary conditions on the boundaries of positive and negative half spaces may be written in the form : where is the specularity coefficient , . in accordance with ( 1.5 ) we obtain : the required function and the electric field must decay away from the surface : we assume that the gradient of the electric field is finite and known at the plasma boundary : here , the gradient of the electric field on the plasma boundary is given .the variable will be denoted again by .let s include boundary conditions ( 1.7 ) and ( 1.8 ) in the kinetic equation ( 1.2 ) , and boundary condition ( 1.10 ) include in the electric field equation ( 1.6 ) . as a resultwe will obtain system of equation for skin effect in half space of the plasma : the impedance is determined by formula : with the use of dimensionless field this relation may be rewritten in the form : from equation ( 2.1 ) and boundary conditions ( 1.9 ) we obtain the following expression for : in the case we obtain : then we may rewrite the equation ( 2.1 ) in the form : the solution of the equations ( 2.2 ) and ( 2.3 ) we will seek in the form of fourier integrals : then for the function the following expression may be derived : it smay be proved , that the expression for coincides with the expression for . therefore we have we substitute the expressions ( 2.4 ) , ( 2.5 ) and ( 2.6 ) into the equations ( 2.2 ) and ( 2.3 ) .this procedure leads to characteristic system of equations : the function is an even function .then , and equation ( 2.7 ) may be rewritten as let s substitute the expression ( 2.9 ) into the equation ( 2.8 ) .then we obtain : here is the dispersion function , and is the integral characteristic system consists of two equations ( 2.9 ) and ( 2.10 ) . the integral we will express through the integral exponential function .we will spread out integrand partial fractions : where thus , we receive , that or where s expand the solution of equations ( 2.9 ) , ( 2.10 ) by the following series : functions and may be obtained from the characteristic system . for zero approximationwe have : for first approximation we obtain : for -approximation the following expression may be derived : we may rewrite the expressions ( 3.1 ) in the form in general case when we have : therefore the series ( 3.1 ) constructed may be expressed in the explicit form : .\ ] ]in accordance with ( 2.4 ) and ( 2.5 ) we will construct expressions for electric field and distribution function . 
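schematically, the expansion just constructed is a von neumann (successive-approximation) series: for an equation of fixed-point form, the partial sums play the role of the zero, first and higher approximations used below (the operator notation here is generic and not the paper's).

\[
  E = E_{0} + K[E]
  \quad\Longrightarrow\quad
  E = \sum_{s \ge 0} K^{s}[E_{0}] ,
\]
% valid whenever the iterated operator K contracts, so that the series converges.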
using the expressions ( 3.1 ) and ( 3.2 )we obtain : we rewrite the expression ( 4.1 ) with the use of ( 3.8 ) in the following form : .\ ] ] function may be written in the form : \frac{e^{ikx}dk}{z_0+ik\mu } , \eqno{(4.3)}\ ] ] expression ( 4.3 ) may be written also in the next form : .\ ] ] if we know function we may write down the electron distribution function according to the equation ( 1.1 ) .let us consider now the calculation of impedance : we decompose in the following series : here now we write down expressions for zero , first and second approximations for impedance ( see ( 4.4 ) and ( 4.5 ) ) expression for general term of series ( 4.4 ) has the form : the previous sections we have considered the method , leading to exact solution of the skin effect problem with arbitrary specularity coefficient . in the case the method leads to the classical solution ( 4.6 ) of the problem with specular surface conditions ( see , for example , , ) . in classical solution is represented in the form : where the comparison of the expressions ( 5.1 ) and ( 4.6 ) gives . indeed , after the change of variables in the integral we have: now let us consider second approximation : when this solution is exact .maximum deviation from exact solution corresponds to the case when .the exact solution of the problem in the case is also well known , : \frac{d\tau}{\tau^2}\bigg]^{-1}. \eqno{(5.2)}\ ] ] it s convenient to rewrite the expression ( 5.2 ) with the use of our notations : \,dk\bigg]^{-1}.\ ] ] now consider the ratio of real ( and imaginary ) parts of the solutions constructed in zero , first and second approximations to the solution in zero approximation ( ) for the case .the last solution coincide with the solution of the problem with specular scattering boundary conditions .we will build two plots ( curves 1 and 2 ) : and or and also analogous plots for the ratios of the imaginary parts . here is the solution for the case of specular suface conditions .values and correspond to corrections for the first and the second approximations . the curves 3 on the plots correspond the ratios of impedance for diffuse scattering surface condition to impedance for specular scattering surface condition ( ) .the derived method has maximum error in the case of extremely anomalous skin effect , when parameter . in this caseratios defined above are equal to 1.125 .so it is obvious from the plots , that in zero approximation the method error is equal to .for the first approximation in this case we have .so the first approximation error is equal to . for the second approximation in this casewe have .and the second approximation error is equal to .the analysis of plots shows , that the considered impedance ratios in first approximation coincide with the exact solution when . for the second approximation the coincidence is observed when .the effective method of the solution of boundary problems of the kinetic theory is developed .this method is based on symmetric continuation of the electric field and distribution function of electrons . on parameter for the case .the curve corresponds to diffuse scattering boundary conditions , the curves correspond to first and second approximations.,width=321,height=170 ] on parameter for the case .the curve corresponds to diffuse scattering boundary conditions , the curves correspond to first and second approximations.,width=321,height=170 ] | the problem of skin effect with arbitrary specularity in maxwellian plasma with specular diffuse boundary conditions is solved . 
a new analytical method is developed that makes it possible to obtain a solution up to an arbitrary degree of accuracy . the method is based on the idea of symmetric continuation of not only the electric field , but also the electron distribution function . the solution is obtained in the form of a von neumann series . * keywords : skin effect , specular diffuse boundary conditions , analytical method , von neumann series . *
the study of the cosmic microwave background ( cmb ) anisotropies is providing strong constraints on theories of structure formation .these theories are statistical in essence , so the extraction of the information must be done in a statistical way . in particular ,the standard method for analysing a cmb experiment is the maximum likelihood estimator ( ml ) .the procedure is straightforward : maximise the probability of the parameters of the model given the data , p(parameters ) , over the allowed parameter space .usually , we take the prior probability for the parameters to be constant , so this is equivalent to maximising the likelihood , p(data ) , via the bayes theorem .the ml method has been widely applied in cmb analyses , for power spectrum or parameters estimation , . when computing the likelihood in these problems , we have to deal with the inversion of the covariance matrix of the data , which usually involves operations , being the number of pixels of the map .the increasing size of the datasets makes this method computationally costfull for new experiments , so other methods have been investigated in the last few years to confront the problem. there have been several proposals on this matter .pioneering work on the problem of power spectrum estimation , based on an evaluation of the s coefficients of the multipole expansion of the observed map in the spherical harmonics basis , have been applied to cobe data .quadratic estimators have been proposed by several authors as statistics that give the same parameters that maximise the likelihood , but requiring less computational work .nevertheless , alternative statistical methods are required in the field to extract the cosmological information from future cmb experiments ( as planck ) where the number of data points will be very large ( see , e.g. for an estimation of the scaling of the computing time with the dataset size ) . here , we propose a new statistical method to analyse a cmb map . in order to illustrate it, we will use the two - point correlation function ( cf ) .we first replace the likelihood of the full map by the likelihood of the fluctuations of an estimator of the cf .then , we derive the cosmological parameters from it in an efficient manner .if we assume gaussianity for the primordial cmb fluctuations , the cf completely characterises the statistical properties of the field . 
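for reference, the relation invoked in the discussion of the maximum likelihood method above is bayes' theorem with a flat prior on the parameters:

\[
  p(\boldsymbol{\theta} \mid \mathbf{d})
  = \frac{p(\mathbf{d} \mid \boldsymbol{\theta})\, p(\boldsymbol{\theta})}{p(\mathbf{d})}
  \;\propto\; p(\mathbf{d} \mid \boldsymbol{\theta})
  \qquad \text{when } p(\boldsymbol{\theta}) = \mathrm{const} ,
\]
% so maximising the posterior over the parameters is equivalent to maximising the likelihood.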
in this line , it has been suggested that it can be used to obtain the power spectrum or for parameter estimation , because it encodes all the relevant information for that purpose .this approach of considering the cf in cmb analyses has been recently used by other authors to estimate the power spectrum .they obtain the cf using different estimators , and integrate it , projecting over the legendre polynomials , to obtain the s .the advantage of this estimator is that it only needs at the most operations to be computed , and not , as is required for the likelihood .we construct a modified version of the standard test , using the cf evaluated at a certain set of points .the estimate of the parameters of the model is given by the minimum of this statistic , as in the standard analysis .we give a very good approximation to the distribution function of this modified , so the confidence limits can be obtained without using simulations , by integration below that curve , as for the ml .we show that , in several problems , choosing a large enough set of points to evaluate the cf , our method has the same power as the maximum likelihood , while being two different methods .in this section we will introduce the test , using for this purpose the two - point cf .nevertheless , all the procedure described below can be applied to any other estimator . for a certain map of the cmb anisotropies , with pixels , and errors , we can estimate the cf , , in a set of angular distances , . in this workwe have used the following estimator , = \frac { \sum _ { i , j \in \ { k \ } } \ ; x_i x_j } { \sum _ { i , j \in \ { k \ } } 1 } \label{est_cf}\ ] ] where stands for the set of all pixel pair such that their angular distance is , but our proposal and techniques can be applied to other estimators for the cf ( see , for example , , or ) .hereafter , we will write the estimate of a certain parameter as ] . represents the discrete cf of the noise .if we have an experiment with uncorrelated noise , this function takes the form this method is a modification of the standard form of a -test , for the case when the error of each of the estimates entering ( [ chi2 ] ) are independent and gaussianly distributed ( hereafter , we mean by standard -test the case when , , and all the terms of the sum in ( [ chi2 ] ) are independent ) . in the present case , the ] are independently distributed .the second sum is due to the correlations between any pair of these variables .it should be noted that equation ( [ rms_ssky ] ) has been obtained assuming that the quantities ] , where we define ) ] , and and are numbers obtained from the s . from ( [ est_m_k ] ) we can infer the general expression for the in the case of several parameters , obtaining ) = \frac{1}{r } \bigg [ \sum_{k , j } j_{k , i } j_{j , i } p_k p_j \frac { c'_{kj } } { \sigma^2(c(\theta_k ) ) \sigma^2(c(\theta_j ) ) } \bigg]^{1/2 } \label{rms_several}\ ] ] where . using the previous equation , we can obtain the quantities for any problem , just minimising the product ) ] , as we have discussed in section 3 .those s can be derived , using the linear approximation to the cf , from equation ( [ rms_several ] ) . in this problem ,the and quantities are given by where stands for , and for .the equation for can be obtained from , just interchanging . in order to check the previous expressions for the , we use 100 of the above mentioned realizations of cobe like maps ( ; ) , and we analyse them using several sets of s . 
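a short sketch of the estimator in ( [ est_cf ] ) and of a generic weighted quadratic form built from its fluctuations; the way pixel pairs are binned into angular-distance sets, and the names pair_sets, model, sigma2 and weights, are placeholders rather than the paper's actual choices.

import numpy as np

def correlation_function(x, pair_sets):
    # Estimate C(theta_k) as in (est_cf): for each angular bin k, average the
    # product x_i * x_j over the set {k} of pixel pairs at that separation.
    # x holds the map values; pair_sets[k] is an integer array of shape (m_k, 2).
    estimates = []
    for pairs in pair_sets:
        i, j = pairs[:, 0], pairs[:, 1]
        estimates.append(np.mean(x[i] * x[j]))
    return np.asarray(estimates)

def modified_chi2(estimates, model, sigma2, weights):
    # Weighted quadratic form in the fluctuations of the estimator; the weights
    # would be fixed by the minimum-rms construction described in the text.
    return np.sum(weights * (estimates - model) ** 2 / sigma2)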
in this way , we can obtain the real value of the , and compare it with the number coming from the formula .the results are summarised in table [ tabla3 ] . in all cases ,the theoretical numbers obtained from equation ( [ rms_several ] ) are in agreement with the numerical results , so we conclude that the linear approximation to the true cf works well in computing the .the largest differences occur when we obtain a large , due to the fact that , in that case , fails the linear approximation to the cf . the average values recovered for and from monte - carlo simulations show that the estimator is unbiased , as we would expect .it should be noticed that , in table [ tabla3 ] , the effective number of degrees of freedom is quite small . in all cases ,we obtain , but we are using .the reason is that the cf contains long - range terms , coming from low multipoles ( ) .this fact reduces the degrees of freedom drastically , so the choice of the will be critical in this problem .the numbers obtained when we consider the whole cf and are compatible with those in bhs94 , but slightly better because we consider the noise of the 4-year cobe map . in that paper, they obtained , using the same galactic cut ( ) , and the noise from the 2-year map , the values ) = 0.96 ] ( in our units ) .nevertheless , we see that considering only the first points , and setting to zero the others , strongly reduces the of the estimate , even below the values obtained when they do not consider noise and incomplete sky coverage ( they have ) = 0.36 ] ) .s for the 4-year _ cobe _ map ( see details in the text ) , using the galactic cut .these values were obtained by numerical minimisation of the product ) \times rms ( e[q_{rms - ps } ] ) ] , and = 15.2 \mu k ] , and = ( 16.0 \pm 4.8 ) \mu k ] .nevertheless , the results are completely general . for our problem, we can write , , and . in figure [ app1 ] we present the function for the value of which gives us the maximum percentage difference between the exact and the approximated functions .this value corresponds to .the largest percentage difference in the distribution function for this case is reached at , and has a value of . in terms of the weights ,we obtain for this point a difference of a ( , and ) .nevertheless , the power of our approximation is that the largest differences always occur at low values of . the asymptotic shape of the exact distribution function is well reproduced , as we need for a analysis .to conclude , we show in figure [ app2 ] the third and fourth order moments , both for the real distribution and the approximation , in the whole range of values for . by definition ,the first and second moments are equal for the true and the approximate distribution .we see again that the approximation follows quite closely the true function , as we have found from the simulations in section 5.1 . with correlations , with terms .we have used , and .we show the case , because for that value we have the largest percentage difference between the true function and our approximation , which is given by and .we can see that the largest differences occur at low values of .the asymptotic values are well reproduced . ] | we present a new general procedure for determining a given set of quantities . to this end , we define certain statistic , that we call modified ( ) , because of its similarity with the standard . the terms of this are made up of the fluctuations of an unbiased estimator of some statistical quantities , and certain weights . 
only the diagonal terms of the covariance matrix explicitly appear in our statistic , while the full covariance matrix ( and not its inverse ) is implicitly included in the calculation of the weights . choosing these weights we may obtain , through minimising the , the estimator that provides the minimum rms , either for those quantities or for the parameters on which these quantities depend . in this paper , we describe our method in the context of cosmic microwave background experiments , in order to obtain either the statistical properties of the maps , or the cosmological parameters . the test here is constructed out of some estimator of the two - point correlation function at different angles . for the problem of one parameter estimation , we show that our method has the same power as the maximum likelihood method . we have also applied this method to monte carlo simulations of the cobe - dmr data , as well as to the actual 4-year data , obtaining consistent results with previous analyses . we also provide a very good analytical approximation to the distribution function of our statistic , which could also be useful in other contexts . keywords : methods : statistical ; cosmology : cosmic microwave background ; cosmology : cosmological parameters
consider a population of sequences having a common time ( or location ) index. signals , when they occur , are present in a small fraction of the sequences and aligned in time . in the detection of copy number variants ( cnv ) in multiple dna sequences , efron and zhang used local f.d.r . , zhang et al . and siegmund , yakir and zhang applied scans of weighted -statistics , jeng , cai and li applied higher - criticism test statistics .tartakovsky and veeravalli , mei and xie and siegmund considered the analogous sequential detection of sparse aligned changes of distribution in parallel streams of data , with applications in communications , disease surveillance , engineering and hospital management .these advances have brought in an added multi - sample dimension to traditional scan statistics works ( see , e.g. , the papers in ) that consider a single stream of data . in this paper , we tackle the problem of detectability of aligned sparse signals , extending sparse mixture detection ( cf . ) to aligned signals , and extending multiscale detection ( cf . ) to multiple sequences . hence not surprisingly , we incorporate ideas developed by the sparse mixture and multiscale detection communities to find the critical boundary separating detectable from nondetectable hypotheses . in arias - castro ,donoho and huo , there are also links between sparse mixtures and multiscale detection methods in the detection of a sparse component on an unknown low - dimensional curve within a higher - dimensional space .our work here is less geometrical in nature as the aligned - signal assumption allows us to reduce the problem to one dimension by summarizing across sample first .we supply optimal adaptive max - type tests : penalized scans of the higher criticism and berk jones test statistics .we also supply an optimal bayesian test : an average likelihood ratio ( alr ) that tests against alternatives lying on the critical boundary .the rationale behind the alr is to focus testing at the most sensitive parameter values , where small perturbations can result in sharp differences of detection powers .we state the main results in section [ sec2 ] .we describe the detectable region of aligned sparse signals in the multi - sample setting , and show that the penalized scans achieve asymptotic detection power 1 there .we learn from the detection boundary the surprising result that the requirement to locate the signal in the time domain does not affect the overall difficulty of the detection problem , unless the sequence length to signal length ratio grows exponentially with the number of sequences . in section [ sec3 ], we show the optimality of the alr and consider special cases of our model that have been well studied in the literature using max - type tests : the detection of a signal with unknown location and scale in a single sequence , and the detection of a sparse mixture in many sequences of length 1 .we show that the general form of our alr provides optimal detection in these important special cases .we also illustrate the detectability and detection of multi - sample signals on a cnv dataset . 
in section [ extensions ] , the detection problem is extended to heteroscedastic signals .the extension illustrates the adaptivity of the penalized scans .even though the detection boundary has to be extended to take into account the heteroscedasticity , the penalized scans as described in section [ sec2 ] are still optimal .on the other hand , the alr tests have to be re - designed to ensure optimality under heteroscedasticity .the model set - up here is similar to that in jeng , cai and li .there optimality is possible without imposing penalties on the scan of the higher - criticism test because the signal length was assumed to be very short .let be a population of sequences .we consider the prototypical set - up under the null hypothesis of no signals , for all and . under the alternative hypothesis of aligned signals, there exists an unknown of disjoint intervals ] with common length is approximately .this ratio has to be factored into the computation of the detection boundary .the main message of section [ sec2.1 ] is that the difficulty of the detection problem is not noticeably affected unless this ratio of sequence length to signal length grows exponentially with the number of sequences .sections [ sec2.2 ] and [ sec2.3 ] provide optimal max - type tests that attain the detection boundary .let if and if for some constant .let be the greatest integer function and the number of elements in a set .let denote expectation under .we are interested here in the signal length in ( [ munt ] ) satisfying the case of varying sub - exponentially with will be considered in section [ extensions ] . we shall show that under ( [ c1 ] ) with , the asymptotic threshold detectable value of when and is \\[-8pt ] \nonumber & & \qquad = \cases { \sqrt{\log \bigl(1+n^{2 \beta-1+\zeta } \bigr ) } , & \quad \vspace*{2pt}\cr ( \sqrt{1-\zeta}-\sqrt{1- \zeta-\beta } ) \sqrt{2 \log n } , & \quad \vspace*{2pt}\cr\sqrt{n^{\beta+\zeta-1 } } , & \quad }\end{aligned}\ ] ] the first case can be further sub - divided into : ( a ) , under which and ( b ) , under which formula ( [ bnbz ] ) specifies the functional form of as a function of . since appears in the exponent in ( [ poly ] ) and in the third case of ( [ bnbz ] ) , is specified only up to multiplicative constants in these cases .the boundary is an extension of the donoho ingster jin boundary . in the case of a sparse mixture , , and ( [ c1 ] ) is satisfied with . by the second case in ( [ bnbz ] ) and by ( [ grow ] ) , whenfurthermore , in ( [ poly ] ) recovers the detection boundary in the dense case established by cai , jeng and jin .formula ( [ bnbz ] ) likewise recovers the detection boundary for the special case of only one sequence . for the scaled mean in ( [ munt ] ) , this boundary is known to be and is attained by the penalized scan ; see , for example , . to see how this special case is subsumed in the general setting above , set so that it suffices to consider in ( [ c1 ] ) to parametrize the scale of the signal . 
then set so that the signal is present in each of the sequences .since the signals are aligned and have the same means , by sufficiency one can equivalently consider the one sequence of length obtained by summing the over .dividing by to restore unit variance and formally plugging into ( [ poly ] ) gives a detection threshold for of .this yields the above detection threshold for the one sequence problem apart from the multiplicative constant , which can be recovered with a more refined analysis in ( [ bnbz ] ) .the general formula ( [ bnbz ] ) shows how the growth coefficient and the phase transitions of the growth are altered by the effect of multiple comparisons in the location of signals .the formula also shows that in the case , the signal detection thresholds can grow polynomially with .[ thmm1 ] assume that ( [ munt ] ) and ( [ c1 ] ) hold for , with and for some and .under these conditions , there is no test that can achieve , at all , , the simple likelihood ratio of , for against ( [ munt ] ) , is , where with .the key to proving theorem [ thmm1 ] ( details in section [ sec5 ] ) is to show that under the conditions of theorem [ thmm1 ] , that is , the likelihood ratio of the signal does not grow fast enough to overcome the noise due to the independent comparisons of length .theorem [ thmm1 ] follows because the likelihood ratio test is the most powerful test . as an illustration ,first consider sparse mixture detection .that is , let and test for : against : and .let be the ordered -values of the s .donoho and jin proposed to separate from by applying tukey s higher - criticism test statistic they showed that the higher - criticism test is optimal for sparse mixture detection . under , ; see , theorem 1 . under , the argument of at some is asymptotically larger than , when for some , and lies above the detection boundary . for lying below the detection boundary ,it is not possible to separate from .cai et al . showed that optimality extends to .we motivate the extension of the higher - criticism test to by first considering a fixed , known signal on the interval ] . in theorem [ thmm2b ] below , we shall show that analogously to ( [ phc ] ) , the penalized berk jones test statistic is optimal for aligned signals detection when the signal locations are unknown . [ thmm2b ] assume ( [ munt ] ) and that for some , ( [ c1 ] ) holds and , for some and . under these conditions , can be achieved by testing with . as in section [ sec2.2 ] ,a sequential approach can be used to identify signals when the penalized berk jones exceeds a specified threshold .we shall introduce in section [ sec3.1 ] an alr that is optimal for detecting multi - sample aligned signals .we then consider the special cases of detecting a sparse mixture ( with ) in section [ sec3.2 ] and multiscale detection in a single sequence ( with ) in section [ sec3.3 ] .the alr builds upon the likelihood ratios as defined in ( [ lnlj ] ) , first by substituting by its asymptotic threshold detectable value , followed by integrating over and finally by summing over an approximating set for and . in view of ( [ c1 ] ) , let ] , , such that for the interval , , \nonumber \\[-8pt ] \\[-8pt ] \nonumber i_n^{(k ) } & \sim & \operatorname { bernoulli } \bigl ( \pi_n^{(k ) } \bigr),\end{aligned}\ ] ] and otherwise , with , and . 
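for the pure sparse-mixture case recalled above, the classical detection boundary is rho*(beta) = beta - 1/2 for 1/2 < beta <= 3/4 and (1 - sqrt(1 - beta))^2 for 3/4 < beta < 1. one commonly used form of the higher-criticism statistic can be computed as follows; the exact range of order statistics maximised over, and the guard against p-values of exactly 0 or 1, vary between papers and should be read as assumptions here.

import numpy as np

def higher_criticism(p_values, alpha0=0.5):
    # HC = max over the smallest alpha0 * n order statistics of
    #      sqrt(n) * (i/n - p_(i)) / sqrt( p_(i) * (1 - p_(i)) ).
    p = np.clip(np.sort(np.asarray(p_values, dtype=float)), 1e-12, 1.0 - 1e-12)
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    keep = i <= max(1, int(alpha0 * n))
    return float(np.max(hc[keep]))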
we shall denote by , by and so forth .let be such that , and for and , let [ thmm6 ] assume ( [ x2 ] ) and ( [ unt ] ) .if for all , ( [ c1 ] ) holds and , for some and , then there is no test that can achieve , at all , , conversely , if for some , ( [ c1 ] ) holds and , for some and , then ( [ pp0 ] ) can be achieved by the penalized hc and bj tests .it can be checked that setting will recover for us the boundary for aligned signals in jeng et al .incidentally , they assumed that which effectively brings us to the case .corollary [ cor2 ] below extends the optimality of the hc test in jeng et al . to multiscale signal lengths , by introducing the penalty terms as described in section [ sec2 ] . in place of ( [ c1 ] ) , let .[ cor2 ] assume ( [ x2 ] ) and ( [ unt ] ) .theorem [ thmm6 ] holds under ( [ poly2 ] ) with and .we say that if and , and that if . we start with the proof of theorem [ thmm1 ] in section [ sec5.1 ] , that detection is asymptotically impossible below the detection boundary , followed by the proofs of theorem [ thmm2 ] ( in section [ sec5.2 ] ) and theorem [ thmm3 ] ( in section [ sec5.3 ] ) , that the average likelihood ratio test is optimal .these proofs are consolidated in this section as they are unified by a likelihood ratio approach .since the detection problem is easier when compared to , we may assume without loss of generality that under in all the proofs . for claim of the theorem reduces to theorems proved by ingster in the sparse case and by cai , jeng and jin in the dense case .let , and set , so , set and .let , and let each , , be of the form , with all ] .let , \qquad l_i = \prod_{n=1}^n l_{ni}.\ ] ] since ] , ( [ pan ] ) follows from ( [ clt ] ) ._ case _ 2 : , where , , .let and define , for , .\ ] ] check that \\& = & 1+\pi_n \bigl[e^{\mu_n^2 } \phi(x \sqrt{2 \log n}-2 \mu_n)-\phi(x \sqrt{2 \log n}-\mu_n ) \bigr].\end{aligned}\ ] ] since and when , it follows that ^n\nonumber \\ & \leq & \bigl[1+\pi_n^2 e^{\mu_n^2 } \phi(x \sqrt{2 \log n}-2 \mu_n ) \bigr]^n\nonumber \\[-8pt ] \\[-8pt ] \nonumber & = & \bigl[1+o \bigl(n^{2(y^2-x^2)-2 \varepsilon+2(x - y)^2-(2y - x)^2 } \bigr)/ \sqrt{\log n } \bigr]^n \\ & = & \exp \bigl[o \bigl(n^{1-x^2 - 2 \varepsilon } \bigr)/\sqrt{\log n } \bigr ] = \exp \bigl[o \bigl(n^{\zeta-2 \varepsilon } \bigr)/\sqrt{\log n } \bigr ] .\nonumber\end{aligned}\ ] ] next we apply to show that \\[-8pt ] \nonumber & = & o_p \bigl(n^{\zeta-\varepsilon } \sqrt{\log n } \bigr ) .\nonumber\end{aligned}\ ] ] by ( [ 5.6 ] ) , ( [ 5.8 ] ) and , ( [ l1 ] ) holds for .let .let { \mathbf i}_{\ { y_{ni } \leq x \sqrt{2 \log n } \}} ] for , it follows that \\[-8pt ] \nonumber & \geq & \biggl ( \frac{e_0({\widetilde}l_{1i}^2)}{[e_0({\widetilde}l_{1i})]^2}-1 \biggr ) \kappa_n^2 \sim \operatorname{var}_0({\widetilde}l_{1i } ) \kappa_n^2,\end{aligned}\ ] ] and by ( [ e0 t ] ) , \nonumber \\[-8pt ] \\[-8pt ] \nonumber & & { } - \bigl[e_0({\widetilde}l_{1i}-1 ) \bigr]^2 \\ & \sim & c_1 n^{\zeta-1 - 2 \varepsilon}/\sqrt{\log n } , \nonumber\end{aligned}\ ] ] where ^{-1} ] , and so ( [ pan ] ) holds , but with replaced by .the variability of is larger than that of , and hence ( [ pan ] ) for holds as well ._ case _ 3 : , , with .let , , and define as in ( [ ln1h ] ) . 
since , it follows that we next apply the inequality to show that n \pi_n e \bigl(\mu_n y_{11}- \mu_n^2/2|i_1=1 \bigr ) \stackrel{p } { \sim } n^{\zeta-\varepsilon}.\ ] ] it follows from ( [ p1log ] ) , ( [ logln11 ] ) and that ( [ l1 ] ) holds for .let and for all .then hence ^{-1} ] , and } { 1+n^{-\beta } [ \exp(\mu_n z_{n1}-\mu_n^2/2)-1 ] } \biggr).\end{aligned}\ ] ] since for , it follows that \\[-8pt ] \nonumber & \stackrel{p } { \sim } & n \pi_n \phi(-\mu_n ) \times \cases { n^{\beta-1+\zeta } , & \quad \vspace*{2pt}\cr n^{-\beta } \sqrt{2 } , & \quad \vspace*{2pt}\cr n^{-\beta } e^{3 \mu_n^2/2 } , & \quad \vspace*{2pt}\cr \log2 , & \quad }\end{aligned}\ ] ] in the above, we apply the relation \sim n^{-\beta } ( e^{\kappa \mu_n^2/2}-1) ] to show that and to show that it follows from ( [ elogln1 ] ) and ( [ elogln12 ] ) that .since , we can conclude ( [ n3zeta ] ) from ( [ logl11n ] ) ._ case _ 2 : , where , , .define { \mathbf i}_{\ { z_{n1 } \leq x \sqrt{2 \log n } \ } } , \\ { \widetilde}l_1 ^ 0 & = & \prod_{n=1}^n { \widetilde}l_{n1 } , \\{ \widetilde}l_1 ^ 1 & = & \prod_{n : i_n=1 } \biggl ( \frac{1+n^{-\beta } [ \exp ( \mu_n y_{n1}-\mu_n^2/2)-1 ] { \mathbf i}_{\ { z_{n1 } \leq x \sqrt{2 \log n } \ } } } { 1+n^{-\beta } [ \exp(\mu_n z_{n1}-\mu _n^2/2)-1]{\mathbf i}_{\ { z_{n1 } \leq x \sqrt{2 \log n } \ } } } \biggr).\end{aligned}\ ] ] by ( [ vincrease ] ) , }{1+n^{-\beta } [ \exp(\mu_n x \sqrt{2 \log n}-3 \mu _ n^2/2)-1 ] } \biggr ) \\& \sim & n \pi_n \phi(-y \sqrt{2 \log n } ) \log2 \sim c ( \log2 )n^{\zeta+\varepsilon}/\sqrt{\log n } , \nonumber\end{aligned}\ ] ] where .recall that ^{-1} ] to show that \nonumber \\ & & \qquad\quad { } - \bigl[1/2+o(1 ) \bigr ] n^{-2 \beta } \bigl[e^{\mu_n^2 } \phi \bigl((2y - x ) \sqrt{2 \log n } \bigr ) \nonumber \\[-8pt ] \\[-8pt ] \nonumber & & \hspace*{98pt}\qquad\quad { } -2 \phi(y \sqrt{2 \log n})+\phi(x \sqrt{2 \log n } ) \bigr ] \\ & & \qquad = - \bigl[1+o(1 ) \bigr ] cn^{-\beta - y^2}/\sqrt{\log n}- \bigl[1/2+o(1 ) \bigr ] c_1 n^{-x^2}/\sqrt{\logn } \nonumber \\ & & \qquad = - \bigl[1+o(1 ) \bigr ] ( c+c_1/2 ) n^{\zeta-1}/\sqrt{\log n } , \nonumber\end{aligned}\ ] ] and to show that it follows from ( [ elogt ] ) and ( [ elogt2 ] ) that if and if . since , we can conclude ( [ n3zeta ] ) from ( [ loghex ] ) . _ case _ 3 : , , .the inequality \bigr\}\ ] ] leads to n \pi_n \bigl[\mu_ne_1(y_{n1})- \mu_n^2/2-\beta\log n \bigr ] \\ & = & -2n^{1-\beta}+ \bigl[1+o_p(1 ) \bigr ] n^{\zeta+\varepsilon}/2 \stackrel { p } { \sim } n^{\zeta+\varepsilon}/2,\end{aligned}\ ] ] and from this , we can conclude ( [ n3zeta ] ) .let and let be such that ( [ llr ] ) holds .hence \sqrt { \ell_t^*/ \ell_t } = \bigl[1-o \bigl(r^{-1/2 } \bigr ) \bigr ] \bigl[b_t(\ell_t)+c_t \bigr].\ ] ] since and , it follows from ( [ e1ystar ] ) that we check that under ( [ ctprime ] ) , }{[\log ( t/\ell_t^*)]^3 ( t/\ell_t^ * ) } \stackrel{p } { \rightarrow } \infty.\ ] ] since ^ 3(t/\ell_t)) ] , where denotes the upper -quantile of the standard normal . by the chernoff hoeffding inequality , and hence by bonferroni s inequality , by ( [ brt ] ) , .since for , so , and by ( [ pbj ] ) and ( [ bjcs ] ) , and ( [ l1 ] ) holds .proof of theorem [ thmm2b ] let \in b_{r , t} ] is small relative to ( [ e1s ] ) .hence ( [ pk ] ) holds because when , and this leads to ( [ 1ak ] ) with replaced by , and then to ( [ pk ] ) .1(b ) : , ( , where ) . 
let .then \\[-8pt ] \nonumber & \sim & ( c \sqrt{2 \pi\log n})^{-1 } n^{-2 \beta-(\zeta -1)/2+\varepsilon } , \\\label{1bv } \operatorname{var}_1 \bar s_n & \sim & n^{-1 } \bigl[t_n+\pi_n \phi \bigl(-b_n ( \beta,\zeta ) \bigr ) \bigr].\end{aligned}\ ] ] we claim that a consequence of ( [ 1btn])([1bv ] ) is that by ( [ 1bdiff2 ] ) , and hence by ( [ 1btn ] ) , the solution in of satisfies ,\ ] ] where if and if . hence by ( [ 1bgg ] ) and , as and , ( [ pk ] ) holds .it remains for us to show ( [ 1bdiff2 ] ) . by ( [ 1btn ] ) ,the exponent of in is , which is smaller than the exponent in of ; see ( [ 1bdiff ] ) .therefore , the leading exponent of in var is ,\ ] ] and therefore by ( [ 1bdiff ] ) , var .this , together with ( [ 6.46 ] ) , implies ( [ 1bdiff2 ] ) ._ case _ 2 : , , where , .let ] [ for case 1 and for case 2 ] such that hence by for , since , the type ii error probability indeed goes to zero .we prove theorem [ thmm6 ] here and in sections [ sec7.1 ] and [ sec7.2 ] . in section [ sec7.3 ] , we prove corollary [ cor2 ] .let , , and for , let , with all ] .let , where - 1 \biggr\}\ ] ] is the likelihood ratio of and .below , we go over the relevant cases to show that there exists satisfying ( [ l1 ] ) and ( [ pan ] ) when .this implies that there is no test able to achieve ( [ pp0 ] ) .we shall only consider as the case has been covered in theorem [ thmm1 ] ._ case _ 1 : ( ) , .. by ( [ lni6 ] ) , n^{\zeta -1 - 2 \varepsilon},\ ] ] and therefore ( [ l1 ] ) holds with n^{\zeta-2 \varepsilon } \}) ] for , and therefore , by ( [ vnvar ] ) , \kappa_n^2 \operatorname{var}_0({\widetilde}l_{1i}).\ ] ] by ( [ m01 ] ) and a change - of - measure argument , we check lyapunov s condition to conclude ( [ lya2 ] ) and ( [ pan ] ) . by lemma [ lem1 ] ,setting leads to . to show , it suffices to find such that , and for some and , where and , .1(a ) : , . let .then \sim c_3 n^{-\beta+\varepsilon/2},\ ] ] where , and since therefore .since when and , ( [ k8 ] ) holds for . _ case _1(b ) : ( ) , , where . let .then where , and \nonumber \\[-8pt ] \\[-8pt ] \nonumber & \sim & c_6 n^{-c_4 ^ 2(1+\tau)/[2(1-\tau)]^2-\beta+\varepsilon}/\sqrt { \log n},\end{aligned}\ ] ] where .we claim that \biggr ) = o \bigl((e_1 \bar s_n)^2 \bigr).\ ] ] by ( [ 8tn])([8claim ] ) and , \\[-8pt ] \nonumber & = & c_7 n^{\zeta-1 + 2 \varepsilon}/\sqrt{\log n}\end{aligned}\ ] ] for some .check that the inequality reduces to . therefore by ( [ 8tn ] ) , for some , and the root of satisfies . since as and , ( [ 8quad ] ) implies ( [ k8 ] ) .it remains to show ( [ 8claim ] ) by comparing the leading exponent in of the terms .that is , it remains to show that summarized as .the inequality reduces to , which holds trivially , whereas reduces to which holds because when , and it is assumed that . hence ( [ 8claim ] ) holds ._ case _ 2 : , , where and .let ] .then for some .since , therefore the exponent of in ( [ 3sn ] ) is at least , and so .we apply the first relation in ( [ c9 ] ) to conclude ( [ k8 ] ) , for both and . by lemma [ lem2 ], setting leads to .let it was shown in ( [ k8 ] ) that in each case above , for some . since for and , .consider first the penalized hc test . by lemma [ lem2 ] , setting leads to . in the case , the arguments above and in theorem [ thmm2a ] show that by ( [ poly2 ] ) , , and therefore , and ( [ pp0 ] ) holds . 
by similar arguments , ( [ pp0 ] )holds for the penalized bj test .we check in particular lyapunov s condition to conclude ( [ clt ] ) .let to be specified .it follows from taylor s expansion that for some chosen large enough . if , then u^{2+\delta } = ( 3u)^{2+\delta}.\ ] ] by combining ( [ a1 ] ) and ( [ a2 ] ) , we conclude that where .we apply ( [ a3 ] ) with ] for , therefore ,\ ] ] where .check that , and that . for , apply ( [ qxt ] ) , ( [ c2 ] ) and on to show that the right inequality of ( [ kq2 ] ) holds . for , apply ( [ qxt ] ) , ( [ c2 ] ) and on to show that the right inequality of ( [ kq2 ] ) again holds .let , where are i.i.d . . assume that there exists such that for the sequence , , there is an unknown interval ] disjoint from each other , and from ] of them having mean .therefore by the results in , the critical detectable is .hence when , ( [ d2 ] ) can not be achieved . note that the assumption in ( [ tlt ] ) implies that , so is well - defined .we would like to thank an associate editor and two referees for their comments that have led to a more realistic model setting and the adaptive optimal test statistics in this paper . | we describe , in the detection of multi - sample aligned sparse signals , the critical boundary separating detectable from nondetectable signals , and construct tests that achieve optimal detectability : penalized versions of the berk jones and the higher - criticism test statistics evaluated over pooled scans , and an average likelihood ratio over the critical boundary . we show in our results an inter - play between the scale of the sequence length to signal length ratio , and the sparseness of the signals . in particular the difficulty of the detection problem is not noticeably affected unless this ratio grows exponentially with the number of sequences . we also recover the multiscale and sparse mixture testing problems as illustrative special cases . ./style / arxiv - general.cfg |
the _ virtual element method _ ( vem ) , introduced in , is a recent generalization of the finite element method , which is characterized by the capability of dealing with very general polygonal / polyhedral meshes .the interest in numerical methods that can make use of general polytopal meshes has recently undergone a significant growth in the mathematical and engineering literature ; among the large number of papers on this subject , we cite as a minimal sample . indeed, polytopal meshes can be very useful for a wide range of reasons including meshing of the domain , automatic use of hanging nodes , moving meshes and adaptivity .vem has been applied successfully in a large range of problems ; see for instance .the object of this paper is to introduce and analyze an a posteriori error estimator of residual type for the virtual element approximation of the steklov eigenvalue problem .in fact , due to the large flexibility of the meshes to which the virtual element method is applied , mesh adaptivity becomes an appealing feature as mesh refinement strategies can be implemented very efficiently .for instance , hanging nodes can be introduced in the mesh to guarantee the mesh conformity without spreading the refined zones .in fact hanging nodes introduced by the refinement of a neighboring element are simply treated as new nodes since adjacent non matching element interfaces are perfectly acceptable . on the other hand , polygonal cells with very general shapesare admissible thus allowing us to adopt simple mesh coarsening algorithms .the approximation of eigenvalue problems has been the object of great interest from both the practical and theoretical points of view , since they appear in many applications .we refer to and the references therein for the state of art in this subject area .in particular , the steklov eigenvalue problem , which involves the laplace operator but is characterized by the presence of the eigenvalue in the boundary condition , appears in many applications ; for example , the study of the vibration modes of a structure in contact with an incompressible fluid ( see ) and the analysis of the stability of mechanical oscillators immersed in a viscous media ( see ) .one of its main applications arises from the dynamics of liquids in moving containers , i.e. , sloshing problems ( see ) . on the other hand , adaptive mesh refinement strategies based on a posteriori error indicatorsplay a relevant role in the numerical solution of partial differential equations in a general sense .for instance , they guarantee achieving errors below a tolerance with a reasonable computer cost in presence of singular solutions .several approaches have been considered to construct error estimators based on the residual equations ( see and the references therein ) . in particular , for the steklov eigenvalue problem we mention . on the other hand , the design and analysis of a posteriori error bounds for the vem is a challenging task .references are the only a posteriori error analyses for vem currently available in the literature . in ,a posteriori error bounds for the -conforming vem for the two - dimensional poisson problem are proposed . in ,a posteriori error bounds are introduced for the -conforming vem proposed in for the discretization of second order linear elliptic reaction - convection - diffusion problems with non constant coefficients in two and three dimensions .we have recently developed in a virtual element method for the steklov eigenvalue problem . 
under standard assumptions on the computational domain, we have established that the resulting scheme provides a correct approximation of the spectrum and proved optimal order error estimates for the eigenfunctions and a double order for the eigenvalues . in order to exploit the capability of vem in the use of general polygonal meshes and its flexibility for the application of mesh adaptive strategies, we introduce and analyze an a posteriori error estimator for the virtual element approximation introduced in .since normal fluxes of the vem solution are not computable , they will be replaced in the estimators by a proper projection . as a consequence of this replacement, new additional terms appear in the a posteriori error estimator , which represent the virtual inconsistency of vem .similar terms also appear in the other papers for a posteriori error estimates of vem ( see ) .we prove that the error estimator is equivalent to the error and use the corresponding indicator to drive an adaptive scheme .the outline of this article is as follows : in section [ sec : apriori ] we present the continuous and discrete formulations of the steklov eigenvalue problem together with the spectral characterization .then , we recall the a priori error estimates for the virtual element approximation analyzed in . in section [ sec : posteriorierror ] , we define the a posteriori error estimator and proved its reliability and efficiency .finally , in section [ sec : numerejemplo ] , we report a set of numerical tests that allow us to assess the performance of an adaptive strategy driven by the estimator . we have also made a comparison between the proposed estimator and the standard edge - residual error estimator for a finite element method . throughout the article we will denote by a generic constant independent of the mesh parameter , which may take different values in different occurrences .let be a bounded domain with polygonal boundary .let and be disjoint open subsets of such that with .we denote by the outward unit normal vector to .we consider the following eigenvalue problem : find , , such that \dfrac{\partial w}{\partial n } = \left\{\begin{array}{ll } \l w & \text{on } \go , \\ 0 & \text{on } \g_1 . \end{array}\right . \end{array}\right.\ ] ] by testing the first equation above with and integrating by parts , we arrive at the following equivalent weak formulation : [ p1 ] find , , such that according to ( * ? ? ? * theorem 2.1 ) , we know that the solutions of the problem above are : * , whose associated eigenspace is the space of constant functions in ; * a sequence of positive finite - multiplicity eigenvalues such that .the eigenfunctions corresponding to different eigenvalues are orthogonal in .therefore the eigenfunctions corresponding to satisfy we denote the bounded bilinear symmetric forms appearing in problem [ p1 ] as follows : let be a sequence of decompositions of into polygons .we assume that for every mesh , and are union of edges of elements .let denote the diameter of the element and the maximum of the diameters of all the elements of the mesh , i.e. , . for the analysis, we will make as in the following assumptions . *every mesh consists of a finite number of _ simple _ polygons ( i.e. , open simply connected sets with non self intersecting polygonal boundaries ) . * * a2 .* there exists such that , for all meshes , each polygon is star - shaped with respect to a ball of radius greater than or equal to . 
** there exists such that , for all meshes , for each polygon , the distance between any two of its vertices is greater than or equal to .we consider now a simple polygon and , for , we define we then consider the finite - dimensional space defined as follows : where , for , we have used the convention that .we choose in this space the degrees of freedom introduced in ( * ? ? ?* section 4.1 ) . finally ,for every decomposition of into simple polygons and for a fixed , we define in what follows , we will use standard sobolev spaces , norms and seminorms and also the broken -seminorm which is well defined for every such that for each polygon .we split the bilinear form as follows : where due to the implicit space definition , we must have into account that we would not know how to compute for .nevertheless , the final output will be a local matrix on each element whose associated bilinear form can be exactly computed whenever one of the two entries is a polynomial of degree .this will allow us to retain the optimal approximation properties of the space . with this end , for any and for any sufficiently regular function ,we define first where , , are the vertices of .then , we define the projector for each as the solution of on the other hand , let be any symmetric positive definite bilinear form to be chosen as to satisfy for some positive constants and independent of . then , set where is the bilinear form defined on by notice that the bilinear form has to be actually computable for .the following properties of have been established in ( * ? ? ?* theorem 4.1 ) . * _ -consistency _ : * _ stability _ : there exist two positive constants and , independent of , such that : now , we are in a position to write the virtual element discretization of problem [ p1 ] .[ p11 ] find , , such that according to ( * ? ? ?* theorem 3.1 ) we know that the solutions of the problem above are : * , whose associated eigenfunction are the constant functions in .* , with , which are positive eigenvalues repeated according to their respective multiplicities .moreover , the eigenfunctions corresponding to different eigenvalues are orthogonal in . therefore the eigenfunctions corresponding to satisfy let be a solution to problem [ p1 ] .we assume is a simple eigenvalue and we normalize so that . then , for each mesh , there exists a solution of problem [ p11 ] such that , and as .moreover , according to and , we have that and belong to the space let us remark that the following generalized poincar inequality holds true in this space : there exists such that the following a priori error estimates have been proved in ( * ? ? ?* theorems 4.24.4 ) : there exists such that for all where the constant is the sobolev exponent for the laplace problem with neumann boundary conditions .let us remark that , if is convex , and with being the largest re - entrant angle of , otherwise .the aim of this section is to introduce a suitable residual - based error estimator for the steklov eigenvalue problem which be fully computable , in the sense that it depends only on quantities available from the vem solution .then , we will show its equivalence with the error .for this purpose , we introduce the following definitions and notations . for any polygon ,we denote by the set of edges of and we decompose , where , and . 
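For orientation, here is a minimal sketch of the formulations used above, written under the standard assumptions of this setting (the sloshing form of the Steklov problem, the usual \(H^1\)-conforming virtual space, and the projector/stabilization splitting of the local discrete form); the symbols follow common VEM usage rather than the paper's exact labels. The model problem and its weak form read
\[
\Delta w = 0 \ \text{in } \Omega, \qquad
\frac{\partial w}{\partial n} = \lambda w \ \text{on } \Gamma_0, \qquad
\frac{\partial w}{\partial n} = 0 \ \text{on } \Gamma_1 ,
\]
\[
\text{find } (\lambda,w)\in\mathbb{R}\times H^1(\Omega),\ w\neq 0:\qquad
a(w,v):=\int_\Omega \nabla w\cdot\nabla v\,dx
\;=\;\lambda\,b(w,v):=\lambda\int_{\Gamma_0} w\,v\,ds
\qquad\forall\,v\in H^1(\Omega).
\]
On each polygon \(E\) the projector \(\Pi^\nabla_E:V_h^E\to\mathbb{P}_k(E)\) is characterized by
\[
a^E\big(\Pi^\nabla_E v_h - v_h,\,q\big)=0 \quad \forall\,q\in\mathbb{P}_k(E),
\qquad
\overline{\Pi^\nabla_E v_h}=\overline{v_h},
\]
where the bar denotes the vertex average fixing the constant part, and the computable local bilinear form has the structure
\[
a_h^E(u_h,v_h)=a^E\big(\Pi^\nabla_E u_h,\Pi^\nabla_E v_h\big)
+S^E\big((I-\Pi^\nabla_E)u_h,(I-\Pi^\nabla_E)v_h\big),
\]
so that the discrete eigenvalue problem reads \(a_h(w_h,v_h)=\lambda_h\,b(w_h,v_h)\) for all \(v_h\in V_h\), the form \(b\) being directly computable from the boundary degrees of freedom.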
for each inner edge and for any sufficiently smooth function , we define the jump of its normal derivative on by \!\!\right]_\ell:=\nabla ( v|_{\e } ) \cdot n_{\e}+\nabla ( v|_{\e ' } ) \cdot n_{\e ' } , \ ] ] where and are the two elements in sharing the edge and and are the respective outer unit normal vectors . as a consequence of the mesh regularity assumptions , we have that each polygon admits a sub - triangulation obtained by joining each vertex of with the midpoint of the ball with respect to which is starredlet .since we are also assuming * a3 * , is a shape - regular family of triangulations of .we introduce bubble functions on polygons as follows ( see ) .an interior bubble function for a polygon can be constructed piecewise as the sum of the cubic bubble functions for each triangle of the sub - triangulation that attain the value 1 at the barycenter of each triangle . on the other hand ,an edge bubble function for is a piecewise quadratic function attaining the value 1 at the barycenter of and vanishing on the triangles that do not contain on its boundary .the following results which establish standard estimates for bubble functions will be useful in what follows ( see ) .[ burbujainterior ] for any , let be the corresponding interior bubble function . then , there exists a constant independent of such that [ burbuja ] for any and , let be the corresponding edge bubble function .then , there exists a constant independent of such that moreover , for all , there exists an extension of ( again denoted by ) such that [ extencion ] a possible way of extending from to so that lemma [ burbuja ] holds is as follows : first we extend to the straight line using the same polynomial function .then , we extend it to the whole plain through a constant prolongation in the normal direction to . finally , we restrict the latter to .the following lemma provides an error equation which will be the starting point of our error analysis .from now on , we will denote by the eigenfunction error and by \!\!\right]_{\ell},\quad & \ell\in \ce_{\o } , \\[0.4 cm ] \l_h w_h-\dfrac{\partial ( \pik w_{h})}{\partial n},\quad & \ell\in\ce_{\g_0 } , \\[0.4 cm ] -\dfrac{\partial ( \pik w_{h})}{\partial n},\quad & \ell\in\ce_{\g_1 } , \end{array}\right.\ ] ] the edge residuals . notice that are actually computable since they only involve values of on ( which are computable in terms of the boundary degrees of freedom ) and which is also computable .[ ext2 ] for any , we have the following identity : .\ ] ] using that is a solution of problem [ p1 ] , adding and subtracting and integrating by parts , we obtain \\ & = \l b(w , v)-\sum_{\e\in \ct_h}a^\e(w_h-\pik w_h , v ) -\sum_{\e\in \ct_h}\left[-\int_{\e}\delta(\pik w_h)\,v + \int_{\partial \e}\dfrac{\partial(\pik w_{h } ) } { \partial{n}}\,v\right ] \\ &= \l b(w , v)-\sum_{\e\in\ct_h}a^\e(w_h-\pik w_h , v ) \\ & \quad + \sum_{\e\in\ct_h}\left[\int_{\e}\delta(\pik w_h)\,v -\!\!\!\!\!\!\sum_{\ell\in\ce_{\e}\cap(\ce_{\go}\cup\ce_{\g_1 } ) } \int_{\ell}\dfrac{\partial(\pik w_{h})}{\partial{n}}\,v + \dfrac{1}{2}\sum_{\ell\in\ce_{\e}\cap\ce_{\o } } \int_{\ell}\left[\!\!\left[\dfrac{\partial(\pik w_{h } ) } { \partial{n}}\right]\!\!\right]_{\ell}v\right].\end{aligned}\ ] ] finally , the proof follows by adding and subtracting the term . 
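Collecting the three cases appearing in the definition above (interior edges, edges on \(\Gamma_0\), edges on \(\Gamma_1\)), one consistent way to write out the edge residuals entering the error equation is
\[
J_\ell:=
\begin{cases}
\left[\!\!\left[\dfrac{\partial(\Pi^\nabla w_h)}{\partial n}\right]\!\!\right]_\ell, & \ell\in\mathcal{E}_\Omega,\\[2mm]
\lambda_h w_h-\dfrac{\partial(\Pi^\nabla w_h)}{\partial n}, & \ell\in\mathcal{E}_{\Gamma_0},\\[2mm]
-\dfrac{\partial(\Pi^\nabla w_h)}{\partial n}, & \ell\in\mathcal{E}_{\Gamma_1},
\end{cases}
\]
all of which are computable, since they only involve the boundary values of \(w_h\) (known from the boundary degrees of freedom) and the polynomial projection \(\Pi^\nabla w_h\).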
for all , we introduce the local terms and and the local error indicator by we also introduce the global error estimator by [ rkk ] the indicators include the terms which do not appear in standard finite element estimators .this term , which represent the virtual inconsistency of the method , has been introduced in for a posteriori error estimates of other vem .let us emphasize that it can be directly computed in terms of the bilinear form .in fact , first , we provide an upper bound for the error .[ erroipo ] there exists a constant independent of such that , there exists satisfying ( see ( * ? ? ?* proposition 4.2 ) ) then , we have that }_{t_{2}}\\ & -\underbrace{\sum_{\e\in \ct_h}a^{\e } ( w_h-\pik w_h , e - e_i)}_{t_{3}}+\underbrace{a_h(w_h , e_i)-a ( w_h , e_i)}_{t_{4 } } , \end{split}\ ] ] the last equality thanks to lemma [ ext2 ] .next , we bound each term separately . for , we use the definition of , the fact that , a trace theorem and to write for , first , we use a local trace inequality ( see ( * ? ? ?* lemma 14 ) ) and to write hence , using again , we have \\\nonumber & \leq c\sum_{\e\in \ct_h}\left[h_\e\|\delta ( \pik w_h)\|_{0,\e}\|e\|_{1,\e}+\sum_{\ell \in \ce_{\e}}h_\e^{1/2}\|j_{\ell}\|_{0,\ell}\|e\|_{1,\e}\right]\\ & \leq c\left\{\sum_{\e\in \ct_h}\left[h_\e^2\|\delta ( \pik w_h)\|_{0,\e}^2+\sum_{\ell \in \ce_{\e}}h_\e\|j_{\ell}\|_{0,\ell}^2\right]\right\}^{1/2}|e|_{1,\o},\end{aligned}\ ] ] where for the last estimate we have used . to bound , we use the _ stability _ property and to write where for the last estimate we have used remark [ rkk ] and again .finally , to bound , we add and subtract on each and use the _ -consistency _ property : \\\nonumber & \leq\sum_{\e\in \ct_h}a_h^\e(w_h-\pik w_h , w_h-\pik w_h)^{1/2}a_{h}^\e(e_{i},e_i)^{1/2}\\\nonumber & \quad+\sum_{\e\in \ct_h}a^\e(w_h-\pik w_h , w_h-\pik w_h)^{1/2}a^\e(e_{i},e_i)^{1/2}\\\nonumber & \leq c\sum_{\e\in \ct_h}a_h^\e(w_h-\pik w_h , w_h-\pik w_h)^{1/2}|e_{i}|_{1,\e}\\ & \leq c \left(\sum_{\e\in \ct_h}\theta_{\e}^{2}\right)^{1/2}|e|_{1,\o},\end{aligned}\ ] ] where we have used the _ stability _ property , and for the last two inequalities. thus , the result follows from .although the virtual approximate eigenfunction is , this function is not known in practice .instead of , what can be used as an approximation of the eigenfunction is , where is defined for by notice that is actually computable .the following result shows that an estimate similar to that of theorem [ erroipo ] holds true for .[ corolario2 ] there exists a constant independent of such that for each polygon , we have that now , using together with remark [ rkk ] , we have that thus , the result follows from theorem [ erroipo ] . 
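Schematically, and up to the precise constants fixed in the definitions above, the estimator assembled in this section has the structure
\[
\eta_E^2:=\theta_E^2+h_E^2\,\big\|\Delta(\Pi^\nabla w_h)\big\|_{0,E}^2
+\sum_{\ell\in\mathcal{E}_E}h_E\,\|J_\ell\|_{0,\ell}^2,
\qquad
\eta^2:=\sum_{E\in\mathcal{T}_h}\eta_E^2,
\]
with \(\theta_E^2:=S^E\big(w_h-\Pi^\nabla w_h,\,w_h-\Pi^\nabla w_h\big)\) the virtual-inconsistency term, and the reliability results just proved can be summarized, in compact form, as
\[
|w-w_h|_{1,\Omega}\le C\,\eta,
\qquad
\big|w-\Pi^\nabla w_h\big|_{1,\Omega}\le C\,\eta .
\]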
in what follows ,we prove a convenient upper bound for the eigenvalue approximation .[ cotalambda ] there exists a constant independent of such that from the symmetry of the bilinear forms together with the facts that for all , for all and , we have \\ & \leq c\left[|w - w_h|_{1,\o}^2+|a_h(w_h , w_h)-a(w_h , w_h)|\right],\end{aligned}\ ] ] where we have also used a trace theorem and .we now bound the last term on the right - hand side above using the definition of and : -\sum_{\e\in\ct_h}a^\e(w_h , w_h)\right| \\ & \,\ , \leq\left|\sum_{\e\in\ct_h } \left[a^\e\big(\pik w_h,\pik w_h\big ) -a^\e(w_h , w_h)\right]\right|+\sum_{\e\in\ct_h } c_1\,a^\e\big(w_h-\pik w_h , w_h-\pik w_h\big ) \\ & \,\ , = \sum_{\e\in\ct_h}\left(1+c_1\right ) a^\e\big(w_h-\pik w_h , w_h-\pik w_h\big)\\ & \leq \left(1+c_1\right)\sum_{\e\in\ct_h}\left(\left|w_h - w\right|_{1,\e}^{2 } + \left|w-\pik w_{h}\right|_{1,\e}^{2}\right).\end{aligned}\ ] ] finally , from the above estimate and we obtain .according to and , it seems reasonable to expect the term in the estimate of theorem [ erroipo ] to be of higher order than and hence asymptotically negligible .however this can not be rigorously derived from and , which are only upper error bounds .in fact , the actual error could be in principle of higher order than the estimate .our next goal is to prove that the term is actually asymptotically negligible in the estimates of corollaries [ corolario2 ] and [ cotalambda ] . with this aim, we will modify the estimate and prove that this proof is based on the arguments used in section 4 from . to avoid repeating them step by step , in what follows we will only report the changes that have to be made in order to prove .we define in the bilinear form , which is elliptic ( * ? ? ?* lemma 2.1 ) .let be the solution of since we have that we also define in the bilinear form , which is elliptic uniformly in ( * ? ? ?* lemma 3.1 ) .let be the solution of the arguments in the proof of lemma 4.3 from can be easily modified to prove that then , using this estimate in the proof of theorem 4.4 from yields now , since as stated above , we have that to estimate the third term we recall first that then , subtracting this equation divided by from we have that hence , from the uniform ellipticity of in , we obtain therefore the last inequality because of poincar inequality .then , substituting and into we obtain where we have used for the last inequality . substituting this and estimate into we obtain finally , substituting the above estimate and into , we conclude the proof of the following result .[ asintotico ] there exists independent of such that using this result , now it easy to prove that the term in corollaries [ corolario2 ] and [ cotalambda ] is asymptotically negligible .in fact , we have the following result .there exist positive constants and such that , for all , there holds from lemma [ asintotico ] and corollary [ corolario2 ] we have hence , it is straightforward to check that there exists such that for all holds true . 
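In the same compact notation, and up to constants, the eigenvalue bound proved above can be summarized as
\[
|\lambda-\lambda_h|\;\le\;C\Big(|w-w_h|_{1,\Omega}^2
+\sum_{E\in\mathcal{T}_h}\big|w_h-\Pi^\nabla w_h\big|_{1,E}^2\Big)
\;\le\;C\,\eta^2,
\]
so that the estimator also controls the (double-order) eigenvalue error; this is the form used below when the higher-order terms are shown to be asymptotically negligible.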
on the other hand , from lemma [ asintotico ]and we have that for all then , for small enough , follows from corollary [ cotalambda ] and the above estimate .we will show in this section that the local error indicators are efficient in the sense of pointing out which polygons should be effectively refined .first , we prove an upper estimate of the volumetric residual term .[ eficiencia1 ] there exists a constant independent of such that for any , let be the corresponding interior bubble function .we define .since vanishes on the boundary of , it may be extended by zero to the whole domain .this extension , again denoted by , belongs to and from lemma [ ext2 ] we have since , using lemma [ burbujainterior ] and the above equality we obtain where , for the last inequality , we have used again lemma [ burbujainterior ] and together with remark [ rkk ] .multiplying the above inequality by allows us to conclude the proof .next goal is to obtain an upper estimate for the local term [ cotar ] there exists independent of such that from the definition of together with remark [ rkk ] and estimate we have the proof is complete .the following lemma provides an upper estimate for the jump terms of the local error indicator .there exists a constant independent of such that [ lema4 ] where .first , for , we extend to the element as in remark [ extencion ] .let be the corresponding edge bubble function .we define .then , may be extended by zero to the whole domain .this extension , again denoted by , belongs to and from lemma [ ext2 ] we have that for , from lemma [ burbuja ] and the above equality we obtain \\ & \leq c\left[\left(\vert e\vert_{1,\e}+\vert w_{h}-\pik w_{h}\vert_{1,\e}\right ) h_\e^{-1/2}\left\|j_{\ell}\right\|_{0,\ell}+h_\e^{-1}\left(\theta_{\e } + \vert e\vert_{1,\e}\right)h_\e^{1/2}\left\|j_{\ell}\right\|_{0,\ell}\right]\\ & \leq ch_\e^{-1/2}\left\|j_{\ell}\right\|_{0,\ell}\big(\vert e\vert_{1,\e}+\theta_{\e}\big),\end{aligned}\ ] ] where we have used again lemma [ burbuja ] together with estimate . multiplying by the above inequality allows us to conclude .secondly , for , we extend to as in the previous case . taking into account that in this case and is a quadratic bubble function in , from lemma [ ext2 ] we obtain then , repeating the previous arguments we obtain .\ ] ] hence , using lemma [ burbuja ] and a local trace inequality we arrive at \\ & \leq ch_\e^{-1/2}\|j_{\ell}\|_{0,\ell}\left(\theta_{\e}+|e|_{1,\e}+h_\e^{1/2}\|\l w -\l_h w_h\|_{0,\ell}\right),\end{aligned}\ ] ] where we have used lemma [ burbuja ] again . multiplying by the above inequalityyields .finally , for , we extend to as above again . taking into account that and is a quadratic bubble function in , from lemma [ ext2 ] we obtain then, proceeding analogously to the previous case we obtain .\ ] ] thus , the proof is complete .now , we are in a position to prove an upper bound for the local error indicators .[ eficiencia ] there exists such that ,\ ] ] .it follows immediately from lemmas [ eficiencia1][lema4 ] . 
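Collecting the lemmas of this section, the local efficiency statement can be summarized, schematically and up to multiplicative constants and element patches, as
\[
\eta_E^2\;\lesssim\;|w-w_h|_{1,\omega_E}^2
+\sum_{E'\subset\omega_E}\theta_{E'}^2
+\sum_{\ell\in\mathcal{E}_E\cap\mathcal{E}_{\Gamma_0}}h_E\,\|\lambda w-\lambda_h w_h\|_{0,\ell}^2,
\]
where \(\omega_E\) denotes a patch of elements neighboring \(E\); the boundary term and the virtual-inconsistency terms are precisely the ones shown next to be asymptotically negligible at the global level. This is only a compact rewriting of the structure of the bounds, not the theorem's exact statement.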
according to the above theorem, the error indicators provide lower bounds of the error terms in the neighborhood of .for those elements with an edge on , the term also appears in the estimate .let us remark that it is reasonable to expect this terms to be asymptotically negligible .in fact , this is the case at least for the global estimator as is shown in the following result .there exists a constant such that from theorem [ eficiencia ] we have that the last term on the right hand side above is bounded as follows : where we have used that .now , by using a trace inequality and poincar inequality we have on the other hand , using the estimate , we have and we conclude the proof .in this section , we will investigate the behavior of an adaptive scheme driven by the error indicator in two numerical tests that differ in the shape of the computational domain and , hence , in the regularity of the exact solution . with this aim ,we have implemented in a matlab code a lowest - order vem ( ) on arbitrary polygonal meshes following the ideas proposed in . to complete the choice of the vem , we had to choose the bilinear forms satisfying . in this respect , we proceeded as in ( * ? ? ?* section 4.6 ) : for each polygon with vertices , we used in all our tests we have initiated the adaptive process with a coarse triangular mesh . in order to compare the performance of vem with that of a finite element method ( fem ) , we have used two different algorithms to refine the meshes .the first one is based on a classical fem strategy for which all the subsequent meshes consist of triangles .in such a case , for , vem reduces to fem .the other procedure to refine the meshes is described in .it consists of splitting each element into quadrilaterals ( being the number of edges of the polygon ) by connecting the barycenter of the element with the midpoint of each edge as shown in figure [ fig : cero ] ( see for more details ) .notice that although this process is initiated with a mesh of triangles , the successively created meshes will contain other kind of convex polygons , as can be seen in figures [ fig : uno ] and [ fig : cinco ] . since we have chosen , according to the definition of the local virtual element space ( cf .) , the term vanishes .thus , the error indicators reduce in this case to let us remark that in the case of triangular meshes , the term vanishes too , since and hence is the identity . by the same reason, the projection also disappears in the definition of .therefore , for triangular meshes , not only vem reduces to fem , but also the error indicator becomes the classical well - known edge - residual error estimator ( see ) : \!\!\right]_{\ell},\quad & \ell\in \ce_{\o } , \\[0.4 cm ] \l_h w_h-\dfrac{\partial w_{h}}{\partial{n } } , \quad & \ell\in\ce_{\g_0 } , \\[0.4 cm ] -\dfrac{\partial w_{h}}{\partial{n } } , \quad & \ell\in\ce_{\g_1}. \end{array}\right.\ ] ] in what follows , we report the results of a couple of tests . 
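The polygonal refinement used for the VEM runs (connecting the barycenter of an element with the midpoints of its edges, so that an element with n edges is split into n quadrilaterals) can be sketched as follows. The vertex-average barycenter and the counter-clockwise ordering of the vertices are simplifying assumptions of the sketch, not requirements taken from the text.

```python
import numpy as np

def refine_polygon(vertices):
    """Split a convex polygon into n quadrilaterals (n = number of edges) by
    connecting its barycenter with the midpoint of each edge, as in the
    refinement procedure described above.  `vertices` is an (n, 2) array of
    vertex coordinates listed counter-clockwise."""
    v = np.asarray(vertices, dtype=float)
    n = len(v)
    barycenter = v.mean(axis=0)                     # vertex average (assumed choice)
    midpoints = 0.5 * (v + np.roll(v, -1, axis=0))  # midpoint of edge (i, i+1)
    quads = []
    for i in range(n):
        # quadrilateral: vertex i, midpoint of edge i, barycenter, midpoint of edge i-1
        quads.append(np.array([v[i], midpoints[i], barycenter, midpoints[i - 1]]))
    return quads
```

Applied to a triangle this produces three quadrilaterals, so the meshes generated after the first step contain general convex polygons and hanging nodes, exactly as described above.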
in both cases , we will restrict our attention to the approximation of the eigenvalues .let us recall that according to corollary [ cotalambda ] , the global error estimator provides an upper bound of the error of the computed eigenvalue .we have chosen for this test a problem with known analytical solution .it corresponds to the computation of the sloshing modes of a two - dimensional fluid contained in the domain with a horizontal free surface as shown in figure [ fig : slosh ] .the solutions of this problem are we have used the two refinement procedures ( vem and fem ) described above .both schemes are based on the strategy of refining those elements which satisfy figures [ fig : uno ] and [ fig : dos ] show the adaptively refined meshes obtained with vem and fem procedures , respectively .+ + since the eigenfunctions of this problem are smooth , according to we have that .therefore , in case of uniformly refined meshes , , where denotes the number of degrees of freedom which is the optimal convergence rate that can be attained .figure [ fig : tres ] shows the error curves for the computed lowest eigenvalue on uniformly refined meshes and adaptively refined meshes with fem and vem schemes .the plot also includes a line of slope , which correspond to the optimal convergence rate of the method .for uniformly refined meshes ( `` uniform fem '' ) , adaptively refined meshes with fem ( `` adaptive fem '' ) and adaptively refined meshes with vem ( `` adaptive vem'').,title="fig:",width=377,height=377 ] [ fig : tres ] it can be seen from figure [ fig : tres ] that the three refinement schemes lead to the correct convergence rate .moreover , the performance of adaptive vem is slightly better than that of adaptive fem , while this is also better than uniform fem .we report in table [ tabla : n2 ] , the errors and the estimators at each step of the adaptive vem scheme .we include in the table the terms which arise from the inconsistency of vem and which arise from the edge residuals .we also report in the table the effectivity indexes .components of the error estimator and effectivity indexes on the adaptively refined meshes with vem . [ cols="^,^,^,^,^,^,^",options="header " , ] [ tabla : n3 ]we have derived an a posteriori error indicator for the vem solution of the steklov eigenvalue problem .we have proved that it is efficient and reliable . for lowest order elements on triangular meshes , vem coincides with fem and the a posteriori error indicators also coincide with the classical ones. however vem allows using general polygonal meshes including hanging nodes , which is particularly interesting when designing an adaptive scheme .we have implemented such a scheme driven by the proposed error indicators .we have assessed its performance by means of a couple of tests which allow us to confirm that the adaptive scheme yields optimal order of convergence for regular as well as singular solutions .the authors warmfully thanks loureno beiro da veiga from universit di milano - bicocca , italy , by many helpful discussions on this subject .the authors also thank paola f. 
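The refinement criterion driving both adaptive schemes in this test (refine the elements whose local indicator exceeds a fixed fraction of the largest one) can be sketched as below; the value `theta = 0.5` is an assumed, typical choice, since only the form of the threshold condition on the local indicators is stated in the text.

```python
def mark_elements(eta, theta=0.5):
    """Maximum marking strategy: flag for refinement every element whose local
    indicator eta_E exceeds a fraction `theta` of the largest indicator over
    the current mesh.  `eta` is a list of the local indicators, one per element;
    the returned list contains the indices of the elements to be refined."""
    eta_max = max(eta)
    return [k for k, eta_k in enumerate(eta) if eta_k >= theta * eta_max]
```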
antonietti from politecnico di milano , italy , by allowing us to use her code for vem adaptive mesh refinement .the first author was partially supported by conicyt ( chile ) through fondecyt project no .1140791 and by diubb through project 151408 gi / vc , universidad del bo - bo , ( chile ) .the second author was partially supported by a conicyt ( chile ) fellowship .the third author was partially supported by basal project , cmm , universidad de chile ( chile ) .gain , c. talischi and g.h .paulino , _ on the virtual element method for three - dimensional linear elasticity problems on arbitrary polyhedral meshes _ , comput .methods appl .engrg . , 282 , ( 2014 ) , pp . | the paper deals with the a posteriori error analysis of a virtual element method for the steklov eigenvalue problem . the virtual element method has the advantage of using general polygonal meshes , which allows implementing very efficiently mesh refinement strategies . we introduce a residual type a posteriori error estimator and prove its reliability and efficiency . we use the corresponding error estimator to drive an adaptive scheme . finally , we report the results of a couple of numerical tests , that allow us to assess the performance of this approach . virtual element method , a posteriori error estimates , steklov eigenvalue problem , polygonal meshes 65n25 , 65n30 , 74s99 . |
consider the cauchy problem for a scalar conservation law in one space variable where , smooth ( by _ smooth _ we mean at least of class ) .it is well known that there exists a unique entropic solution satisfying in particular we can assume w.l.o.g . that for each , and uniformly bounded . the solutionto can be constructed in several ways , for example nonlinear semigroup theory , finite difference schemes , and in particular wavefront tracking and glimm scheme . in the scalar case ,in fact , the above functionals are lyapunov functionals , i.e. functionals decreasing in time , ( or _ potential _ , as they are usually called in the literature ) for all the schemes listed , yielding compactness of the approximated solution ( even in the multidimensional case ) .in it is shown that there exists an additional lyapunov functional ] , which will be called the _ set of waves_. in section [ s_front_glimm ] we construct for the glimm scheme a function \\ & & ( t , s ) & \mapsto & \mathtt x(t , s ) \end{array}\ ] ] with the following properties : 1 .the set is of the form with : define as the set 2 .the function is increasing , -lipschitz and linear in each interval if ; 3 . if , then for some , i.e. it takes values in the grid points at each time step ; 4 .for such that it holds 5 .there exists a time - independent function , the _ sign _ of the wave , such that for all .the last formula means that for all test functions it holds the fact that means that the wave has been removed from the solution by a cancellation occurring at time .formula and a fairly easy argument , based on the monotonicity properties of the riemann solver and used in the proof of lemma [ w_lemma_eow ] , yield that to each wave it is associated a unique value ( independent of ) by the formula } \mathcal s(s ) ds.\ ] ] we finally define the _ speed function _ \cup \{+\infty\} ] , ] and ] : the corresponding quantity is then given by } f(u^m),\ ] ] assuming again for definiteness ( see figure [ fig : figura19 ] ) . in this way , one can rewrite also for the glimm scheme . in the wavefront tracking.,width=302,height=188 ] in the glimm scheme , width=377,height=264 ] our choice to present two separate theorems is motivated by the following facts . first of all , due to the number of papers present in the literature concerning this estimate , we believe necessary to give a correct proof in the most simple case , i.e. wavefront tracking for scalar conservation laws .since however the estimate has been used mainly to prove the convergence rate of the glimm scheme , we felt necessary to offer also a direct proof for this approximation scheme .it turns out that even if the fundamental ideas are the same , the two proofs are sufficiently different in some points to justify a separate analysis . in our opinion , in fact , it is not trivial to deduce one from the other . 
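Before turning to the difficulties of the estimate, it may help to record how the speed function introduced above is assigned. If at time \(t\) the wave \(s\) sits inside a discontinuity of the approximate solution with left state \(u^L\) and right state \(u^R\), then, up to the sign conventions fixed above, its speed is the slope of the relevant envelope of the (piecewise affine, in the wavefront tracking case) flux at the value \(\hat u(s)\):
\[
\sigma(t,s)=
\begin{cases}
\dfrac{d}{du}\,\operatorname{conv}_{[u^L,u^R]}f_\varepsilon\big(\hat u(s)\big), & u^L<u^R\ \ (\text{positive waves}),\\[2mm]
\dfrac{d}{du}\,\operatorname{conc}_{[u^R,u^L]}f_\varepsilon\big(\hat u(s)\big), & u^L>u^R\ \ (\text{negative waves}),
\end{cases}
\]
and \(\sigma(t,s)=+\infty\) after the wave has been cancelled. The convex-envelope estimates recalled later are what make this assignment Lipschitz-regular with respect to the data.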
as observed in , one of the main problems in obtaining an estimate of the form , is that the study of wave interactions can not be local in time , but one has to take into accoun the whole sequence of interactions - cancellations for every couple of waves .this is a striking difference with the glimm interaction potential , where the past history of the solution is irrelevant .previous attempts tried to adapt glimm s idea of finding a _ quadratic potential _ which is decreasing in time and at every interaction has a jump of the amount .apart technical variations , the idea is to transform the function of into : = \sum_{w \not= w ' } \frac{|\sigma(w ) -\sigma(w')| |w| |w'|}{|w| + |w'|}.\ ] ] for monotone initial data , this functional is sufficient ; however in f. ancona and a. marson show that defined above may be not bounded , so that in they consider the functional where is the position of the wave at time .notice that at the time of interaction of the wavefronts , one has ( there are no wavefronts between the two interacting ) , so that for the couple of waves , one has if the flux function has no finite inflection points , the waves can join in wavefronts ( because of an interaction ) and split again ( because of a cancellation ) an arbitrary large number of times .this implies that the functional ( as well as the other quadratic functionals introduced in the literature ) does not decay in time , but can increase due to cancellations .hence , instead of proving directly that the quadratic functional controls the interactions , in the authors consider a term which bounds the oscillations of for the waves not involved in an interaction , and prove that } \frac{|\sigma - \sigma'| s s'}{s + s ' } \leq ( q^\text{{am}}(t_1 ) + g(t_1 ) ) - \big ( q^\text{{am}}(t_2 ) + g(t_2 ) \big).\ ] ] since is quadratic by construction because of the lipschitz regularity of the speed ( inequality ) and , , they reduce the quadratic interaction estimate to the following estimate : our approach is slightly different : we construct a quadratic functional such that its total variation in time is bounded by , and at any interaction decays at least of the quantity ( or more precisely of the quantities in the l.h.s . of or concerning that interaction ) . the functional can increase due to cancellations , but in this case we show that its positive variation is controlled by the total variation of the solution times the amount of cancellation . being the total variation a lyapunov functional , it follows that so that , being , the functional has total variation of the order of . in particular , the estimates , concerning cancellations is much easier ( and already done in the literature , see ) , and we present it in propositions [ w_canc_3 ] , [ canc_3 ] , depending on the approximation scheme considered . in the case of cancellations , in fact , there is a first order functional decreasing , namely the total variation . for simplicity we present the sketch of the proof in the wavefront tracking casewe define again a functional of a form similar to , but with main differences . 1 .first of all its definition involves the waves , not the wavefronts .[ point_glimm ] if the waves , have not yet interacted , then the weight is a large constant . in our case, it suffices .+ if the waves have already interacted , then the weight has the form 3 . 
in the above formula, the quantity is the difference in speed given to the waves , by an artificial riemann problem , which roughly speaking collects all the previous common history of the waves , .this makes not local in time and space .the denominator of is not the total variation of between two wavefronts , ( containing , respectively ) as it is the case in , but only the total variation between the two waves , . observe that by the lipschitz regularity of the speed given to the waves by solving a riemann problem , it is fairly easy ( and proved in section [ w_functional_q ] ) that we observe first that the functional restricted to the couple of waves which have never interacted is exactly ( apart from the constant ) the original glimm functional : = \sum_{w \not= w ' } |w| |w'|.\ ] ] since the couple of waves which have never interacted is decreasing , this part of the functional is decreasing , and as observed before it is sufficient to control the quadratic estimates for couple of waves which have never interacted . the choice of the denominator in yields that our functional is not affected by the issue of large oscillations , as observed in , even if its form is similar to : indeed , in our case, the denominator we choose does not depend on the shock component which the waves belong to , and thus cancellations do not affect it .next , the riemann problem used to compute the quantity is made of all waves which have interacted with both waves and .we now show how it evolves with time .interaction : : if at an interaction occurs , and , are not involved in the interaction , the set of waves which have interacted with both is not changing ( since they are separated , at most one of them is involved in the interaction ! ) , which means that .if , are involved in the interaction , then the couple disappears from the sum , because when two wavefronts with the same sign interact a single wavefront comes out .cancellation : : if a cancellation occurs at , then one can check that again if both , are not involved in the wavefront collision then is constant .otherwise the change in corresponds to the change in speed obtained by removing some waves in a riemann problem , and adding all these variations one obtains that the oscillation of can be estimated explicitly as in the case of a single cancellation ( i.e. the total variation of the solution times the amount of cancellation ) . the cancellation case , corresponding to the positive total variation in time of , is thus controlled by and since and it follows that when an interaction between , occurs , the discussion above shows that we can split the waves involved in the interaction into sets : 1 . a set of waves in which have never interacted with the waves in ; 2 . a set of waves in which have interacted with a set of waves in ; 3 . a set of waves in which have never interacted with the waves in .the speed assigned by the artificial riemann problems for all couples of waves , yields that the decrease of at the interaction time is ( larger than the one ) given by the wave pattern depicted in figure [ fig : figura37 ] ..,width=415,height=226 ] by an explicit computation one can check that thus obtaining the desired estimate . 
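As a concrete reference point for the functionals discussed above, the baseline (unweighted) quadratic interaction potential can be computed for a finite collection of waves as in the following sketch. The paper's functional replaces \(|\sigma(w)-\sigma(w')|\) by the speed difference produced by the artificial Riemann problem collecting the common history of the pair, and weights the pairs accordingly; the sketch only shows the baseline form, summed over unordered pairs.

```python
def quadratic_potential(waves):
    """Baseline quadratic interaction potential
        Q = sum over pairs w != w' of |sigma(w) - sigma(w')| * |w| * |w'| / (|w| + |w'|),
    computed over unordered pairs (the ordered-pair convention just doubles the value).
    `waves` is a list of (strength, speed) pairs with positive strengths.
    This is not the weighted functional constructed in the text."""
    q = 0.0
    for i in range(len(waves)):
        for j in range(i + 1, len(waves)):
            s1, sigma1 = waves[i]
            s2, sigma2 = waves[j]
            q += abs(sigma1 - sigma2) * s1 * s2 / (s1 + s2)
    return q
```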
the proof for the glimm scheme follows the same philosophy , but , as we said , due to the different structure of the approximating scheme it present different technical aspects .the paper is organized as follows .section [ s_convex_env ] provides some useful results on convex envelopes .part of these results are already present in the literature , others can be deduced with little effort .we decided to collect them for reader s convenience .two particular estimates play a key role in the main body of the paper : the regularity of the speed function ( theorem [ convex_fundamental_thm ] and proposition [ convex_fundamental_thm_affine ] ) and the behavior of the speed assigned to a wave by the solution to riemann problem ] ) , and moreover the map has to pass trough the grid points .this forces us to define the speed of a wave at time not by just taking the time - derivative of , but by considering the real speed given to by the riemann problem at each grid point as in .this analysis is done in section [ pswaves ] .another difference is that the glimm scheme , due to the choice of the sampling points , `` interacts '' with the solution ( i.e. it may alter the curve ) even when no real interaction - cancellation occurs .this is why we need to study an additional case , namely when no interaction / cancellation of waves occurs at a given grid point , and this is done in proposition [ p_no_interaction ] : the statement is that trivially nothing is going on in these points , but we felt the need of a proof .proposition [ canc_3 ] , namely the case of cancellation points , is analog to the wavefront tracking case . also the structure andproperties of the functional we are going to construct in the glimm scheme case are similar to the wavefront approximation analysis , the main difference being that several interactions and cancellations occur at each time step .we thus require that the set of pairs of waves present in the solution at time step ( i.e. not moved to ) , namely , can be split into two parts : one part concerns the interactions , and decreases of the right amount , the other one concerns cancellations and increases of a quantity controlled by the total variation of the solution times the amount of cancellation .the remain part of the section is the proof of these two estimates , from which one deduces theorem [ main_thm ] along the same line outlined in section [ sss_sketch_proof ] .first , we define the notion of waves which have already interacted , definition [ d_glimm_interacted ] , and waves which are separated ( or _ divided _ ) , definition [ waves_divided ] .notice that even if they occupy the same grid position for some time step , they are considered divided in the real solution if the riemann problem at that grid point assigns different speeds to them .the statement that the artificial riemann problems we consider separate waves as in the real solution is completely similar to the wavefront case , proposition [ unite_realta ] , but the proof is quite longer . in the last section , section [ ss_glimm_funct_q ]we define the functional , and due to the continuity of the set of waves we show a regularity property of the weight so that no measurability issues arise .the proof of the two estimates ( interactions and cancellations ) is somehow longer than in the wavefront tracking case but it is based on the same ideas , and this concludes the section . 
in appendix [ app_conter ] we present a counterexample to formula ( 4.84 ) in lemma 2 , pag .614 , of , which justifies the need of the analysis in the one - dimensional case . for usefulness of the reader ,we collect here some notations used in the subsequent sections .* , ; * ( resp . ) is the left ( resp .right ) derivative of at point ; * if is a sequence of real numbers , we write ( resp. ) if is increasing ( resp . decreasing ) and ; * given , we will denote equivalently by or by the -dimensional lebesgue measure of . *if \to { \mathbb{r}} ] are two functions which coincide in , we define the function \to { \mathbb{r}} ] denotes its positive part .in this section we define the convex envelope of a continuous function in an interval ] is varied : these estimates will play a major role for the study of the riemann problems .[ convex_fcn ] let be continuous and \subseteq { \mathbb{r}} ] _ as }f ( u ) : = \sup\bigg\{g(u ) \ \big| \g : [ a , b ] \to { \mathbb{r}}\text { is convex and } g \leq f\bigg\}.\ ] ] a similar definition holds for _ the concave envelope of in the interval ] .all the results we present here for the convex envelope of a continuous function hold , with the necessary changes , for its concave envelope . in the same setting of definition [ convex_fcn ] , }f ] for each ] and consider }f ] is an open interval ] .a _ maximal shock interval _ is a shock interval which is maximal with respect to set inclusion .notice that , if ] , then , by continuity of and }f ] .let be a shock interval for }f ] is affine on .the following theorem provides a description of the regularity of the convex envelope of a given function .[ convex_fundamental_thm ] let be a -function .then : 1 .[ convex_fundamental_thm_1 ] the convex envelope } f ] is differentiable on ] , then }f(u);\ ] ] 3 .[ convex_fundamental_thm_3 ] } f ] ' we mean that it is differentiable on in the classical sense and that in ( resp . ) the right ( resp .the left ) derivative exists .while the proof is elementary , we give it for completeness .( [ convex_fundamental_thm_1 ] ) and ( [ convex_fundamental_thm_2 ] ) .let } f ] for each . in a similar way one can prove that and thus ( [ convex_fundamental_thm_3 ] ) .let ] such that and , . indeed , if , then you can choose ; if , you choose , where is the maximal shock interval which belongs to ( recall that we already know that is differentiable ) . in a similar way you choose and it holds .hence }\ ] ] and so is lipschitz continuous with lipschitz constant less or equal than .a similar result holds for the piecewise affine interpolation of a smooth function .[ convex_fundamental_thm_affine ] let be fixed .assume .let be a smooth function and let be its piecewise affine interpolation with grid size .then the derivative } f_{\varepsilon} ] , which enjoys the following lipschtitz - like property : for any in such that f_{\varepsilon} ] , then }f = \operatorname{conv}_{[a,\bar u]}f \cup \operatorname{conv}_{[\bar u , b]}f.\ ] ] we have to prove that }f|_{[a , \bar u ] } = \operatorname{conv}_{[a , \bar u]}f\ ] ] and }f|_{[\bar u , b ] } = \operatorname{conv}_{[\bar u , b]}f.\ ] ] let }f|_{[a , \bar u]} ] .then there exists a function defined on ] and such that for some .then a direct verification yields that }f|_{[\bar u , b]} ] .hence , by definition of convex envelope , }f|_{[\bar u , b]})(\tilde u ) \leq \operatorname{conv}_{[a , b]}f(\tilde u ) = h(\tilde u )< g(\tilde u),\ ] ] a contradiction . 
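Since all the Riemann-problem speeds used below are slopes of such envelopes of the piecewise affine flux, a minimal computational sketch may help fix ideas: it returns the grid points lying on the lower convex envelope of the data and the (nondecreasing) slopes of its affine pieces. The monotone-chain formulation and the variable names are choices of the sketch, not the paper's notation.

```python
def convex_envelope(u, f):
    """Lower convex envelope of the piecewise affine interpolation of the data
    (u[i], f[i]) with u[0] < u[1] < ... < u[n].  Returns the indices of the
    points lying on the envelope together with the slopes of its affine pieces;
    by construction the slopes are nondecreasing in u (monotone-chain lower hull)."""
    hull = []                                   # indices of envelope points
    for i in range(len(u)):
        # pop the last kept point while it lies on or above the chord hull[-2] -> i
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            if (f[k] - f[j]) * (u[i] - u[j]) >= (f[i] - f[j]) * (u[k] - u[j]):
                hull.pop()
            else:
                break
        hull.append(i)
    slopes = [(f[hull[m + 1]] - f[hull[m]]) / (u[hull[m + 1]] - u[hull[m]])
              for m in range(len(hull) - 1)]
    return hull, slopes
```

For a shock interval the envelope is a single chord and all waves with values in that interval receive the same slope, which is the mechanism behind the maximal shock intervals discussed above.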
in a similar way one can prove that }f|_{[\bar u , b ] } = \operatorname{conv}_{[\bar u , b]}f ] .then }f|_{[a , u_1 ] } = \operatorname{conv}_{[a , b]}f|_{[a , u_1]} ] .[ vel_aumenta ] let be continuous ; let . then 1 .}f\big)(u+ ) \geq \big(\frac{d}{du}\operatorname{conv}_{[a , b]}f\big)(u+) ] for each ] for each ; 4 .}f\big)(u- ) \leq \big(\frac{d}{du}\operatorname{conv}_{[a , b]}f\big)(u-) ] .we prove only the first point , the other ones being similar .let .if }f(\bar u ) = f(\bar u) ] and then we have done . otherwise , if }f(\bar u ) < f(\bar u) ] , such that .hence , by corollary [ rp_ridotto ] , }f|_{[a , u_1 ] } = \operatorname{conv}_{[a,\bar u]}f|_{[a , u_1]} ] , then } f < \operatorname{conv}_{[a , b]}f ] is affine on ] , , }f\big)(u_2- ) - \big(&\frac{d}{du}\operatorname{conv}_{[a,\bar{u}]}f\big)(u_1- ) \\\geq&~ \big(\frac{d}{du}\operatorname{conv}_{[a , b]}f\big)(u_2- ) - \big(\frac{d}{du}\operatorname{conv}_{[a , b]}f\big)(u_1-);\end{aligned}\ ] ] 3 . for each , , }f\big)(u_2 + ) -\big(&\frac{d}{du}\operatorname{conv}_{[\bar{u},b]}f\big)(u_1 + ) \\\geq&~ \big(\frac{d}{du}\operatorname{conv}_{[a , b]}f\big)(u_2 + ) - \big(\frac{d}{du}\operatorname{conv}_{[a , b]}f\big)(u_1+);\end{aligned}\ ] ] 4 . for each ] , .if belong to the same shock interval of }f ] .[ incastro ] let be smooth ; let be real numbers .assume that 1 .}f(u_2 ) = f(u_2) ] . then }f(u_2 ) = f(u_2) ] , \to { \mathbb{r}} ] , then } ] , by corollary [ rp_ridotto ] , }f(u_2 ) = f(u_2) ] is convex ; moreover , by definition , and , for this reason , }f ] such that } f(u_k ) = f(u_k).\ ] ] if , then } f(\bar u ) = f(\bar u).\ ] ] by simplicity , assume for each , the general case being entirely similar .by contradiction , assume } f(\bar u ) < f(\bar u) ] is a convex function whose graph contains points . [ diff_vel_proporzionale_canc ] let be a smooth function , let .then } f\bigg)(\bar u- ) - \bigg(\frac{d}{du } \operatorname{conv}_{[a , b]}f\bigg)(\bar u ) \leq \lvert f '' \lvert_{l^\infty(a , b ) } ( b - \bar u).\ ] ] moreover , if is the piecewise affine interpolation of with grid size , it holds } f_{\varepsilon}\bigg)(\bar u - ) - \bigg(\frac{d}{du } \operatorname{conv}_{[a , b]}f_{\varepsilon}\bigg)(\bar u- ) \leq \lvert f '' \lvert_{l^\infty(a , b ) } ( b - \bar u).\ ] ] let us first prove the inequality for smooth . set }f ] .let ] .it holds and moreover define clearly and thus hence now observe that there must be ] , then by theorem [ convex_fundamental_thm ] , point [ convex_fundamental_thm_2 ] , for each ; passing to the limit one obtains .otherwise , if such a sequence does not exist , then one can easily find such that is a maximal shock interval .this means that for some .moreover , since there must be such that .hence , from , we obtain concerning the piecewise affine case , substituting with one obtains the same inequality .now observe that as in the smooth case one can find such that .moreover it is also easy to see that there must be ] is strictly increasing and bijective ; 2 . if , then is strictly decreasing and bijective ; 3 . if , then .given an enumeration of waves as in definition [ w_eow ] , we define the _ sign of a wave _ with finite position ( i.e. 
such that ) as follows : .\ ] ] we immediately present an example of enumeration of wave which will be fundamental in the sequel .[ w_initial_eow ] fix and let be the approximate initial datum of the cauchy problem , with compact support and taking values in .the total variation of is an integer multiple of .let , \quad x \mapsto u(x ) : = { \text{\rm tot.var.}}(\bar u_{\varepsilon } ; ( -\infty , x]),\ ] ] be the total variation function .then define : and , \quad s \mapsto \mathtt x_0(s ) : = \inf \big\{x \in ( -\infty , + \infty ] \| \ { \varepsilon}s \leq u(x ) \big\}. \ ] ] moreover , recalling , we define .\ ] ] it is fairly easy to verify that , are well defined and that they provide an enumeration of waves , in the sense of definition [ w_eow ] .let us now give another definition .[ w_speed_function ] consider a function as in definition [ w_eow ] and let be an enumeration of waves for . the _ speed function _ \cup \{+\infty\} ] ( or ) .the speed for ] ( resp . ) , the 3-tuple is an enumeration of waves for the piecewise constant function .we prove separately that the properties ( 1 - 3 ) of definition [ w_eow ] are satisfied ._ proof of property ( 1 ) ._ by definition of wavefront solution , takes values only in the set of discontinuity points of ._ proof of property ( 2 ) ._ let be two waves and assume that . by contradiction , suppose that .since by the inductive assumption at time , the 3-tuple is an enumeration of waves for the function , it holds .two cases arise : * if , then it must hold , but this is impossible , due to remark [ w_speed_increasing_wrt_waves ] . *if , then lines and must intersect at some time , but this is impossible , by definition of wavefront solution and times ._ proof of property ( 3 ) . _ for or and for discontinuity points , the third property of an enumeration of waves is straightforward .so let us check the third property only for time and for the discontinuity point .fix any time ; according to assumption on binary intersections , you can find two points such that for any such that , either or and moreover , , .we now just consider two main cases : the other ones can be treated similarly .recall that at time , the -tuple is an enumeration of waves for the piecewise constant function . if , then \cap { \mathbb{z}}{\varepsilon}\ ] ] and \cap { \mathbb{z}}{\varepsilon}\ ] ] are strictly increasing and bijective ; observing that in this case , one gets the thesis .if , then \cap { \mathbb{z}}{\varepsilon}\ ] ] is strictly increasing and bijective ; observing that in this case \big\},\ ] ] one gets the thesis . for fixed wave ,functions are right - continuous .moreover is piecewise constant .finally we introduce the following notation . given a time and a position ] ( resp . ) is an interval in .assume is positive , the other case being similar .first we prove that restricted to is increasing .let , with .let be the discontinuity points of between and .by definition of ` interval of waves ' and by the fact that each wave in is positive , for any , contains only positive waves .thus , by definition [ w_eow ] of enumeration of waves , and by the fact that for each , , the restriction \cap { \mathbb{z}}{\varepsilon}\ ] ] is strictly increasing and bijective , and so ; hence is strictly increasing . in order to prove that ] ( resp . ) is an interval in .hence , we will also write instead of and call it the speed given to the waves by the riemann problem . 
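Going back to the enumeration of waves for the approximate initial datum constructed at the beginning of this section, a direct sketch of that construction is the following: each jump whose size is a multiple of \(\varepsilon\) is split into unit waves of strength \(\varepsilon\), located at the jump and carrying the sign of the jump. The data layout and the convention used here for \(\hat u(s)\) (the endpoint of the \(\varepsilon\)-interval of values reached by the wave) are illustrative assumptions of the sketch.

```python
def initial_enumeration_of_waves(jump_points, values, eps):
    """Enumerate the waves of a piecewise constant initial datum taking values
    in eps*Z:  values[i] is the value to the left of jump_points[i] and
    values[i+1] the value to the right, so len(values) == len(jump_points) + 1.
    Each jump of size m*eps produces m waves of strength eps located at that
    jump; the sign is the sign of the jump and u_hat is taken here as the
    endpoint of the eps-interval spanned by the wave (an assumed convention)."""
    waves = []
    for i, x in enumerate(jump_points):
        ul, ur = values[i], values[i + 1]
        sign = 1 if ur > ul else -1
        m = int(round(abs(ur - ul) / eps))
        for k in range(1, m + 1):
            waves.append({"x0": x, "sign": sign, "u_hat": ul + sign * k * eps})
    return waves
```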
moreover , we will also say that the riemann problem divides if the riemann problem does .now we state the main result for the wavefront tracking approximation , namely theorem [ w_main_thm ] . for easiness of the readerwe repeat the statement below . as in the previous section ,let be an -wavefront solution of the cauchy problem ; consider the enumeration of waves and the related position function and speed function constructed in previous section .fix a wave and consider the function . by constructionit is finite valued until the time , after which its value becomes ; moreover it is piecewise constant , right continuous , with jumps possibly located at times .the results we are going to prove is the following holds : where is _ the strength of the wave . we recall the following definition . [ w_int_canc_points ] for each , we will say that is an _ interaction point _ if the wavefronts which collide in have the same sign .an interaction point will be called _ positive _ ( resp . _negative _ ) if all the waves located in it are positive ( resp .negative ) .moreover we will say that is a _ cancellation point _ if the wavefronts which collide in have opposite sign .the first step in order to prove theorem [ w_main_thm ] is to reduce the quantity we want to estimate , namely to a two separate estimates , according to being an interaction or a cancellation : the estimate on the cancellation points is fairly easy .first of all define for each cancellation point the _ amount of cancellation _ as follows : [ w_canc_3 ] let be a cancellation point .then let be respectively the left and the right state of the left wavefront involved in the collision at point and let be respectively the left and the right state of the right wavefront involved in the collision at point , so that and . without loss of generality , assume . then we have } f_{\varepsilon}\bigg ) \big ( ( \hat u(s ) -{\varepsilon } , \hat u(s))\big ) - \bigg(\frac{d}{du } \operatorname{conv}_{[u_l , u_m ] } f_{\varepsilon}\bigg)\big ( ( \hat u(s)-{\varepsilon } , \hat u(s))\big ) \bigg]|s| \\ & \overset{\text{(prop .\ref{differenza_vel})}}{\leq } \sum_{s \in { \mathcal{w}}(t_j ) } \bigg [ \bigg(\frac{d}{du } \operatorname{conv}_{[u_l , u_r ] } f_{\varepsilon}\bigg)(u_r- ) - \bigg(\frac{d}{du } \operatorname{conv}_{[u_l , u_m ] } f_{\varepsilon}\bigg)(u_r- ) \bigg]|s| \\ & \quad \ \ = \bigg [ \bigg(\frac{d}{du } \operatorname{conv}_{[u_l , u_r ] } f_{\varepsilon}\bigg)(u_r- ) - \bigg(\frac{d}{du } \operatorname{conv}_{[u_l , u_m ] } f_{\varepsilon}\bigg)(u_r- ) \bigg ] \sum_{s \in { \mathcal{w}}(t_j ) } |s| \\ & \quad \ \\bigg [ \bigg(\frac{d}{du } \operatorname{conv}_{[u_l , u_r ] } f_{\varepsilon}\bigg)(u_r- ) - \bigg(\frac{d}{du } \operatorname{conv}_{[u_l , u_m ] } f_{\varepsilon}\bigg)(u_r- ) \bigg ] { \text{\rm tot.var.}}(u_{\varepsilon}(0,\cdot ) ) .\end{split}\ ] ] now observe that , by proposition [ diff_vel_proporzionale_canc ] , } f_{\varepsilon}\bigg)(u_r- ) - \bigg(\frac{d}{du } \operatorname{conv}_{[u_l , u_m ] } f_{\varepsilon}\bigg)(u_r- ) \leq&~ \lvert f '' \lvert_{l^\infty } ( u_m - u_r ) \\\leq&~ \lvert f '' \lvert_{l^\infty } \mathcal{c}(t_j , x_j ) .\end{split}\ ] ] hence , from and , we obtain together with , this concludes the proof . 
[ w_canc_4 ] it holds from , and proposition [ w_canc_3 ] we obtain \\ & \leq \lvert f '' \lvert_{l^\infty } { \text{\rm tot.var.}}(u(0,\cdot ) ) \big [ { \text{\rm tot.var.}}(u_{\varepsilon}(0,\cdot ) ) - { \text{\rm tot.var.}}(u_{\varepsilon}(t_j , \cdot ) ) \big ] \\ & \leq \lvert f '' \lvert_{l^\infty } { \text{\rm tot.var.}}(u(0,\cdot))^2 , \end{split}\ ] ] thus concluding the proof of the corollary . from now on ,our aim is to prove that as outlined in section [ sss_sketch_proof ] , the idea is the following : we define a positive valued functional , , such that is piecewise constant in time , right continuous , with jumps possibly located at times and such that such a functional will have two properties : 1 . for each that is an interaction point , is decreasing at time and its decrease bounds the quantity we want to estimate at time as follows : ;\ ] ] this is proved in theorem [ w_decreasing ] ; 2 . for each that is a cancellation point , can increase at most by ;\ ] ] this is proved in theorem [ w_increasing ] . using the two estimates above, we obtain the following proposition , which completes the proof of theorem [ w_main_thm ] .[ w_thm_interaction ] it holds by direct computation , \\ \leq & ~ 2 \bigg [ \sum_{\substack{(t_j , x_j ) \\ \text{interaction } } } \big [ { \mathfrak{q}}(t_{j-1 } ) - { \mathfrak{q}}(t_j ) \big ] + \sum_{\substack{(t_j , x_j ) \\ \text{cancellation } } } \big [ { \mathfrak{q}}(t_{j-1 } ) - { \mathfrak{q}}(t_j ) \big ] \\ &\qquad \qquad - \sum_{\substack{(t_j , x_j ) \\ \text{cancellation } } } \big [ { \mathfrak{q}}(t_{j-1 } ) - { \mathfrak{q}}(t_j ) \big ] \bigg ] \\= & ~ 2 \bigg [ \sum_{j=1}^j \big [ { \mathfrak{q}}(t_{j-1 } ) - { \mathfrak{q}}(t_j ) \big ] + \sum_{\substack{(t_j , x_j ) \\ \text{cancellation } } } \big [ { \mathfrak{q}}(t_j ) - { \mathfrak{q}}(t_{j-1 } ) \big ] \bigg ] \\\text{(by \eqref{w_increase } ) } \leq & ~ 2 \bigg [ \sum_{j=1}^j \big [ { \mathfrak{q}}(t_{j-1 } ) - { \mathfrak{q}}(t_j ) \big ] + \log(2 ) \sum_{\substack{(t_j , x_j ) \\ \text{cancellation } } } \lvert f '' \lvert_{l^\infty } { \text{\rm tot.var.}}(u(0,\cdot ) ) \mathcal{c}(t_j , x_j ) \bigg ] \\ \text{(by \eqref{w_canc_0 } , \eqref{boundqzero } ) } \leq & ~ 2 \big [ { \mathfrak{q}}(0 ) + \log(2 ) \lvert f '' \lvert_{l^\infty } { \text{\rm tot.var.}}(u(0,\cdot))^2 \big ] \\\leq&~ 2 ( 1 + \log(2 ) ) \lvert f '' \lvert_{l^\infty } { \text{\rm tot.var.}}(u(0,\cdot))^2 . \end{aligned}\end{gathered}\ ] ] in the remaining part of this section we prove estimates and . in this sectionwe define the notion of pairs of waves which _ have never interacted before a fixed time _ and pairs of waves which _ have already interacted _ and , for any pair of waves which have already interacted , we associate an artificial speed difference , which is some sense summarize their past common history . [ interagite_non_interagite ] let be a fixed time and let .we say that _ interact at time _ if .we also say that _ they have already interacted at time _ if there is such that interact at time .moreover we say that _ they have not yet interacted at time _ if for any , they do not interact at time .[ w_interagite_stesso_segno ] assume that the waves interact at time . 
then they have the same sign .easy consequence of definition of enumeration of waves and the fact that is independent of .[ w_quelle_in_mezzo_hanno_int ] let be a fixed time , , .assume that have already interacted at time .if and , then have already interacted at time .let be the time such that interact at time .clearly .since for fixed , is increasing on , it holds .let . by lemmas [ w_interagite_stesso_segno ] and [ w_quelle_in_mezzo_hanno_int ] , the set is an homogeneous interval of waves .moreover set let be two waves .assume that and that they have already interacted at a fixed time .consider now the set this is clearly an interval of waves .observe that and .a fairly easy argument based on lemma [ w_quelle_in_mezzo_hanno_int ] implies that is made of the waves which have interacted with both and .[ w_waves_divided ] let be two waves which have already interacted at time .we say that _ are divided in the real solution at time _ if i.e. if at time they have either different position , or the same position but different speed .if they are not divided in the real solution , we say that _ they are joined in the real solution_. [ rem_divise_solo_in_cancellazioni ] it for each , then two waves are divided in the real solution if and only if they have different position .the requirement to have different speed is needed at collision times , more precisely at cancellations .[ w_unite_realta ] let be a fixed time .let . if are not divided in the real solution at time , then the riemann problem does not divide them . for the definition of the riemann problem see definition [ w_artificial_speed ] and remark [ w_artificial_speed_remark ] .let .clearly .observe that is an interval of waves and that by definition the real speed of the waves is the speed given by the riemann problem .the conclusion is then a consequence of corollary [ stesso_shock ] .the remaining part of this section is devoted to prove the following proposition , which is in some sense the converse of the previous one and is a key tool in order estimate the increase and the decrease of the functional . [ w_divise_tocca ] let be a fixed time .let be two waves which have already interacted at time .assume that are divided in the real solution .let .if are divided in the real solution at time , then the riemann problem divides them .fix two waves .it is sufficient to prove the proposition only for times .we proceed by induction on . for proof is obvious .let us assume the lemma is true for and let us prove it for .suppose to be divided in the real solution at time .we can also assume w.l.o.g . that are both positive .when is an interaction the analysis is quite simple , while the cancellation case requires more effort .* interaction .* let us distinguish two cases .if , then waves must be involved in the interaction , i.e. . since is an interaction point , , and so are not divided at time , hence the statement of the proposition can not apply to . if , take , such that are divided in the real solution .since an interaction does not divide waves which were joined before the interaction , were already divided at time and so by inductive assumption we have done . * cancellation . 
*assume that are divided in the real solution after the cancellation at time .moreover , w.l.o.g ., assume that in two wavefronts collide , the one coming from the left is positive , the one coming from the right is negative ; assume also that waves in are positive , and that ( the proof in the case is similar , but easier ) .set , .if , then were already divided before the collision ( i.e. at time ) , and and so by inductive assumption we conclude .hence assume ] , i.e. no wave in ] divides them .hence , observing that , by proposition [ incastro ] , the riemann problem ] .now we are able to conclude the proof of our proposition .take and assume that are divided in the real solution at time . 1 .if ] , then by claim [ w_riemann_dopo ] are divided by the riemann problem .if ] divides and then by claim [ w_riemann_dopo ] and proposition [ tocca ] , the riemann problem divides them .3 . finally assume that ] divides them , and also the riemann problem \subseteq [ r_1,r_2] ] .[ w_rem_unite_realta ] if are joined in the real solution , then by proposition [ w_unite_realta ] .finally set ( recall that is the strength of the waves respectively . )it is immediate to see that is positive , piecewise constant , right continuous , with jumps possibly located at times , and . in the next two sections we prove that it also satisfies inequality andthis completes the proof of proposition [ w_thm_interaction ] .this section is devoted to prove inequality .[ w_decreasing ] for any interaction point , it holds .\ ] ] by direct inspection of the proof one can verify that the constant is sharp .assume w.l.o.g .that all the waves in are positive .we partition through the equivalence relation by our assumption is decomposed into two disjoint intervals of waves such that for each and , it holds .first of all observe that by formula and by proposition [ w_unite_realta ] , if , , but , then . indeed , if at least one between does not belong to , then ; on the other side , if ( or ) , then are joined both before and after the interaction and for this reason , by remark [ w_rem_unite_realta ] , . nowobserve that , if , then , by remark [ w_rem_unite_realta ] , .hence it is sufficient to prove that for any positive interval of waves , define _ the strength of the interval _ as and _ the mean speed of waves in _ as now observe that hence set , and define ( see fig .[ fig : figura14 ] ) .,width=453,height=283 ] thus dividing by we get where the last inequality is a consequence of lagrange s theorem .let us now concentrate our attention on the second term of the last summation .observe that waves are divided in the real solution at time ; hence , by proposition [ w_divise_tocca ] , they are divided by the riemann problem .hence it is not difficult to see ( it is the cubic estimate when the speeds are monotone ) that we can write , for any and , if have already interacted at time , then . together with proposition [ vel_aumenta ] and with the fact that are divided by the riemann problem ,this yields and thus instead , if have not yet interacted at time , using lagrange s theorem , we get thus , by , , , let us now observe that if and , then by definition of , have not yet interacted at time and so . the same holds if and . 
hence , recalling and , we get \\ \leq&~ 2 \bigg [ \sum_{(s , s ' ) \in \mathcal{l}_1 \times \mathcal{r } } \mathfrak{q}(t_{j-1},s , s')|s||s'| + \sum_{(s , s ' ) \in \mathcal{l}_2 \times\mathcal{r}_1}\mathfrak{q}(t_{j-1},s , s')|s||s'| \\ & \quad \ + \sum_{(s , s ' ) \in \mathcal{l}_2 \times \mathcal{r}_2}\mathfrak{q}(t_{j-1},s , s')|s||s'| \bigg ] \\ \leq&~ 2 \sum_{(s , s ' ) \in \mathcal{l } \times \mathcal{r } } \mathfrak{q}(t_{j-1},s , s')|s||s'| , \end{split}\ ] ] which is what we wanted to obtain. this section is devoted to prove inequality , more precisely we will prove the following theorem . [ w_increasing ] if is a cancellation point , then ,\ ] ] where is the amount of cancellation at point , defined in . to simplify the notations , w.l.o.gassume the following wave structure ( see also the proof of proposition [ w_divise_tocca ] and figure [ fig : figura03 ] ) : * in two wavefronts collide , the one coming from the left is positive and contains all and only the waves in \subseteq { \mathcal{w}}(t_{j-1}) ] survive ; * .it holds |s||s'| \\\leq&~ \sum_{\substack{s , s ' \in { \mathcal{w}}(t_j ) \\ s< s ' } } [ \mathfrak{q}(t_j , s , s')-\mathfrak{q}(t_{j-1},s , s')]^+ \ |s||s'| .\end{split}\ ] ] the proof will follow by the next three lemmas .[ l_w_canc_lem_1 ] if are waves such that ^+ > 0 ] and ] .by contradiction , assume ] , then ; hence , by remark [ w_rem_unite_realta ] , are divided in the real solution at time . since ] and because is a cancellation point , it must hold .assume now , the case being similar .by proposition [ w_divise_tocca ] the riemann problems and divide waves and . moreover observe that .hence , by proposition [ tocca ] , in a similar way , ; thus , contradicting the assumption .let us now prove the second part of the statement , namely ] , piecewise affine , with discontinuity points of the derivative in the set and such that } & = \operatorname{conv}_{[b_1,b_3 ] } f , & h|_{[b_1,b_2 ] } & = \operatorname{conv}_{[b_1,b_2 ] } f , & g & = h \ \text { on } ( -\infty , b_1].\end{aligned}\ ] ] clearly , with the above properties exist .[ w_lemma_uno ] for any ] , , it holds ^+ \\\leq&~ \frac { \bigg| \big [ h ' \big ( ( \hat u(s')-{\varepsilon } , \hat u(s ' ) ) \big ) - h'\big((\hat u(s)-{\varepsilon } , \hat u(s))\big ) \big ] - \big [ g'\big ( ( \hat u(s')-{\varepsilon } , \hat u(s ' ) ) \big ) - g'\big ( ( \hat u(s)-{\varepsilon } , \hat u(s ) ) \big ) \big ] \bigg| } { |\hat u(s ' ) - ( \hat u(s)-{\varepsilon})|}. \end{split}\ ] ] in other words , we are saying that the maximal variation of is controlled by the maximal variation of speed ( the numerator of the r.h.s .of ) divided by the total variation between the two waves , .fix as in the hypothesis and set for shortness by multiplication for , becomes ^+ \leq&~ \bigg| \bigg(h'\big ( \big(\hat u(s')-{\varepsilon } , \hat u(s')\big)\big ) - h'\big(\big(\hat u(s)-{\varepsilon } , \hat u(s)\big)\big)\bigg ) \\ & ~ - \bigg(g'\big(\big(\hat u(s')-{\varepsilon } , \hat u(s')\big)\big ) - g'\big(\big(\hat u(s)-{\varepsilon } , \hat u(s)\big)\big)\bigg ) \bigg|.\end{aligned}\ ] ] we consider two cases depending on the position of the wave .assume first ] ) .now distinguish two subcases : 1 . 
if , then = \mathcal{i}_j \cap [ r_1,r_2] ] .in this case since are divided at time in the real solution , we can argue as in the previous case and use proposition [ w_divise_tocca ] to obtain observe that since are joined before the collision in the real solution , moreover by proposition [ w_unite_realta ] , , which , together with , yields the thesis .[ w_lemma_due ] define .then \\ s ' \in [ r_1,r_2 ] , \ s< s ' } } \frac { \big|\varphi ' \big ( ( \hat u(s')-{\varepsilon } , \hat u(s ' ) ) \big ) - \varphi ' \big ( ( \hat u(s)-{\varepsilon } , \hat u(s ) ) \big ) \big| |s||s'| } { |\hat u(s ' ) - ( \hat u(s)-{\varepsilon})| } \\ \leq \log(2 ) { \text{\rm tot.var.}}(u_{\varepsilon}(0,\cdot ) ) \big [ h'(b_2- ) - g'(b_2-)\big].\end{gathered}\ ] ] first of all , let us observe that , since is increasing and with positive sign , we can forget about the absolute value and get \\ s ' \in [ r_1,r_2 ] , \s < s ' } } \frac { \bigg [ \varphi ' \big ( ( \hat u(s')-{\varepsilon } , \hat u(s ' ) ) \big ) - \varphi ' \big ( ( \hat u(s)-{\varepsilon } , \hat u(s ) ) \big ) \bigg ] |s||s'| } { \hat u(s ' ) - ( \hat u(s)-{\varepsilon } ) } \\ & \quad \leq \sum_{\substack{s \in [ l(t_{j-1},r_2 + 1 ) , r_2 ] \\ s ' \in [ l(t_{j-1},r_2 + 1 ) , r_2 ] \\s < s ' } } \frac { \bigg[\varphi ' \big ( ( \hat u(s')-{\varepsilon } , \hat u(s ' ) ) \big ) - \varphi ' \big ( ( \hat u(s)-{\varepsilon } , \hat u(s ) ) \big ) \bigg ] |s||s'| } { \hat u(s ' ) - ( \hat u(s)-{\varepsilon } ) } \\ & \quad = \sum_{\substack{s \in [ l(t_{j-1},r_2 + 1 ) , r_2 ] \\ s ' \in [ l(t_{j-1},r_2 + 1 ) , r_2 ] \\s < s ' } } \frac { \bigg [ \varphi ' \big ( ( \hat u(s')-{\varepsilon } , \hat u(s ' ) ) \big ) - \varphi ' \big ( ( \hat u(s)-{\varepsilon } , \hat u(s ) ) \big ) \bigg ] } { \hat u(s ' ) - ( \hat u(s)-{\varepsilon } ) } \int_{\hat u(s)-{\varepsilon}}^{\hat u(s ) } du \int_{\hat u(s')-{\varepsilon}}^{\hat u(s ' ) } du ' . \\ \ ] ] since for each , and similarly for , we can continue the chain of inequality in the following way : \\ s ' \in [ l(t_{j-1},r_2 + 1 ) , r_2 ] \\ s < s ' } } & \frac { \bigg[\varphi ' \big ( ( \hat u(s')-{\varepsilon } , \hat u(s ' ) ) \big ) - \varphi ' \big ( ( \hat u(s)-{\varepsilon } , \hat u(s ) ) \big ) \bigg ] } { \hat u(s ' ) - ( \hat u(s)-{\varepsilon } ) } \int_{\hat u(s)-{\varepsilon}}^{\hat u(s ) } du \int_{\hat u(s')-{\varepsilon}}^{\hat u(s ' ) } du ' \\ = & ~ \sum_{\substack{s \in [ l(t_{j-1},r_2 + 1 ) , r_2 ] \\ s ' \in [ l(t_{j-1},r_2 + 1 ) , r_2 ] \\s < s ' } } \frac{1}{\hat u(s ' ) - ( \hat u(s)-{\varepsilon } ) } \int_{\hat u(s)-{\varepsilon}}^{\hat u(s ) } \int_{\hat u(s')-{\varepsilon}}^{\hat u(s ' ) } ( \varphi'(u ' ) - \varphi'(u ) ) du'du \\\leq&~ \sum_{\substack{s \in [ l(t_{j-1},r_2 + 1 ) , r_2 ] \\ s ' \in [ l(t_{j-1},r_2 + 1 ) , r_2 ] \\ s < s ' } } \int_{\hat u(s)-{\varepsilon}}^{\hat u(s ) } \int_{\hat u(s')-{\varepsilon}}^{\hat u(s ' ) } \frac{\varphi'(u ' ) - \varphi'(u)}{u'-u } du'du \\\leq&~ \int_{a}^{b_2 } \int_{u}^{b_2 } \frac{\varphi'(u ' ) - \varphi'(u)}{u'-u } du'du . 
\\ \ ] ] since is piecewise affine , we can write 'du \\ = & ~ \sum_{\substack{m \in { \mathbb{z}}\\ a < m{\varepsilon } < b_2 } } [ \varphi'(m{\varepsilon}+ ) - \varphi'(m{\varepsilon}- ) ] \int_{a}^{m{\varepsilon } } \int_{m{\varepsilon}}^{b_2 } \frac{1}{u'-u } du'du .\\ \end{aligned}\ ] ] an elementary computation shows that } \int_{a}^{\xi } \int_{\xi}^{b_2 } \frac{1}{u'-u } du'du = \log(2 ) ( b_2-a).\ ] ] hence \int_{a}^{m{\varepsilon } } \int_{m{\varepsilon}}^{b_2 } \frac{1}{u'-u } du'du \leq&~ \log(2 ) ( b_2 - a ) \sum_{\substack{m \in { \mathbb{z}}\\ a < m{\varepsilon } < b_2 } } [ \varphi'(m{\varepsilon}+ ) - \varphi'(m{\varepsilon}- ) ] \\\leq&~ \log(2 ) { \text{\rm tot.var.}}(u_{\varepsilon}(0,\cdot ) ) \big [ \varphi'(b_2- ) - \varphi'(a+ ) \big ] \\ \text{(since by proposition \ref{vel_aumenta } ) } \ \leq&~ \log(2 ) { \text{\rm tot.var.}}(u_{\varepsilon}(0,\cdot ) ) \varphi'(b_2- ) \\\leq&~ \log(2 ) { \text{\rm tot.var.}}(u_{\varepsilon}(0,\cdot ) ) \big [ h'(b_2- ) - g'(b_2- ) \big ] . \ ] ] this concludes the proof of lemma [ w_lemma_due ] . * conclusion of proof of theorem [ w_increasing]*. observe that , by proposition [ diff_vel_proporzionale_canc ] , putting together lemma [ w_lemma_uno ] , lemma [ w_lemma_due ] , inequality and inequality one easily concludes the proof of the theorem .in this section we prove the main interaction estimate for an approximate solution of the cauchy problem obtained by glimm scheme .the line of the proof is very similar to the proof for the wavefront tracking case , even if a relevant number of technicalities arises .for this reason the structure of this section is equal to the structure of section [ sect_wavefront ] : throughout the remaining part of this paper , we will emphasize the differences in definitions and proofs between the wavefront algorithm and the glimm scheme .first let us briefly recall how glimm scheme constructs an approximate solution .fix . to construct an approximate solution to the cauchy problem, we start with a grid in the plane having step size , with nodes at the points moreover we shall need a sequence of real numbers , uniformly distributed over the interval ] , the percentage of points , which fall inside ] ; moreover it holds : 1 .if , then ] .then define : \ ] ] and , \qquad \mathtt x_0(s ) : = \inf\big\{x \in ( -\infty , + \infty ] \ | \s \leq u(x ) \big\}. 
\ ] ] moreover , recalling , we define .\ ] ] one can easily prove that is continuous .we also adapt definition [ w_speed_function ] in the following way .[ speed_function ] consider a function as in definition [ eow ] and let be an enumeration of waves for .the _ speed function _ \cup \{+\infty\} ] such that the 3-tuple is an enumeration of waves for the piecewise constant function .for we set , where is the position function in the enumeration of waves of example [ initial_eow ] .clearly is an enumeration of waves for the function .assume now to have obtained with the property stated above and define in the following way .let be the speed function related to the piecewise constant function and the enumeration of waves .then set define where was defined in , with respect to the enumeration of waves for the initial datum , and set observe that for each , is uniquely defined since , for , sets , , are disjoint .a fairly easy extension of lemma [ w_lemma_eow ] shows that the recursive procedure generates an enumeration of waves at the times .the only differences in the proof are that now the map takes continuous values and that at each time you have to consider countably many disjoint riemann problems , instead of a single one . we have just defined the position function on the domain .let us now extend it over all the domain , by setting , for , see figure [ fig : figura32 ] .the speed \cup \{+\infty\} ] , then have already interacted at time ( lemma [ w_quelle_in_mezzo_hanno_int ] ) . as in section [ w_waves_collision ] , for any fixed time and for any ,define by lemmas [ w_interagite_stesso_segno ] and [ w_quelle_in_mezzo_hanno_int ] , this is an homogeneous interval of waves .if have already interacted at a fixed time , set as before this is clearly an interval of waves and so by proposition [ interval_waves ] the image of is an interval in .the following definition is the same as definition [ w_waves_divided ] .[ waves_divided ] let be two waves which have already interacted at time .we say that _ are divided in the real solution at time _ if i.e. if at time they have either different position , or the same position , but different speed .if they are not divided in the real solution , we say that _ they are joined in the real solution_. as noted in remark [ rem_divise_solo_in_cancellazioni ] , in the wavefront tracking algorithm two waves can have same position but different speed only in cancellation points ; on the contrary , in the glimm scheme two waves can have same position but different speed at every time step .the analog of proposition [ w_unite_realta ] is the following proposition .we omit the proof .[ unite_realta ] let be a fixed time .let . if are not divided in the real solution at time , then the riemann problem does not divide them .also proposition [ w_divise_tocca ] holds in this framework , but the proof is slightly different and more technical .[ divise_tocca ] let be a fixed time .let be two waves which have already interacted at time .assume that are divided in the real solution , and let .if are divided in the real solution at time , then the riemann problem divides them .the proof is by induction on the time step . for ,the statement is obvious .let us assume the proposition is true for and let us prove it for .let be two waves which have already interacted at time and assume to be divided in the real solution at time .we can also assume w.l.o.g . 
that are both positive .let us define it is easy to see that exist and .assume first , which means that the and the above coincide .see figure [ fig : figura34 ] ..,width=415,height=226 ] in this case this implies and so the thesis is easily proved , because the artificial riemann problem coincides with the real one .we can thus assume , i.e. .see figure [ fig : figura35 ] . under this assumption , we first write down some useful claims . .,width=491,height=264 ] [ bounds_on_i ] it holds .\ ] ] let . by and , , for some . hence , .[ segno ] each wave in has positive sign .use definition [ eow ] of enumeration of waves and lemma [ w_interagite_stesso_segno ] .[ divise_dopo ] let .assume are divided at time , but not divided at time .then a. either and waves in are negative ; b. or and waves in are negative .since are not divided at time , there is such that , for some .assume , the other case is completely similar .if or , this means that is either a no - collision point or an interaction point . by proposition [ differenza_vel ] , since are not divided at time , they can not be divided at time . hence and .thus by claim [ segno ] , ] ; on the other hand , by claim [ bounds_on_i ] , ] , then set . by claim [ divise_dopo ] , property [ a ] ) holds . by lemma[ bdd ] , also property [ b ] ) holds ; 2 . similarly ,if ] , set ; 3 .if ] , then set , and take as any wave such that as in the previous point , one can show that properties [ a ] ) and [ b ] ) holds .now , by [ a ] ) , [ b ] ) and lemma [ ultimo ] one gets }f = \operatorname{conv}_{[\tilde l(n ) , u_{n+1,m_1 } ] } f \cup \operatorname{conv}_{[u_{n+1,m_1},u_{n+1,m_2 } ] } f \cup \operatorname{conv}_{[u_{n+1,m_2 } , \tilde r(n ) ] } f.\ ] ] [ lemma_riemann_dopo_sep ]it holds } f = \operatorname{conv}_{i_1 } f \cup \operatorname{conv}_{i_2 } f \cup \operatorname{conv}_{i_3 } f,\ ] ] where , \qquad i_2 : = [ u_{n+1,m_1},u_{n+1,m_2 } ] , \qquad i_3 : = [ u_{n+1,m_2 } , r(n+1)].\end{aligned}\ ] ] we consider four cases .if at least one among belongs to ] and ] , then by our definition , and is any wave such that . observe that in this case the riemann problem ] is similar to previous point ._ conclusion of the proof of proposition [ divise_tocca ] ._ take , divided at time .we have to prove that the riemann problem ] , then by claim [ divise_dopo ] are divided at time and so by inductive assumption the riemann problem ] divides them and so by , also the riemann problem ] , the case being similar .we know are divided at time .this means that the riemann problem ] divides .hence , by , the riemann problem ] .finally set first of all we have to prove that is well defined , i.e. is integrable .actually in the following proposition we prove an additional regularity property on .[ w_partitioned ] it is possible to write as an ( at most ) countable union of mutually disjoint interval of waves , such that for each and for each , .since derivation of convex envelopes is a borel operation , it follows that is borel . for each ,define we claim that each is an interval of waves and for fixed , the cardinality of is at most countable .this is proved using the following lemmas .[ l_eh_1 ] is an interval of waves at time .let and let such that .we have to prove that .we have by contradiction , assume , i.e. .hence , there is assume . hence .thus , whatever the position of is , since , must have already interacted with , a contradiction . 
on the other hand , if , whatever the position of is , must have already interacted either with or with .thus , a contradiction .[ l_open ] let .there exist such that 1 . ] , .the proof is by induction on the time step . for , assume ; it is sufficient to choose : = { \mathcal{w}}(0,m{\varepsilon}) ] to be the interval at time with the properties 1 , 2 of the statement of the lemma .for time define : = ( s_1 , s_2 ] \cap { \mathcal{w}}^{(0)}((n+1){\varepsilon } , m{\varepsilon}).\ ] ] clearly property ( 1 ) holds .moreover , if property ( 2 ) does not hold , there must be a wave ] .assume ( otherwise trivial ) .take any sequence in such that , and . in a similar way ,take another sequence in such that , and . for sufficiently large , and so , by proposition [ divise_tocca ] , divided by the riemann problem ; this means that there is some point such that finally , using proposition [ convex_approximation ] , one completes the proof .we now complete the proof of the theorem .the steps are the same as in proof of theorem [ w_decreasing ] ._ by definition of , for any , are not divided in the real solution at time ; then , by proposition [ unite_realta ] , they are not divided by riemann problem ; hence , ._ step 2 and 3 ._ as in theorem [ w_decreasing ] , \cdot \frac{(u_m - u_l)(u_r - u_m)}{u_r - u_l } \\ = & ~ 2 \cdot \frac{|\sigma_m(\mathcal{l } ) - \sigma_m(\mathcal{r})|}{|\mathcal{l}| + |\mathcal{r}| } \ |\mathcal{l}| |\mathcal{r}| \\\leq & ~ 2 \bigg [ \frac{|\sigma_m(\mathcal{l}_1 ) - \sigma_m(\mathcal{r})|}{|\mathcal{l}| + |\mathcal{r}| } \ |\mathcal{l}_1| |\mathcal{r}| + \frac{|\sigma_m(\mathcal{l}_2 ) - \sigma_m(\mathcal{r}_1)|}{|\mathcal{l}| + |\mathcal{r}| }\ |\mathcal{l}_2| |\mathcal{r}_1| \\ & ~ + \frac{|\sigma_m(\mathcal{l}_2 ) - \sigma_m(\mathcal{r}_2)|}{|\mathcal{l}| + |\mathcal{r}| } \ |\mathcal{l}_2| |\mathcal{r}_2| \bigg]\\ \leq & ~ 2\bigg [ \lvert f '' \lvert_{l^\infty } |\mathcal{l}_1| |\mathcal{r}| + \frac{|\sigma_m(\mathcal{l}_2 ) - \sigma_m(\mathcal{r}_1)|}{|\mathcal{l}| + |\mathcal{r}| } \|\mathcal{l}_2| |\mathcal{r}_1| + \lvert f '' \lvert_{l^\infty } |\mathcal{l}_2| |\mathcal{r}_2| \bigg ] .\end{split}\ ] ] _ step 4 ._ let us now concentrate our attention on the second term of the last summation . as a consequence of lemma [ um_tocca ] , we get }f(u ' ) - \frac{d}{du}\operatorname{conv}_{[u_1,u_2]}f(u )\bigg]dudu ' \bigg | \\ = & ~ \bigg | \int_{\mathcal{l}_2 } \int_{\mathcal{r}_1 } \bigg[\frac{d}{du}\operatorname{conv}_{[u_1,u_2]}f(\hat u(s ' ) ) - \frac{d}{du}\operatorname{conv}_{[u_1,u_2]}f(\hat u(s ) ) \bigg]\ , dsds ' \bigg | . \end{split}\ ] ] we observe that , by definition of , for any and , if have already interacted at time , then ] be two function with the following properties : } & = \operatorname{conv}_{[a_1,a_3]}f , & h_{|[a_2,a_3 ] } & = \operatorname{conv}_{[a_2,a_3]}f , \\g_{|[b_3,b_1 ] } & = \operatorname{conv}_{[b_3,b_1]}f , & h_{|[b_3,b_2 ] } & = \operatorname{conv}_{[b_3,b_2]}f , \\\end{aligned}\ ] ] and set } = h_{[a_3,b_3]}.\ ] ] see figure [ fig : figura31 ] .it is easy to see that some with the above properties exist . and.,width=453,height=283 ] [ lemma_uno ] for any , ^+ \leq \frac{\big| \big ( h'(\hat u(s ' ) ) - h'(\hat u(s ) ) \big ) - \big ( g'(\hat u(s ' ) ) - g'(\hat u(s ) ) \big ) \big|}{|\hat u(s')- \hat u(s)|}.\ ] ] if have not yet interacted at time , then the l.h.s . of is equal to zero. thus we can assume have already interacted at time . 
in this case set for simplicity the proof reduces to prove the following inequality : ^+ \leq \big| \big ( h'(\hat u(s ' ) ) - h'(\hat u(s ) ) \big ) - \big ( g'(\hat u(s ' ) ) - g'(\hat u(s ) ) \big ) \big|.\ ] ] if are not divided in the real solution at time , then , by proposition [ unite_realta ] , ; in this case the l.h.s . of is equal to zero .let us then assume are divided at time .we consider four main cases ._ case 1 . _assume ] , because in ] on ] .in this case since are divided at time in the real solution , we can argue as in the previous case and use proposition [ divise_tocca ] to obtain } f = \operatorname{conv}_{[b_3,b_2 ] } f = h \hspace{1 cm } \text{on } [ b_3,b_2].\end{aligned}\ ] ] hence now distinguish two possibilities : 1 . : in this case belong to the same wavefront interval of } = \operatorname{conv}_{[b_3,b_1]}f ] and ] , and so } f(\hat u(s ' ) ) \\= & ~ \frac{d}{du}\operatorname{conv}_{\hat u(\mathcal{i}_{n } ) \cap [ b_3,b_1 ] } f(\hat u(s ' ) ) \overset{\eqref{b1}}{= } \frac{d}{du}\operatorname{conv}_{\hat u(\mathcal{i}_{n } ) } f(\hat u(s ' ) ) = \sigma_n(s ' ) .\end{aligned}\ ] ] 2 .if , then and so } f(\hat u(s ' ) ) \\ = & ~ \frac{d}{du}\operatorname{conv}_{[b_3,b_2 ] } f(\hat u(s ' ) ) = h(\hat u(s ' ) ) , \end{aligned}\ ] ] while } f(\hat u(s ' ) ) \\\geq&~ \frac{d}{du}\operatorname{conv}_{[b_3,b_1 ] } f(\hat u(s ' ) ) = g(\hat u(s ' ) ) , \end{aligned}\ ] ] where the inequality is due to proposition [ vel_aumenta ] . in both cases ( 1 ) and ( 2 ) , .together with , this yields the thesis .assume now ] .as in the previous point , and so the thesis follows .the proof of the remaining cases ] and ] is similar to case ( 3 ) and case ( 2 ) respectively .[ lemma_due ] we have .\end{aligned}\ ] ] the proof is the analog of lemma [ w_lemma_due ] . in fact \xi \bigg|du'du \\ & \leq \int_{a_2}^{b_2 } \int_{u}^{b_2 } \frac{1}{u'- u } \int_u^{u ' } |h''(\xi ) - g''(\xi)|d\xi du'du \\ & \leq \int_{a_2}^{b_2 } \big|h''(\xi ) - g''(\xi)\big| \bigg(\int_{a_2}^\xi \int_\xi^{b_2 } \frac{1}{u'- u } du'du \bigg ) d\xi . \end{aligned}\end{gathered}\ ] ] hence , by , .\end{aligned}\end{gathered}\ ] ] now remember that and coincide on ] such that on \cup [ b_3 , \bar b] ] and \times { \text{\rm tot.var.}}(u) ] of strength traveling with speed a rarefaction ] traveling with speed . since every wavefront starting from and connecting to a point has positive speed ,then after a time all the waves have collided into a single wavefront with a speed now consider a contact discontinuities with size ] has formed , and it is fairly easy to see that this point corresponds to a splitting .hence this wave configuration corresponds to a tree .we now estimate the amount of interaction calculated as the decrease of the cubic functional of : since it is bounded by the area , then we can write . \end{split}\ ] ] the amount of cancellation is .for the couple of wavefronts , formula ( 4.84 ) of bounds the variation of speed of between and the splitting time by \\ = & ~ { \mathcal{o}(1)}\big [ \alpha ( l+{\varepsilon})^3 + { \varepsilon}\big].\end{aligned}\ ] ] we have used . on the other hand , an explicit computation gives it is clear that since we can choose and , then we can not bound the difference in speed with the amount of interaction and cancellation . | we prove a quadratic interaction estimate for approximate solutions to scalar conservation laws obtained by the wavefront tracking approximation or the glimm scheme . 
this quadratic estimate has been used in the literature to prove the convergence rate of the glimm scheme . the proof is based on the introduction of a quadratic functional , decreasing at every interaction , and such that its total variation in time is bounded . differently from other interaction potentials present in the literature , the form of this functional is the natural extension of the original glimm functional , and coincides with it in the genuinely nonlinear case . |
the nervous system creates many different rhythms, each associated with a range of behaviors and cognitive states. the rhythms were first discovered from scalp recordings of humans, and the names by which they are known still come mainly from the electroencephalograph (eeg) literature, which pays attention to the frequency and behavioral context of those rhythms, but not to their mechanistic origins. the rhythmic patterns include the alpha (9-11 hz), beta (12-30 hz), gamma (30-80 hz), theta (4-8 hz), delta (2-4 hz) and slow wave (.5-2 hz) rhythms. the boundaries of these ranges are rough. more will be said about the circumstances in which some of these rhythms are displayed. it is now possible to get far more information about the mechanisms behind the dynamics of the nervous system from other techniques, including electrophysiology. the revolutions in experimental techniques, data acquisition and analysis, and fast computation have opened up a broad and deep avenue for mathematical analysis. the general question addressed by those interested in rhythms is: how does the brain make use of these rhythms in sensory processing, sensory-motor coordination and cognition? the mathematical strategy, to be discussed below, is to investigate the ``dynamical structure'' of the different rhythms to get clues to function. most of this talk is about dynamical structure, and the mathematical issues surrounding its investigation. i'll return at the end to issues of function.

the mathematical framework for the study of brain dynamics is the hodgkin-huxley (hh) equations. these are partial differential equations describing the propagation of action potentials in neurons (cells of the nervous system). the equations, which play the same role in neural dynamics that navier-stokes does in fluid dynamics, are an elaborate analogy to a distributed electrical circuit. the central equation, which can be written schematically as \[ c\,\frac{\partial v}{\partial t} = -\sum_j i_j^{\rm ion} + d\,\frac{\partial^2 v}{\partial x^2} + i^{\rm app} + \sum_k i_k^{\rm syn}, \] describes conservation of current across a piece of a cell membrane; \(v\) is the cross-membrane voltage and the left hand side is the capacitive current. the first sum on the right hand side represents the intrinsically generated ionic currents across the membrane. the term \(d\,\partial^2 v/\partial x^2\) represents the spatial diffusion, and \(i^{\rm app}\) the current fed into the cell. \(\sum_k i_k^{\rm syn}\) represents the currents introduced by coupling from other cells. thus, these equations can also be used to model networks of interacting neurons, the focus of this talk. each of the intrinsic currents is described by ohm's law: the current is electromotive force divided by resistance. in this context, one usually uses the concept of ``conductance'', which is the reciprocal of resistance. the electromotive force depends on the type of charged ion (e.g., na, k, ca, cl) and the voltage of the cell; a typical intrinsic current has the form \[ i^{\rm ion} = \bar g\, m^p h^q\, [v - e_{\rm ion}], \qquad \frac{dx}{dt} = \frac{x_\infty(v) - x}{\tau_x(v)}, \quad x = m, h, \] where \(\bar g\) is the maximal conductance, and \(m\) and \(h\) are gating variables satisfying the above equation for \(x\). the dynamics for \(m\) and \(h\) differ because \(m_\infty\) is an increasing function of \(v\), while \(h_\infty\) is a decreasing function; also \(\tau_m\) is much smaller than \(\tau_h\). the (chemical) synaptic currents have the same form as the intrinsic ones, with the difference that the dependence of the driving force on voltage uses that of the post-synaptic cell, while the conductance depends on the pre-synaptic voltage. that is, \(i^{\rm syn}\) has the form \(\bar g\, s\,[v - e_{\rm syn}]\), where \(s\) satisfies the equation for \(x\) above, with \(v\) replaced by \(v_{\rm pre}\), the voltage of the cell sending the signal, and \(e_{\rm syn}\) is the reversal potential of the synapse. the coupling is said to be excitatory if the current is inward (increases voltage toward the threshold for firing an action potential) or inhibitory if the current is outward (moves voltage away from threshold for firing). for a simple version of the hh equations, there are three ionic currents; one of these (na) creates an inward current leading to an action potential, one (k) an outward current helping to end the action potential, and a leak current (mainly cl) with no gating variable. the hh equations are not one single set of equations, but a general (and generalizable) form for a family of equations, corresponding to different sets of intrinsic currents (which can depend on position on the neuron), different neuron geometries, and different networks created by interactions of neurons, which may themselves be highly inhomogeneous. numerical computation has become highly important for observing the behavior of these equations, but does not suffice to understand the behavior, especially to get insight into what the specific ionic currents contribute; this is where the analysis, including simplification, comes in. for an introduction to hh equations, some analysis and some of its uses in models, see [1].
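as a concrete illustration of the formalism just described, the following python sketch integrates a single space-clamped cell with the three classical currents (na, k, leak) by the euler method. the rate functions and parameter values are the standard squid-axon ones and are used here purely as an example; they are not the specific models discussed later in the talk.

```python
import numpy as np

# classical Hodgkin-Huxley parameters (illustrative values only)
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3        # capacitance and maximal conductances
ENa, EK, EL = 50.0, -77.0, -54.4              # reversal potentials (mV)

def rates(V):
    """Voltage-dependent opening/closing rates of the gating variables m, h, n."""
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate(I_app=10.0, T=100.0, dt=0.01):
    V, m, h, n = -65.0, 0.05, 0.6, 0.32       # near-rest initial conditions
    spikes = []
    for k in range(int(T / dt)):
        am, bm, ah, bh, an, bn = rates(V)
        # intrinsic currents: maximal conductance * gating * driving force (V - E_ion)
        INa = gNa * m**3 * h * (V - ENa)
        IK  = gK * n**4 * (V - EK)
        IL  = gL * (V - EL)
        Vnew = V + dt * (I_app - INa - IK - IL) / C   # current balance across the membrane
        m += dt * (am * (1.0 - m) - bm * m)           # gating kinetics
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        if Vnew >= 0.0 > V:                           # upward crossing of 0 mV = a spike
            spikes.append(round(k * dt, 2))
        V = Vnew
    return spikes

print(simulate())   # with this drive the cell fires a regular train of action potentials
```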
the coupling is said to be excitatory if the current is inward ( increases voltage toward the threshold for firing an action potential ) or inhibitory if the current is outward ( moves voltage away from threshold for firing . ) for a simple version of the hh equations , there are three ionic currents ; one of these ( na ) creates an inward current leading to an action potential , one ( k ) an outward current helping to end the action potential , and a leak current ( mainly cl ) with no gating variable .the hh equations are not one single set of equations , but a general ( and generalizable ) form for a family of equations , corresponding to different sets of intrinsic currents ( which can depend on position on the neuron ) , different neuron geometries , and different networks created by interactions of neurons , which may themselves be highly inhomogeneous .numerical computation has become highly important for observing the behavior of these equations , but does not suffice to understand the behavior , especially to get insight into what the specific ionic currents contribute ; this is where the analysis , including simplification , comes in .for an introduction to hh equations , some analysis and some of its uses in models , see [ 1 ] .-5 mm it is not possible to analyze the full class of equations in all generality .our strategy is to look for mathematical structures underlying some classes of behavior observed experimentally ; the emphasis is on the role of dynamical systems , as opposed to statistics , though probabilistic ideas enter the analysis .our central scientific question here is how rhythms emerge from the `` wetware '' , as modeled by the hh equations .as we will see , different rhythms can be based on different sets of intrinsic currents , different classes of neurons , and different ways of hooking up those cells .there are some behaviors we can see by looking at small networks , and others that do not appear until the networks are large and somewhat heterogeneous . even in the small networks ,there are a multiplicity of different building blocks for the rhythms , with excitation and inhibition playing different roles .noise appears , and plays different roles from heterogeneity .investigators often use simplifications of the hh equations .for example , this talk deals only with `` space clamped '' cells in which the spatial distribution of each cell is ignored , and the equations become odes .( there are circumstances under which this can be a bad approximation , as in [ 2 ] ) . under some circumstances ,the 4-dimensional simplest space - clamped hh equations ( one current equation , three gating variables ) can be reduced to a one - dimensional equation ; thus , networks of neurons can be described by a fraction of the equations that one needs for the full hh network equations .another kind of reduction replaces the full hh odes by maps that follow the times of the spikes . in both cases ,there are at least heuristic explanations for why these reductions are often very successful , and hints about how and why the simplifications can be expected to break down .-5 mm -5 mm some kinds of cells coupled by inhibition like to form rhythms and synchronize [ 3 - 5 ] .this is unintuitive , because inhibition to cells can temporarily keep the latter from firing ( see below for important exceptions ) , but mutual inhibition can encourage cells to fire simultaneously .there are various ways to see this , with methods that are valid in different contexts . 
for weak coupling, it can be shown rigorously that the full equations reduce to interactions between phases of the oscillators [6]; the particular coupling associated with inhibition can then be shown to be synchronizing (though over many cycles) [7]. if the equations can be reduced to one-dimensional ``integrate and fire'' models, one can use ``spike-response methods'' to see the synchronizing effect of inhibitory synapses on the timing of spikes. both of these are described in [6] along with more references. another method, which i believe is most intuitive, looks at the ongoing effect of forced inhibition on the voltage of the cells, and how some of the processes are ``slaved'' to others. this is seen most clearly in the context of another one-dimensional reduction that has become known as the ``theta'' model, because of the symbol \(\theta\) used for the phase of the oscillations [8]. the reduced equations have been shown to be a canonical reduction of equations that are near a saddle-node bifurcation on an invariant circle (limit cycle). many versions of hh-like models (and some kinds of real neurons) have this property for parameter values near onset of periodic spiking, and they are known as ``type 1'' neurons. the ``theta model'' has the form \[ \frac{d\theta}{dt} = (1-\cos\theta) + (1+\cos\theta)\,b. \] here the equation for the phase \(\theta\) has periodic solutions if the parameter \(b\) is positive, and two fixed points (stable, saddle) if \(b\) is negative. to understand the effects of forced inhibition, we replace \(b\) by \(b - g(t)\), a time-dependent inhibition given by \(g(t) = \bar g\,e^{-\beta t}\) for \(t>0\) and zero otherwise. with the change of variables \(y = g(t)\), this is a 2-d autonomous system. figures in [9] and analysis show that the system has two special orbits, known in the non-standard analysis literature as ``rivers'' [10], and that almost all of the trajectories feed quickly into one of these, and are repelled from the other. the essential effect is that initial conditions become irrelevant to the outcome of the trajectories. a similar effect works for mutually coupled systems of inhibitory neurons. the frequency of the rhythm formed in this way depends strongly on the time scale of decay of the inhibition [11, 12]. these models, and the ``fast-firing'' inhibitory cells that they represent, can display a large range of frequencies depending on the bias (the drive in hh, the parameter \(b\) in the theta model); however, in the presence of a small amount of heterogeneity in parameters, the rhythm falls apart unless the frequency is in the gamma range (30-80 hz) [4, 13]. this can be understood from spike response methods or in terms of rivers. the above rhythm is known as ing or interneuron gamma [14, 15]. a variation on this uses networks with fast-firing inhibitory cells (interneurons or i-cells) and excitatory cells (pyramidal cells or e-cells). this is called ping (pyramidal interneuron gamma) [14, 15]. heuristically, it is easy to understand the rhythm: the inhibitory cells are set so they do not fire without input from the e-cells. when the e-cells fire, they cause the i-cells to cross firing threshold and inhibit the e-cells, which fire again when the inhibition wears off.
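a minimal numerical sketch of the ping mechanism just described, with one e-cell and one i-cell each modeled as a theta neuron and the synaptic input entering through the \((1+\cos\theta)\) term. the coupling scheme and all parameter values are our own illustrative choices (they are not taken from [14, 15]), the time unit is nominal, and the numbers may need tuning to land in a particular frequency band.

```python
import numpy as np

def ping_pair(b_e=0.1, b_i=-0.02, g_ei=0.8, g_ie=0.5,
              tau_e=2.0, tau_i=10.0, T=300.0, dt=0.01):
    """One excitatory and one inhibitory theta neuron: E excites I (gating s_e,
    fast decay), I inhibits E (gating s_i, slow decay).  Returns the spike times."""
    th_e, th_i = 0.0, 0.0          # phases
    s_e, s_i = 0.0, 0.0            # synaptic gating variables
    spikes_e, spikes_i = [], []
    for k in range(int(T / dt)):
        I_e = b_e - g_ie * s_i     # the E-cell is held back by inhibition
        I_i = b_i + g_ei * s_e     # the I-cell fires only when driven by the E-cell
        th_e += dt * ((1 - np.cos(th_e)) + (1 + np.cos(th_e)) * I_e)
        th_i += dt * ((1 - np.cos(th_i)) + (1 + np.cos(th_i)) * I_i)
        s_e -= dt * s_e / tau_e
        s_i -= dt * s_i / tau_i
        if th_e > np.pi:           # E spike: wrap the phase, open the E-to-I synapse
            th_e -= 2 * np.pi
            s_e = 1.0
            spikes_e.append(k * dt)
        if th_i > np.pi:           # I spike: wrap the phase, open the I-to-E synapse
            th_i -= 2 * np.pi
            s_i = 1.0
            spikes_i.append(k * dt)
    return spikes_e, spikes_i

e_spk, i_spk = ping_pair()
# each E spike is followed closely by an I spike, and the period of the rhythm
# is set mainly by the decay time tau_i of the inhibition together with the bias b_e
print(np.round(np.diff(e_spk), 1))
```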
this simple mechanism becomes much more subtle when there is heterogeneity and noise in large networks, which will be discussed later.

the fast-firing cells described above are modeled using only the ionic currents needed to create a spike. most other neurons have channels to express many other ionic currents as well, with channel kinetics that range over a large span of time constants. these different currents change the dynamical behavior of the cells, and allow such cells to be ``type ii'', which means that the onset of rhythmic spiking as bias is changed is accompanied by a hopf bifurcation instead of a saddle node. the type of onset has important consequences for the ability of a pair of such cells to synchronize. e.g., models of the fast-firing neuron, if connected by excitatory synapses, do not synchronize, as can be shown from weak coupling or other methods described above (e.g., [7]). however, if the cells are type ii, they do synchronize stably with excitation (and not with inhibition). this was shown by gutkin and ermentrout using weak coupling methods [16]. a more specific case study was done by acker et al. [17], motivated by neurons in the part of the cortex that constitutes the input-output pathways to the hippocampus, a structure of the brain important to learning and recall. these cells are excitatory and of type ii (j. white, in prep.); models of these cells, based on knowledge of the currents that they express, do synchronize with excitatory synapses, and do not with inhibitory synapses. the synchronization properties of such cells can be understood from spike-timing functions and maps [17]. given the hh equations for the cell, one can introduce at any time in the cycle excitation or inhibition whose time course is similar to what the synapse would provide. from this, one can compute how much the next spike is advanced or delayed by this synapse. from such a graph, one can compute a spike-time map which takes the difference in spike times in a single cycle to the difference in the next cycle. the analysis of such a map is easy, but the process raises deeper mathematical issues. one set of issues concerns what is happening at the biophysical level that gives rise to the type ii bifurcation, which is associated with a particular shape of the spike advance function [18]. analysis shows that type ii behavior is associated with slow outward currents or certain slow inward currents that (paradoxically) turn on when the cell is inhibited [16, 17]; this shows how biophysical structure is connected with mathematical structure. a second set of questions concerns why the high-dimensional coupled hh equations can be well approximated by a 1-d map. (in some parameter ranges, but not all, this is an excellent approximation.) the mathematical issues here concern how large subsets of high-dimensional phase space collapse onto what is essentially a one-dimensional space. ideas similar to those in section 3.1 are relevant, but with different biophysics creating the collapse of the trajectories. in this case (and others) there are many different ionic currents, with many different time scales, so that a given current can be dominant in some portion of the trajectory and then decrease to zero while others take over; this leads to structure that is more complex than that of the traditional ``fast-slow'' equations, and which is not nearly as well understood.
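the spike-advance computation described above is straightforward to reproduce numerically. the sketch below does it for the theta neuron of section 3.1 rather than for the stellate-cell model of [17] (whose equations are not reproduced here): it measures how much an exponentially decaying synaptic input, delivered at a chosen phase of the cycle, advances or delays the next spike. all names and parameter values are our own.

```python
import numpy as np

def next_spike_time(b, g_syn=0.0, t_pulse=None, tau_syn=3.0, dt=0.001, t_max=200.0):
    """First spike time (theta crossing pi) of a theta neuron started at theta = -pi,
    with an optional synaptic input g_syn * exp(-(t - t_pulse)/tau_syn) switched on
    at t_pulse (g_syn > 0: excitation, g_syn < 0: inhibition)."""
    theta, t = -np.pi, 0.0
    while t < t_max:
        s = 0.0
        if t_pulse is not None and t >= t_pulse:
            s = g_syn * np.exp(-(t - t_pulse) / tau_syn)
        theta += dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * (b + s))
        t += dt
        if theta > np.pi:
            return t
    return np.inf

b = 0.1
T0 = next_spike_time(b)                       # unperturbed period of the cell
for frac in np.linspace(0.1, 0.9, 5):         # phase at which the input arrives
    adv_exc = T0 - next_spike_time(b, 0.2, frac * T0)    # excitation advances the spike
    adv_inh = T0 - next_spike_time(b, -0.2, frac * T0)   # inhibition delays it
    print(f"phase {frac:.1f}: advance(exc) = {adv_exc:+.2f}, advance(inh) = {adv_inh:+.2f}")
```

composing two such advance functions, one for each cell of a mutually coupled pair, gives the one-dimensional spike-time map referred to in the text.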
such reductions to 1-d maps have been used in other investigations of synchrony [19-21] involving multiple cells and multiple kinds of currents.

so far, i've talked about networks containing fast-firing neurons (inhibitory) or excitatory cells. but there are many different kinds of cells in the nervous system, with intrinsic and synaptic currents that make them dynamically very different from one another. once there are more currents with more time scales, it is easier to create more rhythms with different frequencies. that is, the differences in frequencies often (but not always) come from some time scales in the interacting currents, and cannot be scaled away. the stellate cell of section 3.2 is an excellent example of currents creating frequencies; in a wide range of parameters, these cells, even without coupling, form a theta rhythm. indeed, they are believed to be one of the primary sources of that rhythm in the hippocampus, which is thought by many to use these rhythms in tasks involving learning and recall. as described above, these cells are excitatory, and synchronize when coupled by excitation. more puzzling are inhibitory cells in the hippocampus that are capable of forming theta rhythms as isolated cells with ionic currents similar to those in the stellate cells. the puzzle is that these cells do not cohere (in models) using inhibitory coupling. (the decay time of inhibition caused by these cells is roughly four times longer than that of the inhibition caused by the fast-firing cells, but neither fast nor this slower decaying inhibition creates synchrony in models.) so what is providing the coherence seen in the theta rhythm? (the rhythm can be seen in small slices that do not have inputs from other parts of the brain producing theta, so in such a paradigm, the rhythm must be produced locally.) one suggestion (rotstein, kopell, whittington, in preparation) is that the inhibitory rhythms seen in slice preparations with excitation blocked pharmacologically depend on both kinds of inhibitory cells discussed, the special ones (called o-lm cells [22]) and the others. simulations show that networks of these cells can have the o-lm cells synchronize and the i-cells synchronize at a different phase, to create an inhibitory network with considerably more complexity than the interacting fast-firing cells involved in ing. again, this can be reduced to a low-dimensional map for a minimal network (two o-lm cells, one fast-firing i-cell). however, the reduction now requires properties of the currents involved in the o-lm model, including the kinetics of the gating variables.

another set of mathematical issues is associated with transitions among rhythms. in general, rhythms slower than gamma (e.g., beta, theta and alpha) make use of ionic currents that are active between spikes. these currents are voltage-dependent, so that changes in voltage, in the sub- and super-threshold regimes, can turn on or off these currents. thus, neuromodulators that change the voltage range of a neuron (e.g., by changing a leak current) can change which other currents are actively expressed.
in that way, they can cause a switch from one rhythm to another. for example, models of the alpha rhythm [20] suggest that this rhythm makes use of the inhibition-activated ``h-current''; this current is effectively off line if the voltage is increased (even below threshold level). thus, a switch from alpha to a faster rhythm (gamma or beta) can be effected by simply making the e-cells operate in a moderately higher voltage regime. these switches can be seen in simulations (pinto, jones, kaper, kopell, in prep.), but are still understood only heuristically. the mathematical issues are associated with reduction of dimension methods. in the regime in which the network is displaying alpha, there are many more variables that are actively changing, notably the gating variables of each of the currents that is important in this rhythm. when there is a switch to gamma and those currents go off line, the phase space becomes effectively smaller. the mathematics here involves understanding how that phase compression takes place. a related set of mathematical questions concerns rhythms that are ``nested'', one within another. for example, the theta rhythm often presents as the envelope of a series of faster gamma cycles, and the beta rhythm, at least in some manifestations, occurs with the i-cells firing at a gamma rhythm and the e-cells firing at the slower beta rhythm, missing some cycles of the inhibitory rhythm. the gamma/beta switch has been understood from a physiological point of view (see [19] and its references) and has been simulated. the gamma/theta nesting is less understood, though new data and simulations are providing the physiological and heuristic basis for this [22; rotstein, kopell, whittington, in prep.].

though there are many more examples of other building blocks, i'm turning to issues that do not appear in small network analysis. i'm going to go back to a very simple building block, but now put many such together. the simple building block is one e-cell, one i-cell, which together can create a gamma rhythm.

we now consider a network with n e-cells and m i-cells, with random coupling from the e-cells to the i-cells and vice versa. suppose, for example, there is a fixed probability of connection in each direction between any pair of e and i cells. then the number of inputs to any cell is distributed across the population, leading to heterogeneity of excitation and inhibition. is it still possible to get coherent gamma rhythms? this can be answered with mathematical analysis using the ``theta neuron'' model described above [9]. to understand synchrony in e/i networks, it is helpful to understand what each pulse of inhibition does to the population of excitatory cells and vice versa. the part in which both probability and dynamical systems play a large role is the effect of a pulse of inhibition on a population. the ``rivers'' referred to above in section 3.1 create synchronization if the inputs to cells have no variance, but with variation in the size of the inputs, there is a spread in the times of the outputs. this can be accurately computed using features of the dynamics and probability theory. similarly, but with less accuracy, one can compute the effect of variation of inputs on the spike times of the receiving population due to a pulse of excitation. the results lead to unintuitive conclusions, e.g.
, that increasing the strength of the inhibition ( which strengthens the synchronizing effect of the rivers ) does not reduce the desynchronizing effects of random connectivity .furthermore , tight synchrony can be obtained even with extremely sparse coupling provided that variance in the size of the inputs is small .-5 mm the above analyses can be put together to understand synchrony of `` ping '' . however , they leave only partially answered many questions about larger networks .one such question , which is central to understanding how the assemblies of neurons are created and destroyed , is the circumstances under which the synchrony falls apart , i.e. , what modulations of cells and/or synapses will lead to loss of coherence of the gamma rhythm .the above analysis shows that too large a variation in size of inputs to different cells of the same population can be fatal .similar phenomena occur with too much variation in drive or intrinsic currents .there are less obvious constraints that are understood from working with smaller networks described above . from those ,it is possible to see that ing and ping operate in different parameter regimes : the firing times of the population in ing are governed by the bias of the i - cells ( as well as the decay time of the inhibition ) ; in ping , the inhibitory cells are more passive until driven by the e - cells , and the timing comes from bias of the e - cells ( as well as decay of inhibition ) .this means that the mechanism of coherence can switch between ing and ping by changing relative excitability of the two populations .changing the strengths of the i - e and e - i synapses can also get the population ( large or small ) out of the regime in which the e - cells synchronize the i - cells , and vica versa .a more mysterious issue that can not be addressed within minimal networks is how the size of the sub - populations responding on a given cycle affects the coherence on the next cycle and the numbers of cells participating , especially when there is some heterogeneity in the network .e.g. , as the number of inhibitory neurons firing in a cycle changes , it changes the total inhibition to the e - cells , which changes the number of e - cells that are ready to fire when inhibition wears off , and before the next bout of inhibition .if the amount of inhibition gets too small , or inhibition gets too dispersed , the coherence can rapidly die . without taking into account the trajectories of each of the large number of cells , it is likely that some possibly probabilistic account of the numbers of cells spiking per cycle can give some insight into the dynamical mechanisms surrounding failure of coherence .such a reduction has been successfully used in a different setting , involving the long - distance coherence of two populations of heterogeneous cells . in this case , if the populations are each minimal ( one e / i pair ) for each site , there is 1-d map that describes the synchronization , with the variable the timing between the e and i sites [ 19 ] . for large and heterogeneous networks , the synchronization ( within some parameter regimes ) can be described by a 3-dimensional map , in which the first variable is the time between the first spikes of a cycle in the two e - cell populations , and the others are the fraction of i - cells firing on that cycle in each of the two i - cell populations ( mcmillen and kopell , in prep . 
) .related work has been done from a different perspective , starting with asynchronous networks and asking how the asynchrony can lose stability [ 23 - 25 ] .work using multiple time scales to address the formation of `` clusters '' when synchrony fails is in [ 26 ] .-5 mm one of the main differences between ing and ping is the difference in robustness .small amounts of heterogeneity of any kind make ing coherence fall apart dramatically [ 4,13 ] . by contrast , ping is tolerant to large ranges of heterogeneity .the `` ping - pong '' mechanism of ping is also able to produce frequencies that cover a much wider range than the ing mechanism , which is constrained by loss of coherence to lie in the gamma range of approximately 30 - 80 hz [ 4,13 ] .since many versions of gamma seen in experiments are of the ping variety , this raises the question of what constrains the ping rhythms to stay in the gamma frequency range .a possible answer to this comes from simulations .c. borgers , d. mcmillen and i found that heterogeneity , unless extreme , would not disrupt the ping coherence .however , a very small amount of noise ( with fixed amplitude and poisson - distributed times ) could entirely destroy coherence of the ping , provided the latter had a frequency below approximately 30 hz ; if the same noise is introduced when the network is in the gamma range , the behavior is only slightly perturbed .furthermore , the ability to withstand the noise is related to adding some i - i connections , as in ing .a heuristic explanation is that , at low frequencies , the inhibition to the i - cells ( which has a time constant around 10ms ) wears off before the excitation from the e - cells causes these cells to spike .thus , those cells hang around the threshold for significant amounts of time , and are therefore vulnerable to being pushed over threshold by noise .the mathematics has yet to be understood rigorously .-5 mm the mathematical questions are themselves interesting , but the full richness of the scientific endeavor comes from the potential for understanding how the rhythms generated by the brain might be used in sensory processing , motor coordination and cognition .we are still at the outer edges of such an investigation , but there are many clues from animal behavior , physiology and mathematics .work done with eegs ( see , e.g. , reviews [ 27,28 ] ) has shown that many cognitive and motor tasks are associated with specific rhythms appearing in different parts of the tasks .gamma is often associated with attention , awareness and perception , beta with preparation for motor activies and high - order cognitive tasks , theta with learning and recall and alpha with quiet awareness ( there are several different versions of alpha in different parts of the brain and found in different circumstances ) .work done in whole animals and in slice preparations are giving clues to the underlying physiology of the rhythms , and how various neuromodulators change the rhythms , e.g. 
, [ 14 ] .much of the math done so far has concerned how the networks produce their rhythms from their ionic currents and connectivity , and has not directly addressed function .however , the issues of function are starting to be addressed in terms of how the dynamics of networks affects the computational properties of the latter .one of the potential functions for these rhythms is the creation of `` cell assemblies '' , temporary sets of neurons that fire synchronously .these assemblies are believed to be important in distributed processing ; they enhance the effect of the synchronized pulses downstream , and provide a substrate for changes in synapses that help to encode experience .( `` cells that fire together wire together . '' ) simulations show , and help to explain , why gamma rhythms have especially good properties for creating cells assemblies , and repressing cells with lower excitatbility or input [ 29 ] .furthermore , the changes in synapses known to occur during gamma can facilitate the creation of the beta rhythm ( see [ 19 ] for references ) , which appears in higher - order processing .mathematical analysis shows that the beta rhythm is more effective for creating synchrony over distances where the conduction time is longer .thus , we can understand the spontaneous gamma - beta switch seen in various circumstances ( see [ 19 ] and [ 29 ] ) as creating cell assemblies ( during the gamma portion ) , using the synaptic changes to get cell assemblies encoded in the beta rhythm , and then using the beta rhythm to form highly distributed cell assemblies .the new flood of data , plus the new insights from the mathematics , are opening up many avenues for mathematical research related to rhythms and function . a large class of such questions concerns how networks that are displaying given rhythms filter inputs with spatio - temporal structure , and how this affects the changing cell assemblies .this question is closely related to the central and controversial questions of what is the neural code and how does it operate .these questions will likely require new techniques to combine dynamical systems and probability , new ways to reduce huge networks to ones amenable to analysis , and new ideas within dynamical systems itself , e.g. , to understand switches as global bifurcations ; these are large and exciting challenges to the mathematical community .n. kopell and g.b .ermentrout , mechanisms of phase - locking and frequency control in pairs of coupled neural oscillators , in _handbook on dynamical systems _ ,_ toward applications _ .b. fiedler , elsevier ( 2002 ) , 354 .whittington , r.d .traub , n. kopell , g.b .ermentrout and e.h .buhl , inhibition - based rhythms : experimental and mathematical observation on network dynamics , _ int .j. of psychophysiology _ * 38 * ( 2000 ) , 315336 . m.j .gillies , r.d .traub , f.e.n lebeau , c.h .davies , t. gloveli , e.h .buhl and m.a .whittington , stratum oriens interneurons temporally coordinate atropine - resistant theta oscillations in hippocampal area ca1 , to appear in _j. neurophysiol . | the nervous system displays a variety of rhythms in both waking and sleep . these rhythms have been closely associated with different behavioral and cognitive states , but it is still unknown how the nervous system makes use of these rhythms to perform functionally important tasks . 
to address those questions , it is first useful to understand in a mechanistic way the origin of the rhythms , their interactions , the signals which create the transitions among rhythms , and the ways in which rhythms filter the signals to a network of neurons . this talk discusses how dynamical systems have been used to investigate the origin , properties and interactions of rhythms in the nervous system . it focuses on how the underlying physiology of the cells and synapses of the networks shapes the dynamics of the network in different contexts , allowing a variety of dynamical behaviors to be displayed by the same network . the work is presented using a series of related case studies on different rhythms . these case studies are chosen to highlight mathematical issues , and to suggest further mathematical work to be done . the topics include : different roles of excitation and inhibition in creating synchronous assemblies of cells , different kinds of building blocks for neural oscillations , and transitions among rhythms . the mathematical issues include the reduction of large networks to low dimensional maps , the role of noise , global bifurcations , and the use of probabilistic formulations . * 2000 mathematics subject classification : * 37n25 , 92c20 . |
relativistic flows are involved in many of the high - energy astrophysical phenomena , such as , for example , jets in extragalactic radio sources , accretion flows around compact objects , pulsar winds and ray bursts . in many instances the presence of a magnetic field is also an essential ingredient for explaining the physics of these objects and interpreting their observational appearance .theoretical understanding of relativistic phenomena is subdue to the solution of the relativistic magnetohydrodynamics ( rmhd ) equations which , owing to their high degree of nonlinearity , can hardly be solved by analytical methods .for this reason , the modeling of such phenomena has prompted the search for efficient and accurate numerical formulations . in this respect ,godunov - type schemes have gained increasing popularity due to their ability and robustness in accurately describing sharp flow discontinuities such as shocks or tangential waves .one of the fundamental ingredient of such schemes is the exact or approximate solution to the riemann problem , i.e. , the decay between two constant states separated by a discontinuity .unfortunately the use of an exact riemann solver is prohibitive because of the huge computational cost related to the high degree of nonlinearities present in the equations . instead, approximate methods of solution are preferred .linearized solvers rely on the rather convoluted eigenvector decomposition of the underlying equations and may be prone to numerical pathologies leading to negative density or pressures inside the solution . characteristic - free algorithms based on the rusanov lax - friedrichs or the harten - lax - van leer ( hll , * ? ? ?* ) formulations are sometime preferred due to their ease of implementation and positivity properties .implementation of such algorithms can be found in the codes described by .although simpler , the hll scheme approximates only two out of the seven waves by collapsing the full structure of the riemann fan into a single average state . these solvers , therefore , are not able to resolve intermediate waves such as alfvn , contact and slow discontinuities .attempts to restore the middle contact ( or entropy ) wave ( hllc , initially devised for the euler equations by * ? ? ?* ) have been proposed by in the case of purely transversal fields and by ( mb from now on ) , in the more general case .these schemes provide a relativistic extension of the work proposed by and for the classical mhd equations .hllc solvers for the equations of mhd and rmhd , however , still do not capture slow discontinuities and alfvn waves . besides, direct application of the hllc solver of mb to genuinely 3d problems was shown to suffer from a ( potential ) pathological singularity when the component of magnetic field normal to a zone interface approaches zero .a step forward in resolving intermediate wave structures was then performed by ( mk from now on ) who , in the context of newtonian mhd , introduced a four state solver ( hlld ) restoring the rotational ( alfvn ) discontinuities . 
in this paperwe propose a generalization of miyoshi & kusano approach to the equations of relativistic mhd .as we shall see , this task is greatly entangled by the different nature of relativistic rotational waves across which the velocity component normal to the interface is no longer constant .the proposed algorithm has been implemented in the pluto code for astrophysical fluid dynamics which embeds a computational infrastructure for the solution of different sets of equations ( e.g. , euler , mhd or relativistic mhd conservation laws ) in the finite volume formalism .the paper is structured as follows : in [ sec : equations ] we briefly review the equations of relativistic mhd ( rmhd ) and formulate the problem . in [ sec : solver ] the new riemann solver is derived .numerical tests and astrophysical applications are presented in [ sec : num ] and conclusions are drawn in [ sec : conclusions ] .the equations of relativistic magnetohydrodynamics ( rmhd ) are derived under the physical assumptions of constant magnetic permeability and infinite conductivity , appropriate for a perfectly conducting fluid . in divergence form , they express particle number and energy - momentum conservation : & = & 0\ , , \\\noalign{\medskip } \label{eq : clind } \partial_\mu\left(u^\mu b^\nu - u^\nu b^\mu\right ) & = & 0\,,\end{aligned}\ ] ] where is the rest mass density , is the four - velocity ( lorentz factor , three velocity ) , and are the gas enthalpy and thermal pressure , respectively .the covariant magnetic field is orthogonal to the fluid four - velocity ( ) and is related to the local rest frame field by \,.\ ] ] in eq .( [ eq : clmomen ] ) , is the squared magnitude of the magnetic field .the set of equations ( [ eq : clmass])([eq : clind ] ) must be complemented by an equation of state which may be taken as the constant -law : where is the specific heat ratio .alternative equations of state ( see , for example , * ? ? ?* ) may be adopted . in the following we will be dealing with the one dimensional conservation law which follows directly from eq .( [ eq : clmass])-([eq : clind ] ) by discarding contributions from and .conserved variables and corresponding fluxes take the form : where , is the the density as seen from the observer s frame while , introducing ( total enthalpy ) and ( total pressure ) , are the momentum and energy densities , respectively . is the kronecker delta symbol .note that , since , the normal component of magnetic field ( ) does not change during the evolution and can be regarded as a parameter .this is a direct consequence of the condition . a conservative discretization of eq . ( [ eq:1dconslaw ] ) over a time step yields where is the mesh spacing and is the upwind numerical flux computed at zone faces by solving , for , the initial value problem defined by eq . ( [ eq:1dconslaw ] ) together with the initial condition where and are discontinuous left and right constant states on either side of the interfacethis is also known as the riemann problem . for a first order scheme , and . the decay of the initial discontinuity given by eq .( [ eq : riemann ] ) leads to the formation of a self - similar wave pattern in the plane where fast , slow , alfvn and contact modes can develop . 
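as an illustration of the first - order conservative update and of how the interface fluxes enter it , the following minimal python sketch advances a 1d array of conserved zone averages by one step ; the state layout , the riemann_flux callable and the helper names are placeholders standing in for the rmhd flux expressions and for whichever approximate solver ( hll , hllc or the hlld solver derived below ) is plugged in , and are not taken from the paper itself .

```python
import numpy as np

def advance_one_step(U, dx, dt, riemann_flux):
    """First-order (Godunov-type) update of the 1D conservation law.

    U            : array of shape (nx, nvar) with the conserved zone averages
    riemann_flux : callable (U_L, U_R) -> interface flux of shape (nvar,),
                   e.g. an HLL, HLLC or HLLD approximate Riemann solver
    """
    nx = U.shape[0]
    F = np.empty((nx - 1, U.shape[1]))
    # one Riemann problem per interface; piecewise-constant (first-order)
    # reconstruction means the left/right states are just the zone averages
    for i in range(nx - 1):
        F[i] = riemann_flux(U[i], U[i + 1])
    Unew = U.copy()
    # conservative update: u_i^{n+1} = u_i^n - (dt/dx) * (F_{i+1/2} - F_{i-1/2})
    Unew[1:-1] -= (dt / dx) * (F[1:] - F[:-1])
    return Unew          # the two boundary zones are left to the ghost-cell logic

def cfl_timestep(max_signal_speed, dx, cfl):
    # time step from a CFL condition; the CFL number itself is left as an input
    return cfl * dx / max_signal_speed
```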
at the double end of the riemann fan , two fast magneto - sonic waves bound the emerging pattern enclosing two rotational ( alfvn ) discontinuities , two slow magneto - sonic waves and a contact surface in the middle .the same patterns is also found in classical mhd .fast and slow magneto - sonic disturbances can be either shocks or rarefaction waves , depending on the pressure jump and the norm of the magnetic field .all variables ( i.e. density , velocity , magnetic field and pressure ) change discontinuously across a fast or a slow shock , whereas thermodynamic quantities such as thermal pressure and rest density remain continuous when crossing a relativistic alfvn wave .contrary to its classical counterpart , however , the tangential components of magnetic field trace ellipses instead of circles and the normal component of the velocity is no longer continuous across a rotational discontinuity , .finally , through the contact mode , only density exhibits a jump while thermal pressure , velocity and magnetic field remain continuous .the complete analytical solution to the riemann problem in rmhd has been recently derived in closed form by and number of properties regarding simple waves are also well established , see .for the special case in which the component of the magnetic field normal to a zone interface vanishes , a degeneracy occurs where tangential , alfvn and slow waves all propagate at the speed of the fluid and the solution simplifies to a three - wave pattern , see .the high degree of nonlinearity inherent to the rmhd equations makes seeking for an exact solution prohibitive in terms of computational costs and efficiency . for this reasons ,approximate methods of solution are preferred instead .and are connected to each other through a set of five waves representing , clockwise , a fast shock , a rotational discontinuity , a contact wave , a rotational discontinuity and a fast shock .the outermost states , and are given as input to the problem , whereas the others must be determined consistently solving the rankine - hugoniot jump conditions.,scaledwidth=50.0% ] without loss of generality , we place the initial discontinuity at and set .following mk , we make the assumption that the riemann fan can be divided by waves : two outermost fast shocks , and , enclosing two rotational discontinuities , and , separated by the entropy ( or contact ) mode with speed .note that slow modes are not considered in the solution .the five waves divide the plane into the six regions shown in fig [ fig : fan ] , corresponding ( from left to right ) to the states with . the outermost states ( and ) are given as input to the problem , while the remaining ones have to be determined . in the typical approach used to construct hll - based solvers , the outermost velocities and are also provided as estimates from the input left and right states . as in mb, we choose to use the simple davis estimate . 
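the davis estimate for the outermost speeds and the single average state obtained from the integral consistency condition are the two - wave ( hll ) building blocks on which the five - wave construction below rests ; a compact sketch is given here , where lambda_minus and lambda_plus are assumed to be user - supplied routines returning the smallest and largest fast magnetosonic eigenvalues of a state ( their rmhd expressions are not reproduced ) .

```python
def davis_speeds(U_L, U_R, lambda_minus, lambda_plus):
    """Davis-type estimate of the leftmost and rightmost signal speeds.

    lambda_minus / lambda_plus must return the min / max fast magnetosonic
    eigenvalue of a state; their RMHD expressions are assumed to be given.
    """
    lam_L = min(lambda_minus(U_L), lambda_minus(U_R))
    lam_R = max(lambda_plus(U_L), lambda_plus(U_R))
    return lam_L, lam_R

def hll_state_and_flux(U_L, U_R, F_L, F_R, lam_L, lam_R):
    """Integral-average (HLL) state over the Riemann fan and the associated flux.

    U_L, U_R, F_L, F_R are numpy arrays of conserved variables and fluxes.
    """
    if lam_L >= 0.0:                  # fan entirely to the right of the interface
        return U_L, F_L
    if lam_R <= 0.0:                  # fan entirely to the left of the interface
        return U_R, F_R
    dl = lam_R - lam_L
    U_hll = (lam_R * U_R - lam_L * U_L - (F_R - F_L)) / dl
    F_hll = (lam_R * F_L - lam_L * F_R + lam_L * lam_R * (U_R - U_L)) / dl
    return U_hll, F_hll
```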
acrossany given wave , states and fluxes must satisfy the jump conditions {\lambda } \equiv \big(\lambda{\bmath{u } } - { \bmath{f}}\big)_+ - \big(\lambda{\bmath{u } } - { \bmath{f}}\big)_- = 0\,,\ ] ] where and identify , respectively , the state immediately ahead or behind the wave front .note that for consistency with the integral form of the conservation law over the rectangle \times[0 , \delta t] ] , the previous equations further imply that ( when ) also , , and do not change across : being invariant , can be computed from the state lying to the left ( for ) or to the right ( for ) of the discontinuity , thus being a function of the total pressure alone . instead of using eq .( [ eq : k ] ) , an alternative and more convenient expression may be found by properly replacing with in eq .( [ eq : jumpd])([eq : jumpbk ] ) .after some algebra one finds the simpler expression still being a function of the total pressure .note that , similarly to its non relativistic limit , we can not use the equations in ( [ eq : alfd])([eq : alfb ] ) to compute the solution across the rotational waves , since they do not provide enough independent relations .instead , a solution may be found by considering the jump conditions across both rotational discontinuities and properly matching them using the conditions at the contact mode . at the contact discontinuity ( cd )only density and total enthalpy can be discontinuous , while total pressure , normal and tangential fields are continuous as expressed by eq .( [ eq : contact_conditions ] ) .since the magnetic field is a conserved quantity , one can immediately use the consistency condition between the innermost waves and to find across the cd .indeed , from one has , where {ar } - \left[b^k(\lambda - v^x ) + b^xv^k\right]_{al } } { \lambda_{ar } - \lambda_{al } } \ , . \ ] ] since quantities in the and regions are given in terms of the unknown , eq .( [ eq : bc ] ) are also functions of alone . at this point , we take advantage of the fact that to replace with and then rewrite ( [ eq : k ] ) as the previous equations form a linear system in the velocity components and can be easily inverted to the left and to the right of the cd to yield which also depend on the total pressure variable only , with and being given by ( [ eq : fastw ] ) and ( [ eq : k_simple ] ) and the s being computed from eq .( [ eq : bc ] ) . imposing continuity of the normal velocity across the cd , , results in = 0 \,,\ ] ] where is a function of only and is the numerator of ( [ eq : bc ] ) and .equation ( [ eq : final ] ) is a nonlinear function in and must be solved numerically .once the iteration process has been completed and has been found to some level of accuracy , the remaining conserved variables to the left and to the right of the cd are computed from the jump conditions across and and the definition of the flux , eq .( [ eq : flux ] ) .specifically one has , for or , this concludes the derivation of our riemann solver . in the previous sections we have shown that the whole set of jump conditions can be brought down to the solution of a single nonlinear equation , given by ( [ eq : final ] ) , in the total pressure variable . in the particular case of vanishing normal component of the magnetic field ,i.e. , this equation can be solved exactly as discussed in [ sec : bx0 ] . 
for the more general case, the solution has to be found numerically using an iterative method where , starting from an initial guess , each iteration consists of the following steps : * given a new guess value to the total pressure , start from eq .( [ eq : vxa])([eq : vza ] ) to express and as functions of the total pressure .also , express magnetic fields , and total enthalpies , using eq .( [ eq : bv ] ) and eq .( [ eq : fastw ] ) , respectively . *compute and using eq .( [ eq : k_simple ] ) and the transverse components of using eq .( [ eq : bc ] ) .( [ eq : final ] ) to find the next improved iteration value . for the sake of assessing the validity of our new solver , we choose the secant method as our root - finding algorithm .the initial guess is provided using the following prescription : where is the total pressure computed from the hll average state whereas is the solution in the limiting case .extensive numerical testing has shown that the total pressure computed from the hll average state provides , in most cases , a sufficiently close guess to the correct physical solution , so that no more than iterations ( for zones with steep gradients ) were required to achieve a relative accuracy of .the computational cost depends on the simulation setting since the average number of iterations can vary from one problem to another . however , based on the results presented in [ sec : num ] , we have found that hlld was at most a factor of slower than hll . for a solution to be physically consistent andwell - behaved , we demand that hold simultaneously .these conditions guarantee positivity of density and that the correct eigenvalue ordering is always respected .we warn the reader that equation ( [ eq : final ] ) may have , in general , more than one solution and that the conditions given by ( [ eq : conditions ] ) may actually prove helpful in selecting the correct one .however , the intrinsic nonlinear complexity of the rmhd equations makes rather arduous and challenging to prove , _ a priori _ , both the existence and the uniqueness of a physically relevant solution , in the sense provided by ( [ eq : conditions ] ) . on the contrary, we encountered sporadic situations where none of the zeroes of eq .( [ eq : final ] ) is physically admissible .fortunately , these situations turn out to be rare eventualities caused either by a large jump between left and right states ( as at the beginning of integration ) or by under- or over- estimating the propagation speeds of the outermost fast waves , and .the latter conclusion is supported by the fact that , enlarging one or both wave speeds , led to a perfectly smooth and unique solution. therefore , we propose a safety mechanism whereby we switch to the simpler hll riemann solver whenever at least one or more of the conditions in ( [ eq : conditions ] ) is not fulfilled . 
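a minimal sketch of the root - finding step and of the safety fallback just described is given below ; the closure residual , the construction of the fan states , the admissibility test and the flux assembly are passed in as callables because their detailed rmhd expressions are not reproduced here , and the tolerance , the iteration cap and the second initial guess are illustrative choices rather than values prescribed by the paper .

```python
def solve_total_pressure(f, p0, p1, tol=1e-9, max_iter=50):
    """Secant iteration for the root of the scalar closure equation f(p) = 0.

    f(p) is the nonlinear function whose zero enforces continuity of the
    normal velocity across the contact mode.  Tolerance and iteration cap
    are illustrative, not values taken from the paper.
    """
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        if f1 == f0:                              # degenerate secant step
            return p1, False
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) <= tol * max(abs(p2), 1.0):
            return p2, True
        p0, f0 = p1, f1
        p1, f1 = p2, f(p2)
    return p1, False

def five_wave_flux(p_hll, closure, fan_states, admissible,
                   flux_from_fan, hll_fallback_flux):
    """Five-wave flux with the safety mechanism described in the text.

    p_hll             : initial guess for the total pressure (e.g. from the HLL state)
    closure           : callable p -> residual of the scalar closure equation
    fan_states        : callable p -> the intermediate states of the Riemann fan
    admissible        : callable states -> True if ordering/positivity conditions hold
    flux_from_fan     : callable states -> the HLLD interface flux
    hll_fallback_flux : precomputed two-wave HLL flux used as the safety net
    """
    p, converged = solve_total_pressure(closure, p_hll, 1.01 * p_hll)
    if converged:
        states = fan_states(p)
        if admissible(states):
            return flux_from_fan(states)
    return hll_fallback_flux      # revert to HLL when no admissible root is found
```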
from several numerical tests , including the ones shown here , we found the occurrence of these anomalies to be limited to few zones of the computational domain , usually less than in the tests presented here .we conclude this section by noting that other more sophisticated algorithms may in principle be sought .one could , for instance , provide a better guess to the outer wave - speeds and or even modify them accordingly until a solution is guaranteed to exist .another , perhaps more useful , possibility is to bracket the solution inside a closed interval ] and the discontinuity is placed at .the resolution and final integration time can be found in the last two columns of table 1 .unless otherwise stated , we employ the constant law with .the rmhd equations are solved using the first - order accurate scheme ( [ eq:1st_ord ] ) with a cfl number of .numerical results are compared to the hllc riemann solver of mb and to the simpler hll scheme and the accuracy is quantified by computing discrete errors in l-1 norm : where is the first - order numerical solution ( density or magnetic field ) , is the reference solution at and is the mesh spacing . for tests we obtained a reference solution using the second - order scheme of mb on zones and adaptive mesh refinement with levels of refinement ( equivalent resolution grid points ) .grid adaptivity in one dimension has been incorporated in the pluto code using a block - structured grid approach following . for test ,we use the exact numerical solution available from .errors ( in percent ) are shown in fig . [fig : error ] . .density and component of magnetic field are shown , respectively .the different symbols show results computed with the new hlld solver ( filled circles ) , the hllc solver ( crosses ) and the simpler hll solver ( plus signs ) .note that only hlld is able to capture exactly both discontinuities by keeping them perfectly sharp without producing any grid diffusion effect .hllc can capture the contact wave but not the rotational discontinuity , whereas hll spreads both of them on several grid zones.,scaledwidth=50.0% ] we now show that our hlld solver can capture _ exactly _ isolated contact and rotational discontinuities .the initial conditions are listed at the beginning of table 1 . in the case of an isolated stationary contact wave ,only density is discontinuous across the interface . the left panel in fig .[ fig : isolated ] shows the results at computed with the hlld , hllc and hll solvers : as expected our hlld produces no smearing of the discontinuity ( as does hllc ) . on the contrary , the initial jump broadens over several grid zone when employing the hll scheme . across a rotational discontinuity ,scalar quantities such as proper density , pressure and total enthalpy are invariant but vector fields experience jumps .the left and right states on either side of an exact rotational discontinuity can be found using the procedure outlined in the appendix .the right panel in fig .[ fig : isolated ] shows that only hlld can successfully keep a sharp resolution of the discontinuity , whereas both hllc and hll spread the jump over several grid points because of the larger numerical viscosity . 
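for reference , the discrete l-1 error used in the comparisons below can be computed as in the sketch that follows ; the reference solution is assumed to be available on the same grid as the numerical one , and the percent normalisation is an assumption made here for illustration rather than the paper's exact definition .

```python
import numpy as np

def l1_error(q_num, q_ref, dx):
    """Discrete L-1 error: sum over zones of |q_num - q_ref| times the mesh spacing."""
    return dx * np.sum(np.abs(np.asarray(q_num) - np.asarray(q_ref)))

def l1_error_percent(q_num, q_ref, dx):
    """Same error expressed in percent of the L-1 norm of the reference solution.

    The normalisation is an assumption; only the absolute L-1 form is stated
    explicitly in the text.
    """
    return 100.0 * l1_error(q_num, q_ref, dx) / (dx * np.sum(np.abs(np.asarray(q_ref))))
```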
.computations are carried on zones using the hlld ( solid line ) , hllc ( dashed line ) and hll ( dotted line ) riemann solver , respectively .the top panel shows , from left to right , the rest mass density , gas pressure , total pressure .the bottom panel shows the and components of velocity and the component of magnetic field.,scaledwidth=50.0% ] .density and the two components of velocity are shown in the left , central and right panels , respectively .diamonds , crosses and plus signs are used for the hlld , hllc and hll riemann solver , respectively.,scaledwidth=50.0% ] the first shock tube test is a relativistic extension of the brio wu magnetic shock tube and has also been considered by and in mb .the specific heat ratio is .the initial discontinuity breaks into a left - going fast rarefaction wave , a left - going compound wave , a contact discontinuity , a right - going slow shock and a right - going fast rarefaction wave .rotational discontinuities are not generated . in figs .[ fig : st1]-[fig : st1_close ] we plot the results obtained with the first - order scheme and compare them with the hllc riemann solver of mb and the hll scheme .although the resolution across the continuous right - going rarefaction wave is essentially the same , the hlld solver offers a considerable improvement in accuracy in the structures located in the central region of the plots . indeed , fig .[ fig : st1_close ] shows an enlargement of the central part of the domain , where the compound wave ( at ) , contact ( ) and slow shock ( ) are clearly visible . besides the steeper profiles of the contact and slow modes , it is interesting to notice that the compound wave , composed of a slow shock adjacent to a slow rarefaction wave , is noticeably better resolved with the hlld scheme than with the other two .these results are supported by the convergence study shown in the top left panel of fig .[ fig : error ] , demonstrating that the errors obtained with our new scheme are smaller than those obtained with the hllc and hll solvers ( respectively ) . at the largest resolution employed , for example ,the l-1 norm errors become and smaller than the hll and hllc schemes , respectively . the cpu times required by the different riemann solvers on this particular testwere found to be scale as . on grid points . 
from left to right , the top panel shows density , gas and total pressure .the middle panel shows the three components of velocity , whereas in the bottom panel we plot the lorentz factor and the transverse components of magnetic field .solid , dashed and dotted lines are used to identify results computed with hlld , hllc and hll , respectively.,scaledwidth=50.0% ] around the contact wave .middle and right panels : close - ups of the component of velocity and component of magnetic field around the right - going slow shock and alfvn discontinuity .different symbols refer to different riemann solver , see the legend in the left panel.,scaledwidth=50.0% ] this test has also been considered in and in mb and the initial condition comes out as a non - planar riemann problem implying that the change in orientation of the transverse magnetic field across the discontinuity is ( thus different from zero or ) .the emerging wave pattern consists of a contact wave ( at ) separating a left - going fast shock ( ) , alfvn wave ( ) and slow rarefaction ( ) from a slow shock ( ) , alfvn wave ( ) and fast shock ( ) heading to the right .computations carried out with the order accurate scheme are shown in fig .[ fig : st2 ] using the hlld ( solid line ) , hllc ( dashed line ) and hll ( dotted line ) .the resolution across the outermost fast shocks is essentially the same for all riemann solvers . across the entropy modeboth hlld and hllc attain a sharper representation of the discontinuity albeit unphysical undershoots are visible immediately ahead of the contact mode .this is best noticed in the the left panel of fig .[ fig : st2_close ] , where an enlargement of the same region is displayed .on the right side of the domain , the slow shock and the rotational wave propagate quite close to each other and the first - order scheme can barely distinguish them at a resolution of zones .however , a close - up of the two waves ( middle and right panel in fig .[ fig : st2_close ] ) shows that the proposed scheme is still more accurate than hllc in resolving both fronts . on the left hand side ,the separation between the alfvn and slow rarefaction waves turns out to be even smaller and the two modes blur into a single wave because of the large numerical viscosity .this result is not surprising since these features are , in fact , challenging even for a second - order scheme .discrete l-1 errors computed using eq .( [ eq : error ] ) are plotted as function of the resolution in the top right panel of fig .[ fig : error ] . for this particular test ,hlld and hllc produce comparable errors ( and at the highest resolution ) while hll performs worse on contact , slow and alfvn waves resulting in larger deviations from the reference solution .the computational costs on grid zones has found to be . .from top to bottom , left to right , the panels show density , gas pressure , total pressure , and components of velocity ( ) and component of magnetic field .the components have been omitted since they are identical to the components .solid , dashed and dotted lines refer to computations obtained with the hlld , hllc and hll solvers . computational zones were used in the computations.,scaledwidth=50.0% ] .filled circles crosses and plus sign have the same meaning as in fig .[ fig : st2_close ] .note the wall heating problem evident in the density profile ( left panel ) .central and right panels show the transverse field profiles . 
clearlythe resolution of the slow shocks ( ) improves from hll to hllc and more from hllc to hlld.,scaledwidth=50.0% ] in this test problem we consider the interaction of two oppositely colliding relativistic streams , see also and mb .after the initial impact , two strong relativistic fast shocks propagate outwards symmetrically in opposite direction about the impact point , , see fig .[ fig : st3 ] . being a co - planar problem ( i.e. the initial twist angle between magnetic fields is ) no rotational mode can actually appear .two slow shocks delimiting a high pressure constant density region in the center follow behind .although no contact wave forms , the resolution across the slow shocks noticeably improves changing from hll to hllc and from hllc to hlld , see fig .[ fig : st3 ] or the enlargement of the central region shown in fig .[ fig : st3_close ] . the resolution across the outermost fast shocks is essentially the same for all solvers .the spurious density undershoot at the center of the grid is a notorious numerical pathology , known as the wall heating problem , often encountered in godunov - type schemes .it consists of an undesired entropy buildup in a few zones around the point of symmetry .our scheme is obviously no exception as it can be inferred by inspecting see fig .[ fig : st3 ] .surprisingly , we notice that error hlld performs slightly better than hllc .the numerical undershoots in density , in fact , are found to be ( hlld ) and ( hllc ) .the hll solver is less prone to this pathology most likely because of the larger numerical diffusion , see the left panel close - up of fig .[ fig : st3_close ] . errors ( for ) are computed using the exact solution available from which is free from the pathology just discussed . as shown in the bottom left panel of fig .[ fig : error ] , hlld performs as the best numerical scheme yielding , at the largest resolution employed ( zones ) , l-1 norm errors of to be compared to and of hllc and hll , respectively .the cpu times for the different solvers on this problem follow the proportion . on computational zones .the panels are structured in a way similar to fig .[ fig : st2 ] .top panel : density , gas pressure and total pressure .mid panel : , and velocity components .bottom panel : lorentz factor and transverse components of magnetic field.,scaledwidth=50.0% ] .the left panel shows the density profile where the two slow shocks and the central contact wave are clearly visible .central and right panels display the components of velocity and magnetic field .rotational modes can be most clearly distinguished only with the hlld solver at and .,scaledwidth=50.0% ] ) for the four shock tube problems presented in the text as function of the grid resolution .the different combinations of lines and symbols refer to hlld ( solid , circles ) , hllc ( dashed , crosses ) and hll ( dotted , plus signs).,scaledwidth=50.0% ] the fourth shock tube test is taken from the generic alfvn " test in .the breaking of the initial discontinuous states leads to the formation of seven waves . to the left of the contact discontinuity one has a fast rarefaction wave , followed by a rotational wave and a slow shock .traveling to the right of the contact discontinuity , one can find a slow shock , an alfvn wave and a fast shock .we plot , in fig .[ fig : st4 ] , the results computed with the hlld , hllc and hll riemann solvers at , when the outermost waves have almost left the outer boundaries . 
the central structure ( ) is characterized by slowly moving fronts with the rotational discontinuities propagating very close to the slow shocks . at the resolution employed ( zones ) , the rotational and slow modes appear to be visible and distinct only with the hlld solver , whereas they become barely discernible with the hllc solver and completely blend into a single wave using the hll scheme .this is better shown in the enlargement of and profiles shown in fig .[ fig : st4_close ] : rotational modes are captured at and with the hlld solver and gradually disappear when switching to the hll scheme . at the contact wave hlld and hllcbehave similarly but the sharper resolution attained at the left - going slow shock allows to better capture the constant density shell between the two fronts .our scheme results in the smallest errors and numerical dissipation and exhibits a slightly faster convergence rate , see the plots in the bottom right panel of fig .[ fig : error ] . at low resolutionthe errors obtained with hll , hllc and hlld are in the ratio while they become as the mesh thickens .correspondingly , the cpu running times for the three solvers at the resolution shown in table [ tab : ic ] have found to scale as .this example demonstrates the effectiveness and strength of adopting a more complete riemann solver when describing the rich and complex features arising in relativistic magnetized flows .we have implemented our wave riemann solver into the framework provided by the pluto code . the constrained transport method is used to evolve the magnetic field .we use the third - order , total variation diminishing runge kutta scheme together with piecewise linear reconstruction . .panels on the left show the density map ( at ) in the plane at while panels to the right show the density in the plane at .,scaledwidth=50.0% ] but showing the total pressure in the ( left ) and ( right ) panels for the hlld solver.,scaledwidth=50.0% ] ( left ) and ( right ) axis showing the density profiles at different resolutions ( and ) and with different solvers .solid , dashed and dotted lines are used for the hlld solver whereas plus and star symbols for hll.,scaledwidth=50.0% ] we consider a three dimensional version of the standard rotor problem .the initial condition consists of a sphere with radius centered at the origin of the domain taken to be the unit cube ^ 3 ] with a shear velocity profile given by density and pressure are set constant everywhere and initialized to , , while magnetic field components are given in terms of the poloidal and toroidal magnetization parameters and as where we use , .the shear layer is perturbed by a nonzero component of the velocity , \,,\ ] ] with , while we set .computations are carried at low ( l , zones ) , medium ( m , zones ) and high ( h , zones ) resolution . at low ( l ) , medium ( m ) andhigh ( h ) resolutions .solid , dashed and dotted lines show results pertaining to hlld , whereas symbols to hll .bottom : small scale power as a function of time for the kelvin - helmholtz application test .integrated power is given by where is the complex , discrete fourier transform of taken along the direction . here is the nyquist critical frequency.,scaledwidth=50.0% ] for the perturbation follows a linear growth phase leading to the formation of a multiple vortex structure . 
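the kelvin - helmholtz setup just described can be sketched schematically as follows ; the tanh shear profile , the gaussian envelope of the seed perturbation , the two - dimensional layout and all numerical values ( shear speed , layer width , perturbation amplitude , density , pressure , lower wavenumber bound ) are stand - ins chosen for illustration , not the exact profiles and parameters of the paper , and the fourier diagnostic only mirrors the general definition of the integrated small - scale power quoted above .

```python
import numpy as np

def kh_initial_condition(nx, ny, Lx=1.0, Ly=1.0, v0=0.25, a=0.01,
                         amp=1e-3, nmodes=1):
    """Perturbed shear layer (illustrative stand-in, not the paper's exact setup)."""
    x = (np.arange(nx) + 0.5) * Lx / nx - 0.5 * Lx
    y = (np.arange(ny) + 0.5) * Ly / ny - 0.5 * Ly
    X, Y = np.meshgrid(x, y, indexing="ij")
    vx = v0 * np.tanh(Y / a)                        # shear profile
    vy = (amp * np.sin(2.0 * np.pi * nmodes * X / Lx)
          * np.exp(-(Y / (10.0 * a)) ** 2))         # seed perturbation
    rho = np.ones_like(X)                           # constant density (assumed value)
    prs = np.ones_like(X)                           # constant pressure (assumed value)
    return rho, prs, vx, vy

def smallscale_power(vy, kmin=10):
    """Power in the transverse velocity at wavenumbers kmin .. k_Nyquist.

    The transform is taken along the x direction; the result is summed over the
    retained wavenumbers and averaged over y. kmin is an assumed lower bound.
    """
    vy_hat = np.fft.rfft(vy, axis=0)        # complex discrete Fourier transform along x
    power = np.abs(vy_hat) ** 2
    return power[kmin:, :].sum() / vy.shape[1]
```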
in the high resolution ( h )case , shown in fig [ fig : kh ] , we observe the formation of a central vortex and two neighbor , more stretched ones .these elongated vortices are not seen in the computation of who employed the hll solver at our medium resolution .as expected , small scale patterns are best spotted with the hlld solver , while tend to be more diffused using the two - wave hll scheme .the growth rate ( computed as , see top panel in fig .[ fig : kh2 ] ) , is closely related to the poloidal field amplification which in turn proceeds faster for smaller numerical resistivity ( see the small sub - plot in the same panel ) and thus for finer grids .still , computations carried with the hlld solver at low ( l ) , medium ( m ) and high ( h ) resolutions reveal surprisingly similar growth rates and reach the saturation phase at essentially the same time ( ) . on the contrary , the saturation phase and the growth rate during the linear phase change with resolution when the hll scheme is employed .field amplification is prevented by reconnection events during which the field wounds up and becomes twisted by turbulent dynamics . throughout the saturation phase ( mid and right panel in fig [ fig : kh ] )the mixing layer enlarges and the field lines thicken into filamentary structures .small scale structure can be quantified by considering the power residing at large wave numbers in the discrete fourier transform of any flow quantity ( we consider the component of velocity ) .this is shown in the bottom panel of fig [ fig : kh2 ] where we plot the integrated power between and as function of time ( is the nyquist critical frequency ) .indeed , during the statistically steady flow regime ( ) , the two solvers exhibits small scale power that differ by more than one order of magnitude , with hlld being in excess of ( at all resolutions ) whereas hll below . in terms of cpu time , computations carried out with hlld ( at medium resolution ) were slower than hll . at the resolution of points per beam radius . in clockwise direction , starting from the top right quadrant : density logarithm , gas pressure logarithm , thermal to total pressure ratio and component of magnetic field .the color scale has been normalized such that the maximum and minimum values reported in each subplots correspond to and .,scaledwidth=50.0% ] \times[10,18] ] and ] .we choose , , thus corresponding to a jet close to equipartition .the external environment is initially static ( ) , heavier with density and threaded only by the constant longitudinal field .pressure is set everywhere to the constant value .we carry out computations at the resolutions of and zones per beam radius ( ) and follow the evolution until . the snapshot in fig .[ fig : jet ] shows the solution computed at at the highest resolution . the morphological structure is appreciably affected by the magnetic field topology and by the ratio of the magnetic energy density to the rest mass , .the presence of a moderately larger poloidal component and a small poynting flux favor the formation of a hammer - like structure rather than a nose cone ( see * ? ? ?* ; * ? ? ?at the termination point , located at , the beam strongly decelerates and expands radially promoting vortex emission at the head of the jet . close to the axis, the flow remains well collimated and undergoes a series of deceleration / acceleration events through a series of conical shocks , visible at . 
behind these recollimation shocks, the beam strongly decelerates and magnetic tension promotes sideways deflection of shocked material into the cocoon .the ratio ( bottom left quadrant in fig [ fig : jet ] ) clearly marks the kelvin - helmholtz unstable slip surface separating the backflowing , magnetized beam material from the high temperature ( thermally dominated ) shocked ambient medium . in the magnetically dominated region turbulencedissipate magnetic energy down to smaller scales and mixing occurs .the structure of the contact discontinuity observed in the figures does not show suppression of kh instability .this is likely due to the larger growth of the toroidal field component over the poloidal one .however we also think that the small density ratio ( ) may favor the growth of instability and momentum transfer through entrainment of the external medium . for the sake of comparison, we also plot ( fig [ fig : jet2 ] ) the magnitude of the poloidal magnetic field in the region ] where turbulent patterns have developed . at the resolution of points per beam radius, hlld discloses the finest level of small scale structure , whereas hll needs approximately twice the resolution to produce similar patterns .this behaviour is quantitatively expressed , in fig [ fig : jet3 ] , by averaging the gradient over the volume .roughly speaking , hll requires a resolution that of hlld to produce pattern with similar results .a five - wave hlld riemann solver for the equations of relativistic magnetohydrodynamics has been presented .the solver approximates the structure of the riemann fan by including fast shocks , rotational modes and the contact discontinuity in the solution .the gain in accuracy comes at the computational cost of solving a nonlinear scalar equation in the total pressure . as such, it better approximates alfvn waves and we also found it to better capture slow shocks and compound waves .the performance of the new solver has been tested against selected one dimensional problems , showing better accuracy and convergence properties than previously known schemes such as hll or hllc .applications to multi - dimensional problems have been presented as well .the selected tests disclose better resolution of small scale structures together with reduced dependency on grid resolution .we argue that three dimensional computations may actually benefit from the application of the proposed solver which , albeit more computationally intensive than hll , still allows to recover comparable accuracy and resolution with a reduced number of grid zones . indeed , since a relative change in the mesh spacing results in a factor in terms of cpu time , this may largely favour a more sophisticated solver over an approximated one .this issue , however , need to receive more attention in forthcoming studies .anile , m. , & pennisi , s. 1987 , ann . inst .henri poincar , 46 , 127 anile , a. m. 1989 , relativistic fluids and magneto - fluids ( cambridge : cambridge university press ) , 55 balsara , d. s. 2001 , apjs , 132 , 83 berger , m. j. and colella , p. j. comput .phys . , 82 , pp .64 - 84 , 1989 .brio , m. , & wu , c .- c .1988 , j. comput .phys . , 75 , 400 bucciantini , n. , & del zanna , l. 2006 , astronomy & astrophysics , 454 , 393 s.f .davis , siam j. sci .statist . comput . 9 ( 1988 ) 445 .einfeldt , b. , munz , c.d . , roe , p.l . , and sj " ogreen , b. 1991 , j. comput .phys . , 92 , 273 del zanna , l. , bucciantini , n. , & londrillo , p. 
2003 , astronomy & astrophysics , 400 , 397 ( dzbl ) del zanna , l. , zanotti , o. , bucciantini , n. , & londrillo , p. 2007 , astronomy & astrophysics , 473 , 11 gammie , c. f. , mckinney , j. c. , & tth , g. 2003 , the astrophysical journal , 589 , 444 gehmeyr , m. , cheng , b. , & mihalas , d. 1997 , shock waves , 7 , 255 giacomazzo , b. , & rezzolla , l. 2006 , journal of fluid mechanics , 562 , 223 gurski , k.f .2004 , siam j. sci .comput , 25 , 2165 harten , a. , lax , p.d . , and van leer , b. 1983 , siam review , 25(1):35,61 honkkila , v. , & janhunen , p. 2007, journal of computational physics , 223 , 643 jeffrey a. , taniuti t. , 1964 , non - linear wave propagation . academic press , new york keppens , r. , meliani , z. , van der holst , b. , & casse , f. 2008 , astronomy & astrophysics , 486 , 663 koldoba , a. v. , kuznetsov , o. a. , & ustyugova , g. v. 2002 , mnras , 333 , 932 komissarov , s. s. 1997 , phys .a , 232 , 435 komissarov , s. s. 1999 , mnras , 308 , 1069 leismann , t. , antn , l. , aloy , m. a. , mller , e. , mart , j. m. , miralles , j. a. , & ibez , j. m. 2005 , astronomy & astrophysics , 436 , 503 li s. , 2005 , j. comput .344 - 357 lichnerowicz , a. 1976 , journal of mathematical physics , 17 , 2135 lichnerowicz , a. 1967 , relativistic hydrodynamics and magnetohydrodynamics , new york : benjamin , 1967 mignone , a. , & bodo , g. 2005 , mnras , 364 , 126 mignone , a. , massaglia , s. , & bodo , g. 2005 , space science reviews , 121 , 21 mignone , a. , & bodo , g. 2006 , mnras , 368 , 1040 ( mb ) mignone , a. , bodo , g. , massaglia , s. , matsakos , t. , tesileanu , o. , zanni , c. , & ferrari , a. 2007 , astrophysical journal supplement , 170 , 228 mignone , a. , & mckinney , j. c. 2007 , mnras , 378 , 1118 t. miyoshi , k. kusano , k. , j. comp . phys . 208 ( 2005 ) 315 ( mk ) noh , w.f .1987 , j. comput .phys . , 72,78 press , w. , s. teukolsky , w. vetterling , and b. flannery ( 1992 ) .numerical recipes in c ( 2nd ed . ) .cambridge , uk : cambridge university press .romero , r. , mart , j. m. , pons , j. a. , ibez , j. m. , & miralles , j. a. 2005 , journal of fluid mechanics , 544 , 323 rossi , p. , mignone , a. , bodo , g. , massaglia , s. , & ferrari , a. 2008 , astronomy & astrophysics , 488 , 795 toro , e. f. , spruce , m. , and speares , w. 1994 , shock waves , 4 , 25 toro , e. f. 1997 , riemann solvers and numerical methods for fluid dynamics , springer - verlag , berlin van der holst , b. , keppens , r. , & meliani , z. 2008 , arxiv:0807.0713left and right states across a rotational discontinuity can be found using the results outlined in [ sec : alfven ] .more specifically , we construct a family of solutions parameterized by the speed of the discontinuity and one component of the tangential field on the right of the discontinuity . our procedure can be shown to be be equivalent to that of .specifically , one starts by assigning on the left side of the front ( ) together with the speed of the front , .note that can not be freely assigned but must be determined consistently from eq .( [ eq : kvb ] ) . expressing ( ) in terms of and and substituting back in the component of ( [ eq : kvb ] ), one finds that there are two possible values of satisfying the quadratic equation where the coefficients of the parabola are and with , and being the lorentz factor .the transverse components of are computed as on the right side of the front , one has that , , , and are the same , see [ sec : alfven ] . 
since the transverse field is elliptically polarized , there are in principle infinitely many solutions and one has the freedom to specify , for instance , one component of the field ( say ) . the velocity and the component of the field can be determined in the following way . first , use equation ( [ eq : vc ] ) to express ( ) as a function of for given and . using the jump condition for the density together with the fact that is invariant , we solve the nonlinear equation whose root gives the desired value of . | we present a five - wave riemann solver for the equations of ideal relativistic magnetohydrodynamics . our solver can be regarded as a relativistic extension of the five - wave hlld riemann solver initially developed by miyoshi and kusano for the equations of ideal mhd . the solution to the riemann problem is approximated by a five - wave pattern , comprised of two outermost fast shocks , two rotational discontinuities and a contact surface in the middle . the proposed scheme is considerably more elaborate than in the classical case since the normal velocity is no longer constant across the rotational modes . still , proper closure of the rankine - hugoniot jump conditions can be attained by solving a nonlinear scalar equation in the total pressure variable which , for the chosen configuration , has to be constant over the whole riemann fan . the accuracy of the new riemann solver is validated against one dimensional tests and multidimensional applications . it is shown that our new solver considerably improves over the popular hll solver and the recently proposed hllc schemes . hydrodynamics - mhd - relativity - shock waves - methods : numerical |
a dozen years ago , it was shown that introducing the 1918 hemagglutinin ( ha ) confers enhanced pathogenicity in mice to recent human viruses that are otherwise non - pathogenic in this host . moreover , like the 1918 one , these recombinant viruses infect the entire lung and induce high levels of macrophage - derived chemokines and cytokines , which results in infiltration of inflammatory cells and severe haemorrhage . in macaques , the whole 1918 virus causes a highly pathogenic respiratory infection that culminates in acute respiratory distress and a fatal outcome . although the 1918 polymerase genes were found essential for optimal virulence , replacing the 1918 ha by a contemporary human one proved enough for abolishing the lethal outcome of the 1918 virus infection in mice , further underlining the key role of the 1918 ha in the deadly process . the first 1918 ha sequences were obtained in 1999 from formalin - fixed , paraffin - embedded lung tissue samples prepared during the autopsy of victims of the influenza pandemic , as well as from a frozen sample obtained by _ in situ _ biopsy of the lung of a victim buried in permafrost since 1918 . since then , the number of ha sequences determined each year has grown dramatically , jumping from in the nineties to ,000 per year nowadays . the goal of the present work is to take advantage of this wealth of data for identifying features that are unique to the 1918 sequence , the underlying hypothesis being that they may prove responsible for the unique behaviour of viruses displaying the 1918 ha . different ha protein sequences were retrieved ( as of 2016 ) from the ncbi influenza virus resource , sequences coming from laboratory viral strains being disregarded . multiple pairwise sequence alignment was performed using blast , version 2.2.19 , taking as a reference the long h1 sequence from virus a / thailand / cu - mv10/2010 . mview , version 1.60.1 , was used for converting the blast output into a standard multiple sequence alignment ( msa ) ; a code sketch of the identity scan over such an msa is given at the end of this article . in this msa , 3365 different amino - acid residues are observed at least 20 times , that is , 6.2 per position along the sequence , on average . since 1999 , a dozen 1918 ha sequences have been determined , at least partially . they differ , at most , by a couple of mutations . one of them was chosen as a representative , namely , the sequence found in pdb structure 4eef ( with h3 subtype residue numbering ) . as shown in figure [ fig : idoft ] , known ha sequences of post-1919 viruses are _ all _ less than 95% identical to the 1918 one , that is , complete ones differ from the 1918 sequence by at least 25 amino - acid substitutions . strikingly , after 1919 , ha sequences of human viruses are less than 91.4% identical to the 1918 ha , a % level of sequence identity being observed in the thirties as well as more recently , though in a few instances only .
on the other hand , ha sequences of avian viruses more than 93% identical to the1918 ha have been observed each year since 2005 , suggesting that a selection pressure favoring 1918 ha - like sequences is at work in avian species , in line with the hypothesis of an avian origin for the 1918 - 1919 pandemic .moreover , among the 41 post-1950 sequences that are more than 93% identical to the 1918 ha , 35 ( 85% ) come from duck species , further pinpointing aquatic birds as a possible reservoir .._number of post-1919 hemagglutinin sequences with same residue as the 1918 sequence ._ top : residues found in less than 50 human h1 sequences .bottom : key residues involved in receptor binding .bold : residue index of an highly conserved residue , that is , a residue found in more than 95% of the h1 sequences . subtype residue numbering ; subunit ; subunit .[ cols="^,^,^,^,^,^ " , ] however , the 1918 ha sequence is a singular one .for instance , 17 amino - acid residues in this sequence are found in less than 1% of other human h1 sequences .the ten less frequent ones are shown on top of table [ tbl : mutations ] .most of them are frequent in avian h1 sequences , or often found in sequences of other ha subtypes ( last column ) . as a striking exception , after 1919 , gly 188 has _ not _ been observed again in human h1 sequences .it has also _ not _ been observed in avian ones . as a matter of fact, it has only been observed in 47 h1 sequences , all of them from swine , a single time in 2003 , once each year between 2009 and 2012 , and several times each year since then . in human ha of other subtypes , it has been observed 11 times , in sequences from h3n2 or h5n1 viruses . since 1919 , gly 188 has _ only _ been observed 82 times , the first time in 2000 , in the sequence of an avian h9 ha .residue 188 is located at the n - terminus of the `` 190-helix '' , which is involved in the ha receptor binding site , but it does not interact directly with the receptor .three residues are usually observed at this position , namely , serine ( 41% ) , isoleucine ( 33% ) and threonine ( 21% ) .interestingly , proline which , like glycine , can have a direct impact on the secondary structure , the folding or the stability of a protein , is also rarely observed .this suggests that , taken alone , gly 188 is likely to be deleterious .however , since the 1918 virus proved efficient , one or several compensatory mutations have to be present in its ha sequence .* why has this mutation been overlooked ? * after 1919 , gly 188 has _ not _ been found again in human h1 sequences .overall , it has been found in _ only _ 0.2% of all known ha sequences .the reason why this singular mutation seems to have been overlooked is probably the following one : in the eleven 1918 - 1919 ha sequences known so far , gly 188 has been found ten times . in other words , there is an exception , namely , the ha sequence of virus a / london/1/1918 .* how may it be involved in the deadly process ? * gly 188 is located between his 183 and asp 190 , two key residues of the receptor binding site ( figure [ fig : hook ] ) .his 183 is highly conserved , being found in 99.9% of h1 sequences .it is thus likely to be involved in the specific recognition of the sialic acid moiety of the ha receptor . 
on the other hand, asp 190 is well conserved in h1 ha ( 88% ) but much less in all ha ( 56% ) , being frequently replaced by glu ( 32% ) .this suggests that it is involved in more subtle aspects of the recognition .indeed , a d190e mutation in the 1918 ha results in a preference for the sialic acid ( avian ) receptor .thus , given the inherent flexibility of gly residues , introducing gly at position 188 may allow for alternative orientations of asp 190 and , as a consequence , for the recognition of other conformations of the sialic acid moiety , or to sialic acid moities linked to di - saccharides other than gal - glcnac . though a glycan microarray analysis confirmed a specific recognition of sialic acid receptors by 1918 ha , with no binding when glcnac is absent , a change in its repertoire of ha receptors could indeed explain why the 1918 virus proved so deadly .gly 188 has rarely been observed in ha sequences ( table [ tbl : mutations ] ) .since it is located within a key motif of the ha receptor binding site ( figure [ fig : hook ] ) , this mutation may have an impact on receptor recognition and specificity .this , in turn , could explain why the 1918 ha confers enhanced pathogenicity .it is thus important to check the effects of this mutation on receptor function .meanwhile , it may prove important to monitor this mutation , noteworthy in swine viruses since it has been observed in swine h1 sequences each year since 2009 .reid , ah , fanning , tg , hultin , jv , taubenberger , jk ( 1999 ) origin and evolution of the 1918 `` spanish '' influenza virus hemagglutinin gene ._ proceedings of the national academy of sciences _ 96:16511656 .gunasekaran , k , nagarajaram , h , ramakrishnan , c , balaram , p ( 1998 ) stereochemical punctuation marks in protein structures : glycine and proline containing helix stop signals ._ journal of molecular biology _ 275:917932 .krieger , f , mglich , a , kiefhaber , t ( 2005 ) effect of proline and glycine residues on dynamics and barriers of loop formation in polypeptide chains ._ journal of the american chemical society _ 127:33463352 .nicholson , h , tronrud , d , becktel , w , matthews , b ( 1992 ) analysis of the effectiveness of proline substitutions and glycine replacements in increasing the stability of phage t4 lysozyme ._ biopolymers _ 32:14311441 .zhang , w et al . (2013 ) molecular basis of the receptor binding specificity switch of the hemagglutinins from both the 1918 and 2009 pandemic influenza a viruses by a d225 g substitution ._ journal of virology _ 87:59495958 .stevens , j et al .( 2006 ) glycan microarray analysis of the hemagglutinins from modern and pandemic influenza viruses reveals different receptor specificities . _journal of molecular biology _ 355:11431155 . | the influenza pandemic of 1918 - 1919 killed at least 50 million people . the reasons why this pandemic was so deadly remain largely unknown . however , it has been shown that the 1918 viral hemagglutinin allows to reproduce the hallmarks of the illness observed during the original pandemic . thanks to the wealth of hemagglutinin sequences accumulated over the last decades , amino - acid substitutions that are found in the 1918 - 1919 sequences but rare otherwise can be identified with high confidence . such an analysis reveals that gly188 , which is located within a key motif of the receptor binding site , is so rarely found in hemagglutinin sequences that , taken alone , it is likely to be deleterious . 
monitoring this singular mutation in viral sequences may help prevent another dramatic pandemic . |
there have been many development in statistics in which geometry , represented by an extension from euclidean to more general spaces , has proved fundamental .thus , reproducing kernel hilbert spaces is a part of gaussian process methods , sobolev spaces are used for , and besov spaces , for wavelets .differential geometry , riemannian manifolds , curvature and geodesics are at the core of information geometry , shape analysis and manifold learning .this paper is concerned with cat(0 ) , and , more generally , cat(k ) .although related to riemannian manifolds ( denotes the riemannian curvature ) these metric spaces have a less rigid structure and are qualitatively different from the spaces mentioned above .trees are among the first examples in which cat(0 ) properties have been used in statistics and bioinformatics : the set of trees ( the tree space ) becomes a geodesic metric space ( to be defined ) when for each tree , the edge lengths are allocated to entries in a vector of sufficient dimension to capture all the tree structures .then , the metric is the euclidean geodesic distance on a simplicial complex called a fan " which is induced by the tree structure : the geodesics are the shortest piecewise linear paths in the fan .such spaces have already been shown to be cat(0 ) by gromov s theory . for a random variable on a metric space endowed with a metric the general _ intrinsic mean _ is defined by the empirical intrinsic mean based on data , sometimes called the frchet mean , is defined as where for euclidean space , , the sample mean . in general , is not necessarily convex and the means , , are not unique .figure [ fig : hyperboloid ] shows that the curvature can affect the property of . in particular , for so - called cat(0 ) spaces , which ( trivially ) include euclidean spaces , the intrinsic means are unique . for data on( a ) a hyperboloid ( curvature ) , ( b ) a plane ( ) and ( c ) a sphere ( ) .the bluer represents the smaller value of . only for the sphere, has multiple minima .one hundred data points are sampled independently from a `` gaussian mixture , '' whose density is proportional to where and .,title="fig:",width=166 ] + ( a ) hyperboloid for data on ( a ) a hyperboloid ( curvature ) , ( b ) a plane ( ) and ( c ) a sphere ( ) .the bluer represents the smaller value of . only for the sphere, has multiple minima .one hundred data points are sampled independently from a `` gaussian mixture , '' whose density is proportional to where and .,title="fig:",width=166 ] + ( b ) plane for data on ( a ) a hyperboloid ( curvature ) , ( b ) a plane ( ) and ( c ) a sphere ( ) . the bluer represents the smaller value of . 
only for the sphere, has multiple minima .one hundred data points are sampled independently from a `` gaussian mixture , '' whose density is proportional to where and .,title="fig:",width=166 ] + ( c ) sphere even when the mean is not unique , the function can yield useful information , for example about clustering .we can also define second - order quantities : and a key concept in the study of these issues is that the metrics are global geodesic metrics , that is metrics based on the shortest path between points measured by integration along a path with respect to a local metric .the interplay between the global and the local will concern us to a considerable extent .for some intuition , consider a circle with arc length as distance and the uniform measure .the frchet intrinsic mean is the whole circle .we can say , informally , that the non - uniqueness arises because the space is not cat(0 ) .although the curvature is zero in the riemannian geometry sense , there are other considerations related to `` diameter '' , which prevents it from having the cat(0 ) property .if we use the embedding euclidean distance rather than the arc distance , we have the _ extrinsic mean _ , but again , we obtain the whole perimeter . see for further discussion on the intrinsic and extrinsic means .the empirical case is also problematic .for example , two data points on a diameter , with arc length as distance , give intrinsic means on the perpendicular diameter .simple geometry shows that the empirical extrinsic mean is the whole circle .we also cover the more general cat( ) spaces giving some new results related to diameter " as well as conditions for the uniqueness of intrinsic means not requiring the spaces to be cat(0 ) .the general form of depends , here , on three parameters , and it can be written in compact form : where the function and the construction of are given below .once we have introduced this new class of metrics , variety of statistics can be generalised ; intrinsic mean , variance , clustering ( based on local minima of ) . for classification problems , we can select an appropriate metric by cross - validation .theoretically , we have the opportunity to apply well studies areas of geometry compared with methods based on selection of a good `` loss function . '' in table [ table : generalised_statistics ] , we summarise such generalised statistics . ' '' '' + metrics & & ' '' '' + intrinsic mean & & ' '' '' + variance & & ' '' '' + clustering function & & ' '' '' + there are many ways to transform one metric into another , regardless of whether they are geodesic metrics .a straightforward way is to use a concave function such that given a metric , the new metric is .this is plausible if we use non - convex , which are useful , as will be explained , in clustering and classification .such concave maps are often interpreted as loss functions , but we will consider them in terms of a change of metric which may lead to selection using geometric concepts .this is particularly true for the construction of in this paper .the basic definition and construction from a geodesic metric space to the special geodesics based on accumulation of density are given in the next section , together with the definition of a cat(0 ) space .in section 3 , we first show that means and medians in simple one - dimensional statistics can be placed into our framework . 
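The circle example above is easy to check numerically. The short sketch below evaluates the empirical Fréchet function under the arc-length metric on a grid of angles; the grid resolution and the particular data configurations (two antipodal points versus three points on a short arc) are illustrative choices, not taken from the text. Two antipodal points give two minimizers, sitting on the perpendicular diameter, while the clustered data give a unique intrinsic mean.

import numpy as np

def arc_dist(a, b):
    """Geodesic (arc-length) distance between angles a and b on the unit circle."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def frechet_function(grid, data):
    """Empirical Frechet function F(y) = (1/n) sum_i d(y, x_i)^2 on a grid of angles."""
    return np.mean(arc_dist(grid[:, None], data[None, :]) ** 2, axis=1)

grid = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)

antipodal = np.array([0.0, np.pi])                 # two data points on a diameter
F1 = frechet_function(grid, antipodal)
print("antipodal data, minimizers near:", np.round(grid[F1 < F1.min() + 1e-7], 3))

clustered = np.array([0.1, 0.2, 0.3])              # data concentrated on a short arc
F2 = frechet_function(grid, clustered)
print("clustered data, minimizer near :", np.round(grid[F2 < F2.min() + 1e-7], 3))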
because geodesics themselves are one - dimensional paths , this should provide some essential motivation .the -metric is obtained by a local dilation .our computational shortcut is to use _ empirical _ graphs , whose vertices are data points .we will need , therefore , to define empirical geodesics .we start with a natural geodesic defined via a probability density function in which the distance along a path is the amount of density accumulated " along that path .then , an empirical version is defined whenever a density is estimated .we might have based the paper on kernel density estimates ; instead , we have adopted a very geometric construction based on objects such as the the delaunay complex and its skeleton graph . in section 4 , the metric is introduced .it is based on a function derived from a geodesic metric via shrinking , pointwise , to an abstract origin ; that is to say an abstract cone is attached . the smaller the value of , the closer to the origin ( apex ) .the geometry of the two dimensional case drives our understanding of the general case because of the one - dimensional nature of geodesics .we prove that , for finite , the embedding cone is cat( ) with smaller than the original space .we cover the more general cat( ) spaces , giving some new results related to diameter " , in section 5 , including conditions for the uniqueness of intrinsic means not requiring the spaces to be cat(0 ) .section 6 provides a summary of the effect of changing and . after some discussion on the selection of and in section 7 , section 8 covers some examples .the fundamental object in this paper is a geodesic metric space .this is defined in two stages .first , define a metric space with base space and metric .sometimes , will be a euclidean space of dimension , containing the data points , but it may also be some special object such as a graph or manifold .second , define the length of a ( rectilinear ) path between two points and the geodesic connecting and as the shortest such path .this minimal length defines a metric , and the space endowed with the geodesic metric is called the geodesic metric space , .the interplay between and will be critical for this paper , and , as mentioned , we will have a number of ways of constructing . for data points in , the empirical intrinsic ( frchet ) mean is there are occasions when it is useful to represent as a sub - manifold of a larger space ( such as euclidean space ) with its own metric .we can then talk about the extrinsic mean : typically , the intrinsic mean is used as an alternative when the geodesic distance , is hard to compute .the difficultly in considering the intrinsic mean in is that it may not lie in the original base space .this leads to a third possibility , which is to project it back to , in some way , as an approximation to the intrinsic mean ( which may be hard to compute ) .we will discuss this again in section 4 .cat(0 ) spaces , which correspond to non - positive curvature riemannian spaces , are important here because their intrinsic means are unique .the cat(0 ) property is as follows .take any three points in a geodesic metric space and consider the geodesic triangle " of the points based on the geodesic segments connecting them .construct a triangle in euclidean 2-space with vertices , called the comparison triangle , whose euclidean distances , , are the same as the corresponding geodesic distances just described : , etc . 
on the geodesic triangleselect a point on the geodesic edge between and and find the point on the edge of the euclidean triangle such that .then the cat(0 ) condition is that for all and all choices of : for a cat(0 ) space ( i ) there is a unique geodesic between any two points , ( ii ) the space is contractible , in the topological sense , to a point and ( iii ) the intrinsic mean in terms of the geodesic distance is unique .cat( ) spaces , a generalization of cat(0 ) spaces , are explained in section [ sec : cat_k ] .let be a -dimensional euclidean random variable absolutely continuous with respect to the lebesgue measure , with density .let \} ] between and , and changing essentially changes the local curvature . roughly speaking ,when is more negative ( positive ) , the curvature is more negative ( positive ) . in the next subsection, we look at the one - dimensional case .although this case is elementary , good intuition is obtained by rewriting the standard version in terms of a geodesic metric .assume that is a continuous univariate random variable with probability density function and cumulative distribution function ( cdf ) .the mean achieves .here we are using the euclidean distance : .the median is defined by . on a geometric basis, we can say that achieves , where we use a metric that measures the amount of probability between and : carrying out the calculations : which achieves a minimum of at , as expected .now let us consider the sample version .let be the order statistics , which we assume are distinct .one of the first exercises in statistics is to show that minimises , with respect to .for the median , first consider using the first of the two approaches with the empirical cdf .we obtain various definitions depending on our definition of and , or just using convention . using the metric approachthe natural metric is to take with the standard definition of .applied to distinct data points this is equal to .for an arbitrary where .then , is achieved at when is odd and at when is even .another approach for the median would be to take a piecewise linear approximation to which is equivalent to having a density that is proportional to in the interval .then , the metric is and is achieved at when is odd and at , when is even .we can think of this last result in another way .consider the points as points in ] . for euclidean distances greater than , returns a constant distance of unity .the metric has the effect of downsizing large distances to unity . because , as will soon be seen , can be recognized as a geodesic metric of a cone embedding , we refer to the mean as the _ -extrinsic mean_. we consider a scheme in which the real line , or part of it , is mapped into the unit circle , where it can be represented by an angle . in the unit disk , ,a point is represented by polar coordinates : .we consider the top half of the unit disk , for which , namely we will now give a rule for travelling from a point in to the point p: . 1 .if , travel in a straight line to p. this is the geodesic for the euclidean metric 2 . if , travel first in straight line to the origin and then in a straight line from to p. we now apply a similar rule to points and . 1if , we take the euclidean distance 2 . if , we take , indicating a route to the origin and out to the other point .now , consider two points at and on the circumference of the circle ; then , in case ( d1 ) above , the distance between them is when , we obtain the value 2 . 
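The two travel rules (d1)-(d2) above are the standard euclidean-cone distance with the angular separation capped at pi. The sketch below only illustrates that mechanism; the cap at pi, the unit radii and the way the scale parameter enters (here simply as a rescaling of the angle by 1/beta) are assumptions made for concreteness, and the distance saturates at r1 + r2 = 2 rather than at the rescaled value 1 obtained in the text after removing the factor of 2.

import numpy as np

def cone_dist(r1, r2, theta):
    """Euclidean-cone distance between points at radii r1, r2 with angular separation theta:
    a straight chord for theta <= pi, the route through the apex (r1 + r2) otherwise;
    capping the angle at pi in the cosine formula reproduces both cases."""
    return np.sqrt(r1 ** 2 + r2 ** 2 - 2.0 * r1 * r2 * np.cos(min(theta, np.pi)))

def d_beta(x, y, beta):
    """Illustrative beta-type metric on the line: place x and y at unit radius with
    angular separation |x - y| / beta and take the cone distance."""
    return cone_dist(1.0, 1.0, abs(x - y) / beta)

beta = 1.0
for gap in [0.1, 1.0, np.pi, 10.0, 100.0]:
    print(f"|x - y| = {gap:7.2f}   d_beta = {d_beta(0.0, gap, beta):.4f}")
# Small separations behave like the euclidean distance, while separations beyond
# pi*beta saturate at 2: large distances are flattened, as described above.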
now , keeping fixed , let us rescale the distance inside the argument .in terms of the two points just described , we place the first point at p so that and scale the second with a factor .then , without changing the notation , and removing the factor of 2 , we have .then , implies that we may use the euclidean distance for a wider interval .case ( d2 ) above corresponds to the distance before rescaling and the new version achieves the value 1 .this corresponds precisely to the metric of this section .the extrinsic mean keeps the mean within the original data space but uses the metric .we will see below that the metric involves , even in the general case , an extension of the original space by a single dimension .this distinguishes it from the embeddings for extrinsic means used in the literature , which involve high - dimensional euclidean space , for example the case of the tree space .controlling , as will be seen below , controls the value of when the embedding space is considered as a cat( ) space .we have an indirect link between clustering and cat( ) spaces . as decreases while the embedding space becomes more cat(0 ) ( decreasing ) the original space becomes less cat(0 ) .this demonstrates , we believe , the importance of the cat( ) property in geodesic - based clustering . in euclidean space ,the standard euclidean distance dose not exhibit multiple local means because the space is trivially cat(0 ) .however , by using the -metric with a sufficiently small , the space can have multiple local means , as shown in figure [ fig:1dim_beta ] . against for different values of .,title="fig:",width=207 ] + ( a ) density function against for different values of .,title="fig:",width=207 ] + ( b ) + against for different values of .,title="fig:",width=207 ] + ( c ) against for different values of .,title="fig:",width=207 ] + ( d ) figure [ fig:1dim_beta_trajectory ] shows a plot of the local minima of against for the same samples as those in figure [ fig:1dim_beta ] . against .,width=264 ]the above construction is a special case of a general construction that applies to any geodesic metric space and hence to those in this paper .let be a geodesic metric space with a metric .a metric cone with is a cone \slash \mathcal{x } \times \{0\} ] spanned by .this cone can be isometrically embedded into an `` extended unit circular sector '' , i.e. a covering ,\theta \in ( -\infty , \infty)\}/ \{(0,\theta)\mid \theta \in ( -\infty , \infty)\} ] .then and are also mapped into the extended unit circular sector ; the distance for corresponds to the case ( d2 ) of a disk if we set and .this corresponds to the length of the blue line path in figure [ fig : metric - cone - explanation ] ( b1 ) and ( b2 ) .for further details on metric cones , refer to ..,title="fig:",height=102 ] .,title="fig:",height=94 ]the following result indicates that the metric cone space preserves the cat(0 ) property of the original space and the smaller values of continue this process .see section [ sec : versus ] for more details .[ cone ] 1 .if is a cat(0 ) space , the metric cone is also cat(0 ) for every .if is cat(0 ) , is also cat(0 ) for .if is cat( ) for , becomes cat(0 ) for . the proof is given in appendix [ proof - cone ] .a cat( ) space for is a geodesic metric space satisfying the following cat( ) condition .take any geodesic triangle whose perimeter is less than and select any point on a geodesic .let be a comparison triangle , which has the same edge length as , on a surface with a constant curvature , i.e. 
a sphere with radius for , a plane for and a hyperbolic space for . set on satisfying .then the cat( ) condition is that for all and all choices of , .thus every cat( ) space is a cat( ) space for .every metric graph is cat( ) for if and only if there is no loop shorter than because every metric tree is cat(0 ) and therefore cat( ) for .let be a geodesic metric space and fix it throughout this section .the diameter of a subset is defined as the length of the longest geodesic in .we define classes , and as follows .1 . : the class of subsets such that the geodesic distance function is strictly convex on for each .here , `` convex '' means geodesic convex , i.e. a function on is convex iff for every geodesic on , is convex with respect to .2 . for ] .if , is a strictly convex function on for each ; hence , is strictly convex for any probability measure whose support is in and non - empty .thus , .next , assume that and ; then , there are at least two different geodesics , and , between and .thus , there are two points and in such that there is no intersection of and between and . then , the mid points of and on each geodesic become intrinsic -means of the measure with two equal point masses on and .this implies that .let and be the largest values ( including ) such that every subset whose diameter is less than the value belongs to and , respectively .then , evidently from lemma [ lem : k1 - 1 ] , for .note that if is cat(0 ) , .in general , the following theorem holds .[ diam1 ] [ thm : diameter ] * if is cat( ) , . *if is cat( ) , .* if is a surface with a constant curvature , . the proof is given in appendix [ proof - diameter ] . by theorem [ thm : diameter](1 ) , , a lower curvature gives a wider area where the intrinsic -mean is unique . according to theorem [ thm : diameter](3 ) , this lower bound for is the best universal upper bound for any with cat( ) property . for , is bounded above by where is an increasing function of , as proved in appendix [ proof - gamma ] .this upper bound shows that the parameter plays a role in controlling the uniqueness of the mean , but it does not in euclidean space , where the -mean functions are always convex .combining the three deformations by , and , a novel frchet mean is proposed : where we now review three main areas : ( i ) cat(0 ) and cat(k ) aspects , ( ii ) uniqueness of means and ( iii ) robustness against outlying data values . in this discussionwe will concentrate on cases that arise from the euclidean graph , considered as a basic metric space from which we can construct a geodesic metric space using , or as the geodesic space for the metric cone extension using the approach .note that this is a restriction of a very general approach that would start with a density or a smooth empirical density ; first , construct the geodesic metric , and then , possibly apply the -cone method to that geodesic metric space .however we shall prefer the graph approach because it leads to tractable mathematics and much faster computation ; moreover , we are able to capture some of the structure of the data using the complete euclidean graph , the delaunay graph and the gabriel graph .as mentioned above , we restrict ourselves to the case where the base space is the euclidean graph , i.e. .it holds that as , all geodesics end up on the minimal spanning tree that is cat(0 ) .as increases at some point , it ceases to be cat(0 ) but will be cat(k ) for some , which depends on and indeed increases with . 
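The minimal-spanning-tree limit mentioned above can be checked with a small experiment. Since the precise form of the alpha-dilation is not fully legible in this copy, the sketch assumes the natural choice of raising each euclidean edge length of the complete graph to the power alpha; the sample of twenty uniform points and the values of alpha are illustrative. With that convention, the fraction of point pairs whose alpha-geodesic is already realised on the minimal spanning tree grows towards one as alpha increases.

import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(20, 2))                   # sample points in the plane
E = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)    # euclidean edge lengths

def prim_mst(W):
    """Edge list of a minimum spanning tree of the complete graph with weights W (Prim)."""
    in_tree, edges = {0}, []
    while len(in_tree) < len(W):
        w, i, j = min((W[i, j], i, j) for i in in_tree
                      for j in range(len(W)) if j not in in_tree)
        edges.append((i, j))
        in_tree.add(j)
    return edges

def all_pairs(W):
    """All-pairs shortest-path distances by Floyd-Warshall."""
    D = W.copy()
    for k in range(len(D)):
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    return D

mst = prim_mst(E)
for alpha in [1.0, 2.0, 6.0]:
    W = E ** alpha                                          # assumed form of the alpha-dilation
    W_tree = np.full_like(W, np.inf)
    np.fill_diagonal(W_tree, 0.0)
    for i, j in mst:
        W_tree[i, j] = W_tree[j, i] = W[i, j]
    on_tree = np.isclose(all_pairs(W), all_pairs(W_tree))   # geodesic realised on the MST?
    print(f"alpha = {alpha}: fraction of geodesics realised on the MST = {on_tree.mean():.2f}")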
it should be stressed that the theorems on are completely general .that is they cover arbitrary metric cones based on a geodesic metric space .if we start with the euclidean graph as our geodesic space , it may not be cat(0 ) , but it can be shown that it is a cat(k ) space for some and will eventually be cat(0 ) for sufficiently small .while the space is cat(0 ) the intrinsic mean is unique and lies on the minimal spanning tree . for larger values of , there may not be a unique intrinsic mean lying on the geodesic graph , but theorem [ thm : diameter ] suggests that the mean may be unique for values of the diameter that are sufficiently small relative to . for , at and , we obtain the euclidean intrinsic mean and the median , respectively , and these , again , may not be unique . as noted above , the metric cone embedding the original data space becomes cat(0 ) for sufficiently small sufficiently small .the following is a possible strategy : starting with the euclidean graph , apply the cone method decreasing until the intrinsic mean is unique ( this may happen at a larger value of than that required to be cat(0 ) ) .then , project that unique mean back onto the original graph ; the projection is unique by construction provided that the intrinsic mean is not at the apex of the cone .instead of judging this method now we will wait for some empirical studies . in the , instead ofthe projection of the intrinsic mean in the cone back to the original geodesic space , we prefer the extrinsic mean in which the mean lies in the geodesic space but , but we use the cone metric .as we decrease we tend to get more local minima .this is the antithesis of obtaining a cat(0 ) space .because the metric becomes more concave as decreases for very small , each data point will yield a single `` local mean '' .for reasonable values of , each data cluster will have a local mean because the function is locally linear and intra - cluster distances are small .however , when the intra - cluster distances are larger , the concavity dominates , and we obtain multiple local minima .both larger values of and smaller values of make the function less convex and can lead to multiplicity of the local means .figures [ fig : alpha - beta - difference ] shows the difference between the effects caused by positive and finite . compared with the original geodesic distance ( figure [ fig : alpha - beta - difference ] ( a ) ) , both ( b ) and ( c ) have multiple local minima of the function . in ( b ) , the three small groups of samples in the top right corner are dealt with one cluster and have a unique local minimum in their midst .in ( c ) , each of the three small bunches has a local minimum .however , ( c ) tends to have local minima on a ridge of each cluster . by tuning both and as in ( d ) , we can get local minima at the centre of each cluster . for 24 samples each from normal distributions with the s.d . and 8 samples each from normal distributions with the s.d .initial graph : delaunay .the value for each is represented by the colour : red ( small ) , blue ( large ) , and minimum by a square.,title="fig:",width=207 ] + ( a ) for 24 samples each from normal distributions with the s.d . and 8 samples each from normal distributions with the s.d .initial graph : delaunay .the value for each is represented by the colour : red ( small ) , blue ( large ) , and minimum by a square.,title="fig:",width=207 ] + ( b ) + for 24 samples each from normal distributions with the s.d . 
and 8 samples each from normal distributions with the s.d .initial graph : delaunay .the value for each is represented by the colour : red ( small ) , blue ( large ) , and minimum by a square.,title="fig:",width=207 ] + ( c ) for 24 samples each from normal distributions with the s.d . and 8 samples each from normal distributions with the s.d .initial graph : delaunay .the value for each is represented by the colour : red ( small ) , blue ( large ) , and minimum by a square.,title="fig:",width=207 ] + ( d ) at the beginning of section [ onedim ] , we showed how the univariate median can be considered as an intrinsic mean with respect to a geodesic metric , in both the population case and the empirical case . because the univariate median is well known to be robust against outliers , we can expect means based on empirical geodesics to also be robust . in particular , the flattening of large distances when is small gives an indication as to where to look for robust means .this flattening idea is not new .huber introduced a function similar to our function , which is quadratic in a spherical neighbourhood of zero and flat outside .if the generalised mean is computed as a frchet mean , we should take the square root of the huber loss function .various generalisations , including a smoothed version of huber loss function , have been proposed ; see , for example , .figure [ fig : robustness ] shows a geodesic graph and the value of for 19 samples from n(0,1 ) and one outlier at .figure [ fig : robustness ] ( a ) uses ordinary geodesic distance and the sample closest to the outlier sample attains the minimum of .all of for ( b ) , for ( c ) and for ( d ) attain the minimum of around the mean of the normal distribution and we can see their robustness .it is interesting that , and can control the trade - off between uniqueness of the mean and robustness of the estimation . in euclidean spacethis trade - off is typically studied via the convexities of the loss function , whereas we are making the perhaps novel suggestion that the curvature is an appropriate vehicle . for 19 samples from n(0,1 ) and one outlier at ( far above the figure region ) .the value for each sample point is represented by the colours red ( small ) and blue ( large ) and the minimum is represented by a square.,title="fig:",width=207 ] ( a ) for 19 samples from n(0,1 ) and one outlier at ( far above the figure region ) .the value for each sample point is represented by the colours red ( small ) and blue ( large ) and the minimum is represented by a square.,title="fig:",width=207 ] ( b ) + for 19 samples from n(0,1 ) and one outlier at ( far above the figure region ) .the value for each sample point is represented by the colours red ( small ) and blue ( large ) and the minimum is represented by a square.,title="fig:",width=207 ] ( c ) for 19 samples from n(0,1 ) and one outlier at ( far above the figure region ) .the value for each sample point is represented by the colours red ( small ) and blue ( large ) and the minimum is represented by a square.,title="fig:",width=207 ] ( d ) this section , we suggest how to select and empirically from the data .first , assume that we have euclidean data ( equivalnet to and recall the basic effect of decreasing from to . 
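The robustness point made above (a flattened, Huber-like distance keeps the intrinsic mean near the bulk of the data, whereas the plain euclidean Fréchet function is dragged towards an outlier) can be reproduced in one dimension before continuing. The sample size, the position of the outlier and the flattening scale in the sketch are illustrative choices, not those behind the figure.

import numpy as np

rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(size=19), [100.0]])     # 19 inliers plus one far outlier

def frechet_argmin(data, dist):
    """Data point minimising the empirical Frechet function under the given metric."""
    F = np.array([np.mean(dist(np.abs(y - data)) ** 2) for y in data])
    return data[np.argmin(F)]

cap = 2.0                                                  # illustrative flattening scale
plain     = frechet_argmin(data, dist=lambda d: d)
flattened = frechet_argmin(data, dist=lambda d: np.minimum(d, cap))

print("mean of the 19 inliers     :", round(float(np.mean(data[:19])), 3))
print("argmin, euclidean distance :", round(float(plain), 3))      # pulled towards the outlier
print("argmin, flattened distance :", round(float(flattened), 3))  # stays near the bulk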
at , we make no change to the metric .as decreases , we lose edges from the geodesic graph .that is to say from time to time , an edge that is in a particular geodesic is discarded and every geodesic that passes through that edge then has to use an alternative route .let us assume that at ( and under mild extra conditions ) , only a single edge is removed and let be its length .let be the lengths of the edges on the new geodesic that will replace the removed edge .in addition , let there be distinct geodesics that use .it is straightforward to see that _ all _ geodesics that use will use the new arc for an interval , for sufficiently small .the total change in geodesic length is and it is continuous at the current but the first derivative changes : is typically not zero . to see this ,take the case where all the are equal .then , the change in the first derivative is in graph theory , the number of geodesics using a particular edge , in our case , is sometimes called the _ edge betweenness_.we might therefore refer to the term as the weighted betweenness .this quantity measures changes in the configuration : if and are large then a long edge with large betweenness is removed , and it is replaced by shorter edges from the current geodesic graph . if is the betweenness of an edge , the total betweenness of a graph is the sum of all the individual edge betweennesses , and the weighted version is which except for a scalar factor is the variance given by , in this paper .we shall in fact favour the use of ( , and with the above discussion in mind , we will see in examples 1 and 2 that plots of the second derivative of do indeed have pronounced peaks and there is some matching of the -values at the peaks with the analogous differential of the aggregate betweenness .section 4.1 and figure 4 are important for understanding the metric .we can summarise the material in a way that will indicate how to estimate .the first point is that provides a metric cone . in one dimension, we wrap the real line around a circle and attach the origin .then , the the metric cone is based on the euclidean metric _ inside the cone_. the enlarged space ( referred to as the embedding space ) is cat(0 ) with respect to this metric .we claim that this construction is fundamental because even in larger spaces , the geodesics are one - dimensional .every geodesic , in some sense , has its private geodesic , but with a common vertex .moreover , by theorem [ cone ] , if the base space is cat(0 ) , the embedding space is cat(0 ) , and in both cases , we have a unique intrinsic mean and our statistics are well defined .howeve , r if we compute the intrinsic mean restricted to the base space , e.g. euclidean space , then the uniqueness no longer holds .as stated above , the space may not be cat(0 ) for small but may become more so for large .we can use this to our advantage : for sufficiently large , we expect a single maximum but multiple maxima for smaller , as shown in figure [ fig:1dim_beta ] .if we recall that the value of the function for a given is helpful in clustering , we can suggest a number of plots to show the local minima .however , we can say more .first , note that in one dimension , over ] and typically consider the range ], $ ] .thus , and .note that for , for is a convex of . 
for a , for , is a convex of iff since .this means that if for , is a convex of .if is cat( ) and has a diameter of at most , there is a comparison triangle on a sphere of radius such that its perimeter is at most and is a convex of for each because of the argument above after scaling by .\(3 ) we show an example of the probability measure with a three - point support on such that the diameter is larger than but can be arbitrarily close to and the uniqueness of the intrinsic -mean fails .take , , and with , as in figure [ fig : cat_hemisphere1 ] .let be the mid point of for .put the point masses at and at and , and assume that there is a unique intrinsic median . by the symmetry , must be on the arc , and if we change the ratio , moves continuously on .thus , we can set by tuning adequately .however , -dispersion from becomes , and -dispersion from becomes .this contradicts the assumption of being the unique -intrinsic mean . since we can set and as arbitrarily small positive numbers , , by ( 1 ) , ; thus , .if is a surface with a constant curvature , where is the inverse function of ^{-1 } & \mbox{~for~ } 0 \leq \theta \leq \pi/6,\\ \left[\log_2\frac{\pi-2\theta}{\arccos(\sin^2\theta)}\right]^{-1 } & \mbox{~for~ } \pi/6 < \theta \leq \pi/2 \end{array } \right.\ ] ] for and for .c1 : , , + where , as shown in figure [ fig : cat_hemisphere1 ] .this satisfies .+ c2 : , , and , as shown in figure [ fig : cat_hemisphere2 ] .+ we put point masses at and at and . for ,we consider c1 . as in the proof of theorem [ thm : diameter](3 ) , we can set .let and denote -dispersion from and , respectively .then , therefore , is equivalent to bose , p. devroye , l. , lffler , m. , snoeyink , j. and verma , v. , almost all delaunay triangulations have stretch factor greater than /2 , _ computational geometry _ , vol.44 , no.2 , pp . 121127 , 2011 .matula , d. w. and robert r. s. , properties of gabriel graphs relevant to geographic variation research and the clustering of points in the plane , _ geographical analysis _ ,vol.12 , no.3 pp . 205222 , 1980 . | a methodology is developed for data analysis based on empirically constructed geodesic metric spaces . for a probability distribution , the length along a path between two points can be defined as the amount of probability mass accumulated along the path . the geodesic , then , is the shortest such path and defines a geodesic metric . such metrics are transformed in a number of ways to produce parametrised families of geodesic metric spaces , empirical versions of which allow computation of intrinsic means and associated measures of dispersion . these reveal properties of the data , based on geometry , such as those that are difficult to see from the raw euclidean distances . examples include clustering and classification . for certain parameter ranges , the spaces become cat(0 ) spaces and the intrinsic means are unique . in one case , a minimal spanning tree of a graph based on the data becomes cat(0 ) . in another , a so - called metric cone " construction allows extension to cat(k ) spaces . it is shown how to empirically tune the parameters of the metrics , making it possible to apply them to a number of real cases . |
genetic regulatory networks had been modeled using discrete and continuous mathematical models .an important contribution to the simulation science is the theory of sequential dynamical systems ( sds ) . in these paper , the authors developed a new theory about the sequential aspect of the entities in a dynamical systems .in particular laubenbacher and pareigis created an elegant mathematical background of the sds , and with it solve several aspects of the theory and applications .probabilistic boolean networks ( pbn ) had been recently introduced , to model regulatory gene networks .the pbn are a generalization of the widely used boolean network model ( bn ) proposed by kauffman ( 1969 ) .while the pbn eliminate one of the main limitations of the bn model , namely its inherent determinism , they do not provide the framework for considering the sequential behavior of the genes in the network , behavior observed by the biologists . here , we introduce the probabilistic structure in sds , using for each vertices of the support graph a set of local functions , and more than one schedule in the sequence of the local function selected to form the update functions , obtaining a new concept : probabilistic sequential dynamical system ( pss ) . the notion of simulation of a pssis introduced in section [ ehom ] using the concept of homomorphism of pss ; and we prove that the category of * sds * is a full subcategory of the of the category * pss*. several examples of homomorphisms , subsystems and simulations are given .on the other hand , deterministic sequential dynamical systems have been studied for the last few years .the introduction of a probabilistic structure on sequential dynamical systems is an important and interesting problem .our approach take into account , a number of issues that have already been recognized and solved for sequential dynamical systems .this section is an introduction with the definitions and results of sequential dynamical system introduced by laubenbacher and pareigis . herewe use sds over a finite field . in this paper , we denote the finite field by , where is a prime number . a sequential dynamical system ( ) over a finite field consists of 1 . a finite graph with vertices , and a set of edges .2 . a family of local functions , that is where depends only of those variables which are connected to in .3 . a permutation in the set of vertices , called an update schedule ( i.e. a bijective map .the global update function of the is .the function defines the dynamical behavior of the and determines a finite directed graph with vertex set and directed edges , for all , called the state space of .the definition of homomorphism between two uses the fact that the vertices of an and the states together with their evaluation map form a contravariant setup , so that morphisms between such structures should be defined contravariantly , i.e. by a pair of certain maps or by a pair , with the graph having vertices . herewe use a notation slightly different the one using in .let and be two .let be a digraph morphism , and be a family of maps in the category of * set*. the map is an adjoint map , because is defined as follows : consider the pairing and similarly the induced adjoint map is .then and induce the adjoint map defined as follows : 1 . then is an homomorphism of if for a set of orders associated to in the connected components of , the map holds the following condition and the commutative diagram . where and .if , then , and the commutative diagram is the following . 1 . 
for examples and properties see .it is clear that the above diagrams implies the following one 1 .the following definition give us the possibility to have several update functions acting in a sequential manner with assigned probabilities .all these , permit us to study the dynamic of these systems using markov chains and other probability tools .we will use the acronym pss ( or ) for plural as well as singular instances .[ pss ] a probabilistic sequential dynamical system ( pss ) over consists of : 1 . a finite graph with vertices ; 2 .a set of local functions for each vertex of .( i. e. a bijection map latexmath:[ ] , we conclude that * for all possible and in .so , for all real number there exists , such that , for all natural number , and for all possible .in fact , using notation of equation ( [ mt].2 ) , we have , and this implies .similarly , so , selecting such that , we obtain where is the maximum number of functions going from one state to another in the state space of the power of the functions .therefore for all possible and in , and the theorem holds if and are bijective functions , and the condition ( [ hompss].3 ) holds , but the probabilities are not equal , we will say that , and are -equivalent , and we write .so , , and are -equivalent if there exist , and , such that for all and , we have .consider defined as follows : if is the set of functions in selected in ( [ hompss].2 ) for the map then the new probability of is the set of functions in , together with the new probabilities defined above , form a new pss , that we will call image of .so , the graph , the update functions determine de local functions associate to each vertex of , similarly the permutations using by these functions , and finally the new probabilities assigned .we will say an injective monomorphism is a pss - homomorphism such that is surjective and the set of functions , for all are injective functions , and so , .therefore , we will say that a pss is sub probabilistic sequential system of if there exists an injective monomorphism from to .( [ defh].1 ) let be a pss .the pair of functions is the _ identity homomorphism _ , and it is an example of an isomorphism .( [ defh].2 ) an homomorphism of pss is an _ injective monomorphism _ if is surjective and is injective , for example see ( [ ehom].3 ) .similarly we will say that an homomorphism is a _ surjective epimorphism _ if is injective and is surjective , for a complete description of the properties of this class of monomorphism and epimorphism see section 7 in .we consider that the pss is simulated by if there exist a injective monomorphisms or a surjective epimorphism .in this section we give several examples of pss - homomorphism , and simulation . in the second examplewe show how the condition ( [ hompss].2 ) is verified under the supposition that a function is defined .so , we have two examples in ( [ ehom].2 ) , one with the natural inclusion , and the second with the only possibility of a surjective map . in the last example we have a complete description of two pss , where we have only one permutation and two or less functions for each vertices in the graph .in particular this homomorphism is an injective monomorphism , so is an example of simulation too . *( [ ehom].1 ) * for the pss , in the examples [ exam1 ] , and [ ma ] we now define the natural inclusion . it is clear that the inclusion satisfies the two first condition to be a homomorphism , and the third one is a simple consequence of the theorem [ c3 ] . in fact and . 
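Before the remaining examples, a toy computation may help to make the definition concrete: a PSS induces a Markov chain on its state space, the transition probability from one state to another being the total probability of the update functions mapping the first state to the second. In the sketch below the graph, the local rules, the two schedules and their probabilities are all invented for illustration; only the construction itself follows the definition given above.

import numpy as np
from itertools import product

NBRS = {0: (1,), 1: (0, 2), 2: (1,)}          # path graph on three vertices (illustrative)

def local(i, state):
    """Local function at vertex i over the two-element field: x_i -> x_i + sum of neighbours (mod 2)."""
    s = list(state)
    s[i] = (state[i] + sum(state[j] for j in NBRS[i])) % 2
    return tuple(s)

def update(schedule, state):
    """Global update function: apply the local functions sequentially in schedule order."""
    for i in schedule:
        state = local(i, state)
    return state

updates = [((0, 1, 2), 0.7), ((2, 1, 0), 0.3)]   # two schedules with assigned probabilities

states = list(product((0, 1), repeat=3))
index = {s: k for k, s in enumerate(states)}
P = np.zeros((len(states), len(states)))
for s in states:
    for schedule, prob in updates:
        P[index[s], index[update(schedule, s)]] += prob

p0 = np.zeros(len(states))
p0[index[(1, 0, 0)]] = 1.0                       # start from the state (1, 0, 0)
print("row sums (stochastic matrix):", P.sum(axis=1))
print("distribution after 20 steps :", np.round(p0 @ np.linalg.matrix_power(P, 20), 3))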
here is simulated by * ( [ ehom].2 ) * consider the two graphs below suppose that the functions associated to the vertices are the families , for and for .the permutations are , and , , so , , and .then , we have constructed two pss , each one with two permutations and only one function associated to each vertex in the graph ; denoted by : _ case ( a ) _ we assume that there exists a homomorphism from to , with the graph morphism is given by .suppose the functions are giving , and the adjoint function is defined too .if is an homomorphism , which satisfies the definition ( [ hompss ] ) , then the following diagrams commute : _ case ( b ) _ consider now the map , defined by , , , and . if there exists an homomorphism that satisfies ( [ hompss].2 , then * ( [ ehom].3 ) * we now construct a pss - homomorphism from to , with the property that is surjective and the functions are injective , that we call a injective monomorphism .the pss has a support graph with three vertices , and the pss has a support graph with four vertices the homomorphism , has the contravariant graph morphism , defined by the arrows of graphs , as follows , and .the family of functions , ; the adjoint function is the first condition in the definition [ hompss ] holds . the pss with data of functions and one permutation or schedule and probabilities , so ; where the two update functions are the pss is a pss , with the following data : the families of functions : ; , ; and . one schedule , the eight possible update functions , and its probabilities whose determine probabilities , are the following : we claim is a homomorphism .we will prove that the following diagrams commute . in fact , on the other hand , we verify the composition of functions as follows , + , + + similarly we check the condition for and .the third condition holds , because the initial , because .in fact : and to describe the connection between the state spaces , see figure 2 .in this section , we prove that the pss with the _ homomorphisms _ form a category , that we denote by * pss*. in theorem [ fsds ] , we prove that the category of sequential dynamical systems * sds * is a full subcategory of * pss*. the composite of two graph morphisms is obviously again a graph morphism .the composite is again a digraph morphism which satisfies the conditions ( [ hompss].1 ) , and ( [ hompss].2 ) .in fact , using the proposition and definition 2.7 in , we can check these condition and conclude that the third condition holds , too by theorem [ ma ] .so , is again a pss - homomorphism .let , and be two pss over the finite field . for all -homomorphisms and , then there exists a morphism such that the following diagram commutes let be a pss over the finite field .let be the set of all update functions that we can construct with the local functions and the permutation in .let us consider , the free abelian group generated by , that we denote by , then .we can notice that we are working over a finite field with characteristic the prime number , so for all .we will take the quotients of these groups by , that is , and these groups are finite , . denoting , for an abelian group ,we rewrite the above relation by and , there exists a covariant functor from the category * pss * to the category of small abelian groups with morphism of such , , defined as follows . 
1 .the object function is defined by .the arrow function which assigns to each homomorphism in the category * pss * an homomorphism of abelian groups which is defined in a natural way , because , where , and , then is a functor , in fact , the functor gives the possibility to work with pss using group theory , for example , because , and we assign probabilities to the set in some way , and we consider that all possible different assignations are -isomorphic . a complement of the pss is the pss , and all of the complement are -equivalent , so we can select a distribution of probabilities for the complement having in account particular applications .we use the definition of complement in order to define a decomposition of a pss in two sub pss , only looking the set of functions .one of the mean problem in modeling dynamical systems is the computational aspect of the number of functions and the calculation of steady states in the state space , in particular the reduction of number of functions is one of the most important problem to solve for determine which part of the network _state space _ could be simplify . | in this paper we introduce the idea of probability in the definition of sequential dynamical systems , thus obtaining a new concept , probabilistic sequential system . the introduction of a probabilistic structure on sequential dynamical systems is an important and interesting problem . the notion of homomorphism of our new model , is a natural extension of homomorphism of sequential dynamical systems introduced and developed by laubenbacher and paregeis in several papers . our model , give the possibility to describe the dynamic of the systems using markov chains and all the advantage of stochastic theory . the notion of simulation is introduced using the concept of homomorphisms , as usual . several examples of homomorphisms , subsystems and simulations are given . [ multiblock footnote omitted ] |
systems operating far from thermodynamic equilibrium are usually subjected to the action of noise which may profoundly influence their performance . over the past two decades the extensive theoretical and experimental studies in physics ,information theory and biology have documented various phenomena of noise - induced order , noise - facilitated kinetics or noise - improved signal transmission and detection .almost all research in this field assumes that the noise process involved can be characterized by finite variance . yet , in many situations the external non - thermal noise can be described by distribution of impulses following the heavy - tail stable law statistics of infinite variance .notably , infinite variance does not exclude finite spread or dispersion of the distribution around its modal value .in fact , for the family of -stable noises the interquantile distance can be used as a proper measure of the distribution width around the median .dynamical description of the system and its surroundings is typically carried out within a stochastic picture based on langevin equations .the basic equation of this type reads where is the deterministic ( and possibly time - dependent ) `` force '' acting on the system and stands for the `` noise '' contribution describing interaction between the system and its complex surrounding .if the noise can be considered as white and gaussian , the above equation gives rise to the classical langevin approach used in the analysis of brownian motion .the whiteness of the noise ( lack of temporal correlations ) corresponds to the existence of time - scale separation between the dynamics of a relevant variable of interest and the typical time scale of the noise .hence white noise can be considered as a standard stochastic process that describes in the simplest fashion the effects of `` fast '' surroundings .although in various phenomena , the noise can be indeed interpreted as white ( i.e. with stationary , independent increments ) , the assumption about its gaussianity can be easily violated .the examples range from the description of the dynamics in plasmas , diffusion in energy space , self - diffusion in micelle systems , exciton and charge transport in polymers under conformational motion and incoherent atomic radiation trapping to the spectral analysis of paleoclimatic or economic data , motion in optimal search strategies among randomly distributed target sites , fluorophore diffusion as studied in photo - bleaching experiments , interstellar scintillations , ratcheting devices and many others .the present work overviews properties of lvy flights in external potentials with a focus on astonishing aspects of noise - induced phenomena like resonant activation ( ra ) , stochastic resonance ( sr ) and dynamic hysteresis . in particular , the persistency of the sr occurrence is examined within a continuous and a two - state description of the generic system composed of a test particle moving in the double well - potential and subject to the action of deterministic , periodic perturbations and -stable lvy type noises . 
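A minimal numerical sketch of the Langevin dynamics above with alpha-stable driving may be useful. The Chambers-Mallows-Stuck formula below generates standard symmetric stable variates (skewness zero, alpha different from 1), and an Euler scheme advances the trajectory with the noise increment over a step dt scaled by dt**(1/alpha); the harmonic force, time step and noise parameters are illustrative choices only.

import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for standard symmetric alpha-stable variates
    (skewness zero, unit scale); valid for 0 < alpha <= 2, alpha != 1."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos(V - alpha * V) / W) ** ((1 - alpha) / alpha))

def euler_levy(force, x0, dt, n_steps, alpha, scale, rng):
    """Euler scheme for dx/dt = force(x, t) + alpha-stable white noise."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    xi = symmetric_stable(alpha, n_steps, rng)
    for n in range(n_steps):
        x[n + 1] = x[n] + force(x[n], n * dt) * dt + scale * dt ** (1 / alpha) * xi[n]
    return x

rng = np.random.default_rng(0)
traj = euler_levy(lambda x, t: -x, x0=0.0, dt=1e-3, n_steps=20_000,
                  alpha=1.2, scale=0.5, rng=rng)     # harmonic force, heavy-tailed driving
print("final position      :", round(float(traj[-1]), 3))
print("largest single jump :", round(float(np.max(np.abs(np.diff(traj)))), 3))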
in the same system the appearance of dynamic hysteresis is documented .moreover , an archetypal escape problem over a fluctuating barrier is analyzed revealing the ra phenomenon in the presence of non - equilibrium lvy type bath .the model system is described by the following ( overdamped ) langevin equation driven by a lvy stable , white noise to examine occurrence of stochastic resonance and dynamic hysteresis , the generic double - well potential with the periodic perturbation ( see left panel of fig .[ fig : model ] ) has been chosen with .with for ( the left panel ) and the generic linear potential slope dichotomously switching between two configurations characterized by differend heights ( right panel).,width=264 ] in turn , for inspection of resonant activation , the action of the potential has been approximated by a linear slope switching dichotomously between two distinct configurations ( see right panel of fig . [fig : model ] ) .the detection of ra has been carried for a relatively `` high '' barrier , which under the normal diffusion condition guarantees the proper separation of time scales ( the actual times of escape events and time of diffusive motion within the potential well ) of the process .following former studies on ra , the lower barrier height has been chosen to take one of two values or .a particle has been assumed to start its motion at the reflecting boundary and continued as long as the position fulfilled .it should be stressed that with generally non - gaussian white noise the knowledge of the boundary location alone can not specify in full the corresponding boundary conditions for reflection or absorption , respectively .the trajectories driven by non - gaussian white noise display irregular , discontinuous jumps . as a consequence , the location of the boundary itself is not hit by the majority of discontinuous trajectories .this implies that regimes beyond the location of the boundaries must be properly accounted for when setting up the boundary conditions .in particular , multiple recrossings of the boundary location from excursions beyond the specified state space have to be excluded .as it has been demonstrated elsewhere , incorporation of lvy flights into kinetic description requires use of nonlocal boundary conditions which , for the case of absorbing boundary at , say , calls for extension of the absorbing regime to the semiline beyond that point , . the barrier fluctuations causing alternating switching between the high ( ) and low ( ) barrier configurations have been approximated by the markovian dichotomous noise with the exponential autocorrelation function [ \eta(t')-\langle \eta(t')\rangle ] \rangle = \frac{1}{4}(h_+-h_-)^2\exp(-2\gamma |t - t'|) ] denotes the stability index , yielding the asymptotic long tail power law for the -distribution , which for is of the type .the parameter ( ) characterizes the scale whereas ( ] .the particle is moving in the modulated double - well potential ( [ eq : potential ] ) with .,width=264 ] the overall behavior of the dynamic hysteresis loops under the -stable noises could be deduced from the inspection of the trajectories of the process presented in fig .[ fig : strajectory ] . for the symmetric stable noise, i.e. , the process spends , on average , the same amount of time in the left / right states and consequently , occupations of both states are equal . 
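The dichotomous barrier noise introduced above is straightforward to simulate and to check against its exponential autocorrelation. In the sketch below the switching is approximated by a flip with probability gamma*dt per time step, and the empirical autocovariance is compared with the quoted formula (1/4)(H_+ - H_-)^2 exp(-2*gamma*tau); the barrier heights, the switching rate and the run length are illustrative values.

import numpy as np

rng = np.random.default_rng(3)
H_plus, H_minus, gamma = 8.0, 0.0, 0.5         # illustrative heights and switching rate
dt, n_steps = 0.01, 1_000_000

flips = rng.random(n_steps) < gamma * dt       # a switch occurs in a step with probability gamma*dt
state = np.where(np.cumsum(flips) % 2 == 0, H_plus, H_minus)

centered = state - state.mean()
for lag in [0, 50, 100, 200]:
    tau = lag * dt
    emp = np.mean(centered[:n_steps - lag] * centered[lag:])
    exact = 0.25 * (H_plus - H_minus) ** 2 * np.exp(-2.0 * gamma * tau)
    print(f"tau = {tau:4.1f}   empirical C(tau) = {emp:7.3f}   formula = {exact:7.3f}")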
for non - zero and increasing a larger asymmetry in the distribution of residence times in the right / left states is registered .the asymmetry of the occupation probability in either one of the states is reflected in the shape of the dynamic hysteresis loop , which with an increasing becomes distorted into the direction determined by the sign of the skewness parameter .the probability of finding the process in the right / left state depends strongly on the stability index , cf .[ fig : sh_alpha ] . for symmetric noises ( )the area of the hysteresis loop decreases with decreasing stability index , see fig .[ fig : sh_alpha ] .this observation is a direct consequence of a heavy - tailed nature of the noise term in eq .( [ eq : generallangevin ] ) . with decreasing , larger excursions of the particle are possible andthese occasional jumps of trajectories may be of the order of , or even larger than the distance separating the two minima of the potential .a combined interplay of both noise parameters may result in a permanent locking of the process in one of its states , see right panel of fig .[ fig : sh_alpha ] . in order to quantify the sr phenomenon we have used the standard measures of the signal - to - noise ratio snr and the spectral power amplification .the snr is defined as a ratio of a spectral content of the signal in the forced system to the spectral content of the noise : /s_n(\omega).\ ] ] here represents the power carried by the signal , while estimates the background noise level . in turn , the spectral power amplification ( ) is given by the ratio of the power of the driven oscillations to that of the driving signal at the driving frequency .closer examination of these quantifiers reveals that and snr behave in a typical way both in the continuous model , cf .[ fig : genericpower ] , and in its two - state analogue ( results not shown ) .the spectral amplification has a characteristic bell - shape form indicating detection of stochastic resonance within the interval of suitably chosen noise intensity .for example , if the stability index is set to and trajectories are simulated with extremely weak noise intensities , the periodic signal is not well separated from the noisy background and consequently snr stays negative .however , snr becomes positive with increasing noise intensity what indicates emerging separation of the signal from the noisy background .[ fig : genericpower ] results for and snr analysis ( in a continuous sr model ) with various stability index are presented .note , that for a given the power spectra for are the same .therefore , stochastic resonance quantifiers derived from the power spectra are equivalent for with the same value of . with ( the left panel ) and ( the right panel ) .parameters of the simulation like in fig .[ fig : sh_a19].,width=302 ] -stable process for the two - state and the continuous model constructed by use of eq .( [ eq : langevin ] ) .the time step of the integration , the scale parameter .the frequency of the cycling voltage .noise parameters as indicated in figures.,width=264 ] switching from the continuous model to the two - state approximation ( cf .fig.5 ) does not change the qualitative behavior of the sr quantifiers . 
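How the two quantifiers are read off in practice can be illustrated on a synthetic record standing in for a simulated trajectory: a periodic response buried in broadband noise. The decibel convention for the signal-to-noise ratio, the median estimate of the local noise floor and the stand-in amplitudes are choices made here for illustration only and are not claimed to reproduce the figures.

import numpy as np

rng = np.random.default_rng(5)
dt, n = 0.05, 2 ** 16
t = dt * np.arange(n)
f_drive = 200 / (n * dt)                    # driving frequency placed exactly on a Fourier bin
Omega = 2 * np.pi * f_drive
A_drive, A_out = 0.2, 0.8                   # stand-in amplitudes of drive and response

x = A_out * np.cos(Omega * t + 0.3) + rng.normal(scale=1.0, size=n)   # stand-in output record

X = np.fft.rfft(x)
freqs = 2 * np.pi * np.fft.rfftfreq(n, d=dt)
k = np.argmin(np.abs(freqs - Omega))        # bin containing the driving frequency

S = np.abs(X) ** 2 / n                      # periodogram, up to an overall constant
background = np.median(S[k - 30:k + 31])    # local noise floor around the spectral line
snr_db = 10 * np.log10(S[k] / background)   # signal-to-noise ratio at Omega, in decibels

amp_out = 2 * np.abs(X[k]) / n              # amplitude of the driven oscillation at Omega
eta = (amp_out / A_drive) ** 2              # spectral power amplification

print(f"SNR at the driving frequency : {snr_db:5.1f} dB")
print(f"eta at the driving frequency : {eta:5.2f}  (input ratio (A_out/A_drive)^2 = {(A_out / A_drive) ** 2:.2f})")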
with large ( ) ] however , the slope of the decaying part of the signal - to - noise ratio is more flat for the two - state description than for the continuous model ( results not shown ) .our numerical analysis implies that is more sensitive to the variation of noise parameters than snr , see fig .[ fig : genericpower ] . at decreasing values of the stability index ,the maximum of drops and shifts towards higher values of the noise intensity .obviously , the decrease of weakens the stochastic resonance and reduces system performance .the diminishment of spectral amplification for indicates that the input signal is worse reproduced in the recorded output .this behavior is easier detectable for symmetric noises ( cf .left panel of fig .[ fig : genericpower ] ) than for asymmetric ones ( see the right panel ) and can be readily explained by the analysis of exemplary trajectories , see fig .[ fig : strajectory ] . sharp spikes clearly visible in the right corner panel of fig .[ fig : strajectory ] are due to the heavy - tailed nature of the lvy stable distribution and they are more pronounced the smaller the index becomes .their presence indicates that for a sufficiently small the trajectory becomes discontinuous and , on average , switches between left and right wells of the potential are realized by sudden long - jump escape events fairly independent of the periodic driving .although the snr is less sensitive to variations in noise parameters ( cf . fig .[ fig : genericpower ] ) , the shape of this function flattens for decreasing values of thus hampering detection of the resonant value of the noise optimal intensity for which the sr phenomenon is most likely perceived .( the left panel ) , ( the right panel ) and various .the simulation parameters like in fig .[ fig : strajectory ] . lines are drawn to guide the eye.,width=302 ] as already discussed in the preceding sections , the stochastic kinetics driven by additive non - gaussian stable white noises is very different from the gaussian case . for , a test particle moving in the linear potential can change its position via extremely long ,jump like excursions .this in turn requires the use of nonlocal boundary conditions in evaluation of the mean first passage time ( mfpt ) . in this paragraphthe above issue is taken care of when generating first passage times ( fpts ) by monte carlo simulations .simulated trajectories are representative for a motion of a test particle over the interval $ ] ( see right panel of fig .[ fig : model ] ) influenced by independent dichotomous switching of the potential slope and subjected to additive white lvy noise .a particle starts its motion at where the reflecting boundary is located .the absorbing boundary is located at meaning , that the whole semi - axis is assumed absorbing , yielding zero pdf for all . from the ensembles of collected first passage timeswe have evaluated the mean values of the distributions and our findings for mfpt are presented in fig .[ fig : skewedm88 ] .integration of eq .( [ eq : langevin ] ) was performed for a series of descending time steps of integration to ensure that the results are self consistent .the left panel of fig .[ fig : skewedm88 ] displays behavior of derived mfpts as a function of the stability index and the rate of the barrier modulation . in the right panel sample cross - sections of the surfaces mfpt( )are drawn . 
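The escape statistics described above can be estimated with a direct Monte Carlo scheme along the following lines. The sketch combines symmetric alpha-stable increments (the skewed case is omitted), dichotomous switching of a linear slope of height H(t) on the interval [0, L], a crude reflection at x = 0 implemented by folding the position back (proper reflecting conditions for jump processes are more subtle, as noted earlier), and absorption of everything that lands at or beyond L, which realises the nonlocal absorbing condition. All numerical values, including the number of trajectories, are illustrative and far smaller than those behind the figures.

import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variates (unit scale, alpha != 1)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos(V - alpha * V) / W) ** ((1 - alpha) / alpha))

def first_passage_time(gamma, alpha, scale, H_plus, H_minus, L, dt, rng, block=4096):
    """One escape event over the switching linear slope V(x) = H(t) * x / L on [0, L]."""
    x, H, t = 0.0, H_plus, 0.0
    while True:
        xi = symmetric_stable(alpha, block, rng)
        flips = rng.random(block) < gamma * dt          # barrier flips at rate gamma
        for k in range(block):
            if flips[k]:
                H = H_minus if H == H_plus else H_plus
            x += -(H / L) * dt + scale * dt ** (1 / alpha) * xi[k]
            x = abs(x)                                  # crude reflection at x = 0
            t += dt
            if x >= L:                                  # nonlocal absorption: anything at or beyond L
                return t

def mfpt(gamma, n_traj=200, seed=0, **kw):
    rng = np.random.default_rng(seed)
    return np.mean([first_passage_time(gamma, rng=rng, **kw) for _ in range(n_traj)])

params = dict(alpha=1.5, scale=1.0, H_plus=8.0, H_minus=-8.0, L=1.0, dt=1e-3)
for g in [0.1, 1.0, 10.0, 100.0]:
    print(f"gamma = {g:6.1f}   estimated MFPT = {mfpt(g, **params):.3f}")
# A minimum of the MFPT at intermediate switching rates is the signature of resonant activation.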
for , or adequately ,the procedure of simulating skewed ( ) stable random variables becomes unstable .this can be well explained by examining the form of the additive noise - term characteristic function - see eqs .( [ eq : charakt ] ) and ( [ eq : charakter ] ) .the exponential functions are no longer continuous functions of the parameters and exhibit discontinuities when .therefore , for the clarity of presentation , these parameter sets have been excluded from consideration .for with ( the left panel ) and sample cross - sections for various : ` ' : ; ` ' : ; ` ' : ; ` ' : and ` ' : ( the right panel ) .the results were calculated by direct integration of eq .( [ eq : langevin ] ) with the time step and averaged over realizations .horizontal lines represent asymptotic values of and .they have been evaluated by use of the monte carlo method with and averaged over realizations .error bars represent standard deviation of the mean.,width=302 ] the depicted results indicate appearance of the ra phenomenon which is best visible when the potential barrier is switching between two barrier configurations characterized by .however , the dependence of ra on noise parameters is highly nontrivial .the overall tendency in kinetics is similar to cases described in former paragraphs : heavier tails ( ) in distribution of noise increments result in stronger discontinuities of trajectories causing the ra effect to become inaudible and fading gradually .notably , for totally skewed additive noise which acts in favor of the motion to the right , the ra seems to disappear for .it reappears again for smaller values of , when the driving lvy white noise becomes a one sided lvy process with strictly positive increments which tend to push trajectories to the right . to further examine the character and distribution of escape events from , we have analyzed survival probabilities constructed from generated trajectories at fixed values of frequencies . for better comparison with the gaussian ra scenario , the summary of results is displayed in fig .[ fig : survm88 ] with sets of data relating to and . the left and right panels of fig .[ fig : survm88 ] present behavior of mfpt as the function of the barrier modulation parameter , the survival probability and exemplary cross - sections of the survival probability surface .the insets depict behavior of the survival probability at short time scales ( small ) .( left panel ) and ( right panel ) for .mfpts curves ( upper panel ) , survival probability surfaces ( middle panel ) and sample cross - sections of surface ( lower panel ) for small ( ) , resonant ( ) and large ( ) -values are depicted .simulations parameters as in fig .[ fig : skewedm88 ] .note the log scale on z - axis ( middle panel ) and y - axis ( lower panel).,width=302 ] at low rates of the barrier switching process ( small ) , two distinct time scales of the barrier crossing events can be observed .the fast time scale corresponds to passages over the barrier in its lower state , while the large time scale is pertinent to the slower process , i.e. the passages over the barrier in its higher energetic state .this effect is well pronounced after the ra phenomenon sets up .the presence of the two time scales for small and only one time scale for large explains the asymptotic behavior of mfpts .namely , for low values of the switching rate , mfpt is an average value of mfpts over both configuration of the barrier . 
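Before moving on, the generator behind the stable increments used in these simulations can itself be sketched with the Chambers-Mallows-Stuck (Weron) formula; the alpha = 1, beta != 0 branch is excluded here, mirroring the discontinuity of the characteristic function discussed above. The closing example, with alpha < 1 and beta = 1, illustrates the one-sided regime with strictly positive increments mentioned in connection with the reappearance of resonant activation.

```python
import numpy as np

def stable_rvs(alpha, beta, size, rng=None):
    """Chambers-Mallows-Stuck / Weron generator for standard alpha-stable variates
    (unit scale, zero location).  The alpha == 1, beta != 0 branch is excluded,
    mirroring the discontinuity of the characteristic function discussed in the text."""
    rng = np.random.default_rng() if rng is None else rng
    if np.isclose(alpha, 1.0) and not np.isclose(beta, 0.0):
        raise ValueError("alpha = 1 with beta != 0 is excluded here (see text)")
    V = rng.uniform(-np.pi / 2.0, np.pi / 2.0, size)   # uniform phase
    W = rng.exponential(1.0, size)                     # unit-mean exponential
    if np.isclose(alpha, 1.0):
        return np.tan(V)                               # Cauchy case (beta = 0)
    tb = beta * np.tan(np.pi * alpha / 2.0)
    B = np.arctan(tb) / alpha
    S = (1.0 + tb**2) ** (1.0 / (2.0 * alpha))
    return (S * np.sin(alpha * (V + B)) / np.cos(V) ** (1.0 / alpha)
            * (np.cos(V - alpha * (V + B)) / W) ** ((1.0 - alpha) / alpha))

# totally skewed, heavy-tailed case: one-sided driving with strictly positive increments
samples = stable_rvs(alpha=0.9, beta=1.0, size=100_000, rng=np.random.default_rng(0))
print("min = %.4f, median = %.3f, 99%% quantile = %.1f"
      % (samples.min(), np.median(samples), np.quantile(samples, 0.99)))
```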
with increasing frequency the distinguishable time scales coalesce and disappearthis is due to the fact that the barrier changes its height multiple times during the particle s motion so that resulting mfpt describes kinetics over the average potential barrier .we have examined the influence of various types of stochastic -stable drivings on dynamic properties of a generic two - state model system .although our primary interest was to understand the effects of lvy - type drivings with the stability index , the numerical analysis as applied in these studies remains valid for any set of parameters characterizing -stable excitations .the response to external or / and parametric perturbations has been examined by analyzing qualitative changes in system s dynamic behavior as expressed in the onset of resonant activation , stochastic resonance and dynamic hysteresis . due to the inherent symmetry of the potential ,the sr quantifiers constructed at a given value of the stability index have been the same for .on the other hand , the asymmetry of the driving noise has been shown to influence strongly the population of states and therefore has affected mostly the appearance and performance of the dynamic hysteresis .the system efficiency has been described by standard sr measures , i.e. signal - to - noise ratio ( snr ) and spectral power amplification ( ) .those quantifiers have been shown to behave in a typical way .in other words , by tuning the noise intensity , the maximum in the snr( ) and could be detected , thus proving that stochastic resonance is a robust phenomenon which may be observed also in systems subjected to the action of impulsive jump - noise stochastic processes .decrease in stability index results in larger jump - like excursions of the process and consequently causes weakening of sr .out of two sr measures , the spectral power amplification has turned out to be more sensitive to variations of than snr . a more pronounced drop in system efficiencyhas been observed for symmetric noises with .the nonequilibrated , non - thermal lvy white noise affects also a paradigm scenario of escape kinetics . by numerically implementing the set up boundary conditions for the problem, we have investigated the statistics of escape times over the dichotomously fluctuating barrier .the manifestation of the ra phenomenon has been analyzed within a certain frequency regime pointing that by a continuous readjustment of the external noise paramaters the resonant activation can be either suppressed or re - induced .lvy white noise with extends a standard brownian noise to a vast family of impulsive jump - like stochastic processes .our studies document that dynamical systems driven by such sources can also benefit and display a noise - enhanced order .the research has been supported by the marie curie tok cocos grant ( 6th eu framework program under contract no .mtkd - ct-2004 - 517186 ) and european science foundation ( esf ) via ` stochastic dynamics : fundamentals and applications ' ( stochdyn ) program .additionally , bd acknowledges the support from the foundation for polish science . | a standard approach to analysis of noise - induced effects in stochastic dynamics assumes a gaussian character of the noise term describing interaction of the analyzed system with its complex surroundings . 
an additional assumption about the existence of timescale separation between the dynamics of the measured observable and the typical timescale of the noise allows external fluctuations to be modeled as temporally uncorrelated and therefore white . however , in many natural phenomena the assumptions concerning the abovementioned properties of `` gaussianity '' and `` whiteness '' of the noise can be violated . in this context , in contrast to the spatiotemporal coupling characterizing general forms of non - markovian or semi - markovian lvy walks , so called lvy flights correspond to the class of markov processes which still can be interpreted as white , but distributed according to a more general , infinitely divisible , stable and non - gaussian law . lvy noise - driven non - equilibrium systems are known to manifest interesting physical properties and have been addressed in various scenarios of physical transport exhibiting a superdiffusive behavior . here we present a brief overview of our recent investigations aimed to understand features of stochastic dynamics under the influence of lvy white noise perturbations . we find that the archetypal phenomena of noise - induced ordering are robust and can be detected also in systems driven by non - gaussian , heavy - tailed fluctuations with infinite variance . |
ebola virus is currently affecting several african countries , mainly guinea , sierra leone , and liberia .ebola was first discovered in 1976 in the democratic republic of the congo near the ebola river , where the disease takes its name . since then, ebola outbreaks have appeared sporadically in africa .the virus , previously known as ebola haemorrhagic fever , is the deadliest pathogens for humans .the early signs and symptoms of the virus include a sudden onset of fever and intense weakness and headache . over time , symptoms become increasingly severe and include diarrhoea , raised rash , internal and external bleed in ( from nose , mouth , eyes and anus ) . as the virus spreads through the body , it damages the immune system and organs .ebola virus is transmitted to an initial human by contact with an infected animal s body fluid . on the other hand, human - to - human transmission can take place with direct contact ( through broken skin or mucous membranes in , for example , the eyes , nose , or mouth ) with blood or body fluids of a person who is sick with or has died from ebola .it is also transmitted indirectly via exposure to objects or environment contaminated with infected secretions .mathematical models are a powerful tool for investigating human infectious diseases , such as ebola , contributing to the understanding of the dynamics of the disease , providing useful predictions about the potential transmission of the disease and the effectiveness of possible control measures , which can provide valuable information for public health policy makers .epidemic models date back to the early twentieth century , to the 1927 work by kermack and mckendrick , whose model was used for modelling the plague and cholera epidemics .in fact , such epidemic models have provided the foundation for the best vaccination practices for influenza and small pox .currently , the simplest and most commonly implemented model in epidemiology is the sir model .the sir model consists of three compartments : susceptible individuals , infectious individuals , and recovered individuals . when analysing a new outbreak , the researchers usually start with the sir model to fit the available outbreak data and obtaining estimates for the parameters of the model .this has been the case for the modelling of the spreading mechanism of the ebola virus currently affecting several african countries . for more complex mathematical models , with more than three state variables ,see . in our previous works , we used parameters identified from the recent data of the world health organization ( who ) to describe the behaviour of the virus . herewe focus on the mathematical analysis of the early detection of the ebola virus . in section [ sec2 ], we briefly recall the analysis study of the sir model that we presented in our previous study of the description of the behaviour of ebola virus . 
in section [ sec3 ] ,we add to the basic model of section [ sec2 ] the demographic effects , in order to provide a description of the virus propagation closer to the reality .this gives answer to an open question posed in remark 1 of and at the end of .our aim in studying the model with vital dynamics is to provide useful predictions about the potential transmission of the virus .we also consider an induced death rate for the infected individuals .after numerical simulations , in section [ sec4 ] we control the propagation of the virus in order to minimize the number of infected individuals and the cost of vaccination .we end with section [ sec : conc ] of conclusions .in this section , we present and briefly discuss the properties of the system of equations corresponding to the basic sir ( susceptible infectious recovery ) model , which has recently been used in to describe the early detection of ebola virus in west africa . in the formulation of the basic sir model ,we assume that the population size is constant and any person who has completely recovered from the virus acquired permanent immunity .moreover , we assume that the virus has a negligibly short incubation period , so that an individual who contracts the virus becomes infective immediately afterwards .these assumptions enables us to divide the host population into three categories , * for susceptible : denotes individuals who are susceptible to catch the virus , and so might become infectious if exposed ; * for infectious : denotes infectious individuals who are able to spread the virus through contact with the susceptible category ; * for recovered : denotes individuals who have immunity to the infection , and consequently do not affect the transmission dynamics in any way when in contact with other individuals .the model is described mathematically by the following system of non - linear differential equations : \dfrac{di(t)}{dt } = \beta s(t)i(t)- \mu i(t),\\[0.2 cm ] \dfrac{dr(t)}{dt } = \mu i(t ) , \end{cases}\ ] ] where is the infection rate and is the recovered rate .the initial conditions are given by we can see that = 0 ] .then , the mathematical model with control is given by the following system of non - linear differential equations : \dfrac{di(t)}{dt } = \beta s(t)i(t ) - \mu i(t ) - ( \gamma + \gamma_i ) i(t),\\[0.2 cm ] \dfrac{dr(t)}{dt } = \mu i(t ) -\gamma r(t ) +u(t ) s(t ) . \end{cases}\ ] ]the goal of the strategy is to reduce the infected individuals and the cost of vaccination .precisely , the optimal control problem consists of minimizing the objective functional dt,\ ] ] where is the control variable , ] , days , and .[cntrlbd_u ] ] , in case of optimal control _ versus _ without control .[cntrlbd_n_induced ] ] in conclusion , one can say that figure [ fig6 ] shows the effectiveness of optimal vaccination in controlling ebola .figure [ cntrlbd_u ] gives a representation of the optimal control ; while figure [ cntrlbd_n_induced ] shows the evolution of the number of total population over time .we see that the total number of population is bigger in case of vaccination ( less people dying ) .mathematical modelling of the detection of a virulent virus such ebola is a powerful tool to understand the dynamics of the propagation of the virus in a population .the main aim is to provide useful predictions about the potential transmission of the virus .the important step after modelling is to study the properties of the system of equations that describes the propagation of the virus . 
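As a concrete complement to the simulations discussed above, the sketch below integrates the SIR system with vital dynamics, disease-induced mortality and a constant vaccination rate u forward in time (the optimal control problem itself is not solved here). The parameter values, and the recruitment term gamma*N in the susceptible equation, which is not reproduced above, are illustrative assumptions rather than the WHO-fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameter values, not the fitted WHO values used in the paper
beta, mu = 0.2, 0.1          # infection and recovery rates
gamma, gamma_i = 0.01, 0.05  # natural and disease-induced death rates

def rhs(t, y, u):
    """SIR with vital dynamics, induced mortality and a constant vaccination rate u."""
    S, I, R = y
    N = S + I + R
    dS = gamma * N - beta * S * I - gamma * S - u * S   # recruitment assumed to balance natural deaths
    dI = beta * S * I - mu * I - (gamma + gamma_i) * I
    dR = mu * I - gamma * R + u * S
    return [dS, dI, dR]

y0 = [0.95, 0.05, 0.0]                                  # normalised initial population
t_eval = np.linspace(0.0, 100.0, 400)
for u in (0.0, 0.05):
    sol = solve_ivp(rhs, (0.0, 100.0), y0, args=(u,), t_eval=t_eval, rtol=1e-8)
    S, I, R = sol.y
    print(f"u = {u:.2f}: peak infected = {I.max():.3f}, "
          f"final population = {S[-1] + I[-1] + R[-1]:.3f}")
```

Comparing u = 0 with a small constant u already shows the two effects reported above: a lower infection peak and a larger surviving population.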
in this work , we analysed a sir model with vital dynamics for the early detection of ebola virus in west africa , by adding demographic effects and an induced death rate , in order to discuss when the model makes sense mathematically and to study the information provided by the model .we simulated the model in the case of a basic reproduction number , which describes the current situation of ebola virus in guinea .we studied the equilibria .the system of equations of the model was solved numerically and the numerical simulations confirmed the theoretical analysis of the equilibria for the model .finally , we controlled the propagation of the virus by minimizing the number of infected individuals and the cost of vaccination and showing the importance of optimal control .this research was partially supported by the portuguese foundation for science and technology ( fct ) through project uid / mat/04106/2013 of the center for research and development in mathematics and applications ( cidma ) and within project toccata , ref .ptdc / eei - aut/2933/2014 .d. ariens , m. diehl , h. j. ferreau , b. houska , f. logist , r. quirynen and m. vukov , _ acado toolkit user s manual _ , optimization in engineering center ( optec ) and department of electrical engineering , ku leuven , 2014 .m. barry , f. a. traor , f. b. sako , d. o. kpamy , e. i. bah , m. poncin , s. keita , m. cisse and a. tour , _ ebola outbreak in conakry , guinea : epidemiological , clinical , and outcome features _ ,mdecine et maladies infectieuses * 44 * ( 2014 ) , no . 1112 , 491494 .l. borio et al .[ working group on civilian biodefense ; corporate author ] , _ hemorrhagic fever viruses as biological weapons : medical and public health management _ , journal of the american medical association * 287 * ( 2002 ) , no .18 , 23912405 .s. f. dowell , r. mukunu , t. g. ksiazek , a. s. khan , p. e. rollin and c. j. peters , _ transmission of ebola hemorrhagic fever : a study of risk factors in family members _ , kikwit , democratic republic of the congo , 1995 .commission de lutte contre les epidmies kikwit .j. infect .dis . * 179 * ( 1999 ) , suppl . 1 , s87s91 . h. r. joshi , s. lenhart , m. y. li and l.wang , _ optimal control methods applied to disease models _ , in _ mathematical studies on human disease dynamics _ , 187207 , contemp . math . , 410 , amer . math .soc . , providence , ri , 2006 .j. a. lewnard , m. l. ndeffo mbah , j. a. alfaro - murillo , f. l. altice , l. bawo , t. g. nyenswah and a. p. galvani , _ dynamics and control of ebola virus transmission in montserrado , liberia : a mathematical modelling analysis _ , the lancet infectious diseases * 14 * ( 2014 ) , no . 12 , 11891195 .a. rachah and d. f. m. torres , _ mathematical modelling , simulation and optimal control of the 2014 ebola outbreak in west africa _ , discrete dyn .* 2015 * ( 2015 ) , art .i d 842792 , 9 pp . | we present a mathematical analysis of the early detection of ebola virus . the propagation of the virus is analysed by using a susceptible , infected , recovered ( sir ) model . in order to provide useful predictions about the potential transmission of the virus , we analyse and simulate the sir model with vital dynamics , by adding demographic effects and an induced death rate . then , we compute the equilibria of the model . the numerical simulations confirm the theoretical analysis . our study describes the 2015 detection of ebola virus in guinea , the parameters of the model being identified from the world health organization data . 
finally , we consider an optimal control problem of the propagation of the ebola virus , minimizing the number of infected individuals while taking into account the cost of vaccination . |
the crime and violence street gangs introduce into neighborhoods is a growing epidemic in cities around the world . today , over 1.23 million people in the united states are members of a _ street gang _ , which is a coalition of peers , united by mutual interests , with identifiable leadership and internal organization , who act collectively to conduct illegal activity and to control a territory , facility , or enterprise .they promote criminal activities such as drug trafficking , assault , robbery , and threatening or intimidating a neighborhood .moreover , data from the centers for disease control in the united states suggests that the victims of at least 1.3% of all gang - related homicides are merely innocent bystanders who live in gang occupied neighborhoods .street gang members have established online presences coinciding with their physical occupation of neighborhoods .the national gang threat assessment report confirms that at least tens of thousands of gang members are using social networking websites such as twitter and video sharing websites such as youtube in their daily life .they are very active online ; the 2007 national assessment center s survey of gang members found that 25% of individuals in gangs use the internet for at least 4 hours a week .gang members typically use social networking sites and social media to develop online respect for their street gang and to post intimidating , threatening images or videos .this `` cyber- '' or `` internet banging '' behavior is precipitated by the fact that an increasing number of young members of the society are joining gangs , and these young members have become enamored with technology and with the notion of sharing information quickly and publicly through social media .stronger police surveillance in the physical spaces where gangs congregate further encourages gang members to seek out virtual spaces such as social media to express their affiliation , to sell drugs , and to celebrate their illegal activities .gang members are able to post publicly on twitter without fear of consequences because there are few tools law enforcement can use to surveil this medium .police departments across the united states instead rely on manual processes to search social media for gang member profiles and to study their posts .for example , the new york city police department employs over 300 detectives to combat teen violence triggered by insults , dares , and threats exchanged on social media , and the toronto police department teaches officers about the use of social media in investigations .officer training is broadly limited to understanding policies on using twitter in investigations and best practices for data storage .the safety and security of city neighborhoods can thus be improved if law enforcement were equipped with intelligent tools to study social media for gang activity .the need for better tools for law enforcement can not be underscored enough .recent news reports have shown that many incidents involving gangs start on twitter , escalate over time , and lead to an offline event that could have been prevented by an early warning .for example , the media reported on a possible connection between the death of a teenage rapper from illinois and the final set of tweets he posted .one of his last tweets linked to a video of him shouting vulgar words at a rival gang member who , in return , replied _`` i m a kill you '' _ on social media . 
in a following tweet ,the teenage rapper posted _`` i m on 069 '' _ , revealing his location , and was shot dead soon after that post .subsequent investigation revealed that the rivalry leading to his death began and was carried out entirely on social media .other reporting has revealed how innocent bystanders have also become targets in online fights , leaving everyone in a neighborhood at risk .this paper investigates whether gang member profiles can be identified automatically on twitter , which can enable better surveillance of gang members on social media . classifyingtwitter profiles into particular types of users has been done in other contexts , but gang member profiles pose unique challenges .for example , many twitter profile classifiers search for contextual clues in tweets and profile descriptions , but gang member profiles use a rapidly changing lexicon of keywords and phrases that often have only a local , geographic context .this is illustrated in figure [ fig : twitterprofiles ] , which shows the twitter profile descriptions of two verified deceased gang members .the profile of provides evidence that he belongs to a rival gang of the black disciples by ` # bdk ` , a hashtag that is only known to those involved with gang culture in chicago .s profile mentions ` # pbg ` and our investigations revealed that this hashtag is newly founded and stands for the pooh bear gang , a gang that was formerly known as the insane cutthroat gangsters . given the very local , rapidly changing lexicon of gang members on social media , building a database of keywords , phrases , and other identifiers to find gang members nationally is not feasible .instead , this study proposes heterogeneous sets of features derived not only from profile and tweet text but also from the emoji usage , profile images , and links to youtube videos reflecting their music culture .a large set of gang member profiles , obtained through a careful data collection process , is compared against non - gang member profiles to find contrasting features .experimental results show that using these sets of features , we can build a classifier that has a low false positive rate and a promising -score of 0.7755 .this paper is organized as follows .section [ sec : rr ] discusses the related literature and positions how this work differs from other related works .section [ sec : dc ] discusses the data collection , manual feature selection and our approach to identify gang member profiles .section [ sec : eval ] gives a detailed explanation for evaluation of the proposed method and the results in detail .section [ sec : con ] concludes the work reported while discussing the future work planned .gang violence is a well studied social science topic dating back to 1927 .however , the notions of `` cyber- '' or `` internet banging '' , which is defined as _`` the phenomenon of gang affiliates using social media sites to trade insults or make violent threats that lead to homicide or victimization '' _ , was only recently introduced ._ introduced the concept of `` internet banging '' and studied how social media is now being used as a tool for gang self - promotion and as a way for gang members to gain and maintain street credibility .they also discussed the relationship between gang - related crime and hip - hop culture , giving examples on how hip - hop music shared on social media websites targeted at harassing rival gang members often ended up in real - world collisions among those gangs .et al . 
_ and patton _ et al ._ have also reported that street gangs perform internet banging with social media posts of videos depicting their illegal behaviors , threats to rival gangs , and firearms . the ability to take action on these discoveriesis limited by the tools available to discover gang members on social media and to analyze the content they post .recent attempts to improve our abilities include a proposed architecture for a surveillance system that can learn the structure , function , and operation of gangs through what they post on social media .however , the architecture requires a set of gang member profiles for input , thus assuming that they have already been discovered .et al . _ devised a method to automatically collect tweets from a group of gang members operating in detroit , mi . however , their approach required the profile names of the gang members to be known beforehand , and data collection was localized to a single city in the country .this work builds upon existing methods to automatically discover gang member profiles on twitter .this type of user profile classification problem has been explored in a diverse set of applications such as political affiliation , ethnicity , gender , predicting brand loyalty , and user occupations .however , these approaches may utilize an abundance of positive examples in their training data , and only rely on a single feature type ( typically , tweet text ) .whereas most profile classifiers focus on a single _ type _ of feature ( e.g. profile text ) , we consider the use of a variety of feature types , including emoji , youtube links , and photo features .this section discusses the methodology we followed to study and classify the twitter profiles of gang members automatically .it includes a semi - automatic data collection process to discover a large set of verifiable gang member profiles , an evaluation of the tweets of gang and non - gang member posts to identify promising features , and the deployment of multiple supervised learning algorithms to perform the classification ..number of gang member profiles captured . 
[ cols="<,^",options="header " , ] for each 10-fold cross validation experiment , we report three evaluation metrics for the ` gang ' and ` non - gang ' classes , namely , the precision = , recall = , and -score = , where is the number of true positives , is the number of false positives , is the number of true negatives , and is the number of false negatives .we report these metrics for the positive ` gang ' and negative ` non - gang ' classes separately because of class imbalance in our dataset .table [ results : usertweetsuni ] presents the average precision , recall , and -score over the 10 folds for the single - feature and combined feature classifiers .the table includes , in braces ( ` \ { } ' ) , the number of gang and non - gang profiles that contain a particular feature type , and hence the number of profiles used for the 10-fold cross validation .it is reasonable to expect that _ any _ twitter profile is not that of a gang member , predicting a twitter user as a non - gang member is much easier than predicting a twitter user as a gang member .moreover false positive classifications of the ` gang ' class may be detrimental to law enforcement investigations , which may go awry as they surveil an innocent person based on the classifier s suggestion .we thus believe that a small false positive rate of the ` gang ' class to be an especially important evaluation metric .we say that a classifier is ` ideal ' if it demonstrates high precision , recall , and -score for the ` gang ' class while performing well on the ` non - gang ' class as well .the best performing classifier that considers single features is a random forest model over tweet features ( t ) , with a reasonable -score of 0.7229 for the ` gang ' class. it also features the highest -score for the ` non - gang ' class ( 0.9671 ) .its strong performance is intuitive given the striking differences in language as shown in figure [ fig : image2 ] and discussed in section [ sec : data_analysis_tweet_text ] .we also noted that music features offer promising results , with an -score of 0.6505 with a naive bayes classifier , as well as emoji features with an -score of 0.6067 also achieved by a naive bayes classifier .however , the use of profile data and image tags by themselves yield relatively poor -scores no matter which classifier considered. there may be two reasons for this despite the differences we observed in section [ sec : data_analysis ] .first , these two feature types did not generate a large number of specific features for learning .for example , descriptions are limited to just 160 characters per profile , leading to a limited number of unigrams ( in our dataset , 10 on average ) that can be used to train the classifiers .second , the profile images were tagged by a third party web service which is not specifically designed to identify gang hand signs , drugs and guns , which are often shared by gang members . this led to a small set of image tags in their profiles that were fairly generic , i.e., the image tags in figure [ fig : imagetags ] such as ` people ' , ` man ' , and ` adult ' . combining these diverse sets of features into a single classifier yields even better results . our results for _ model(1 ) _ show that the random forest achieves the highest -scores for both ` gang ' ( 0.7364 ) and ` non - gang ' ( 0.9690 ) classes _ and _ yields the best precision of 0.8792 , which corresponds to a low false positive rate when labeling a profile as a gang member . 
despite the fact that it has lower positive recall compared to the second best performing classifier ( a random forest trained over only tweet text features ( t ) ) , for this problem setting , we should be willing to increase the chance that a gang member will go unclassified if it means reducing the chance of applying a ` gang ' label to a non - gang member . when we tested _model(2 ) _ , a random forrest classifier achieved an -score of 0.7755 ( improvement of 7.28% with respect to the best performing single feature type classifier ( t ) ) for ` gang ' class with a precision of 0.8961 ( improvement of 6.26% with respect to ( t ) ) and a recall of 0.6994 ( improvement of 9.26% with respect to ( t ) ) . _model(2 ) _ thus outperforms _model(1 ) _ , and we expect its performance to improve with the availability of more training data with all feature types .we also tested the trained classifiers using a set of twitter profiles from a separate data collection process that may emulate the classifier s operation in a real - time setting . for this experiment, we captured real - time tweets from los angeles , ca and from ten south side , chicago neighborhoods that are known for gang - related activities using the twitter streaming api .we consider these areas with known gang presence on social media to ensure that some positive profiles would appear in our test set .we ultimately collected 24,162 twitter profiles : 15,662 from los angeles , and 8,500 from chicago .we populated data for each profile by using the 3,200 most recent tweets ( the maximum that can be collected from twitter s api ) for each profile . since the 24,162 profiles are far too many to label manually , we qualitatively study those profiles the classifier placed into the ` gang ' class .we used the training dataset to train our best performing random forest classifier ( which use all feature types ) and tested it on the test dataset .we then analyzed the twitter profiles that our classifier labeled as belonging to the ` gang ' class .each of those profiles had several features which overlap with gang members such as displaying hand signs and weapons in their profile images or in videos posted by them , gang names or gang - related hashtags in their profile descriptions , frequent use of curse words , and the use of terms such as _ my homie " _ to refer to self - identified gang members .representative tweets extracted from those profiles are depicted in figure [ testset_examples ] .the most frequent words found in tweets from those profiles were _ shit , nigga , got , bitch , go , fuck etc . _ and their user profiles had terms such as _ free , artist , shit , fuck , freedagang , and ripthefallen_.they had frequently used emojis such as _ face with tears of joy , hundred points symbol , fire , skull , money bag , and pistol_. for some profiles , it was less obvious that the classifier correctly identified a gang member .such profiles used the same emojis and curse words commonly found in gang members profiles , but their profile picture and tweet content was not indicative of a gang affiliation . 
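The evaluation machinery described above can be sketched with scikit-learn as follows. The feature matrix and labels are random placeholders standing in for the curated profiles (in the paper the columns would be the combined tweet-text, emoji, description, image-tag and YouTube features), so the printed numbers are meaningless; only the mechanics, stratified 10-fold cross validation, a random forest, and per-class precision, recall and F-score reported separately because of the class imbalance, follow the procedure in the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import precision_recall_fscore_support

# random placeholders: one row per profile, 1 = gang, 0 = non-gang
rng = np.random.default_rng(0)
X = rng.random((400, 50))
y = (rng.random(400) < 0.2).astype(int)            # imbalanced, 'gang' is the minority class

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_scores = []
for train_idx, test_idx in skf.split(X, y):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    y_pred = clf.predict(X[test_idx])
    # per-class metrics, reported separately because of the class imbalance
    p, r, f, _ = precision_recall_fscore_support(y[test_idx], y_pred,
                                                 labels=[1, 0], zero_division=0)
    fold_scores.append(np.stack([p, r, f]))

mean = np.mean(fold_scores, axis=0)
for j, name in enumerate(("gang", "non-gang")):
    print(f"{name:>8}: precision = {mean[0, j]:.4f}, recall = {mean[1, j]:.4f}, "
          f"F-score = {mean[2, j]:.4f}")
```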
in conclusion , we find that in a real - time - like setting , the classifier to be able to extract profiles with features that strongly suggest gang affiliation .of course , these profiles demand further investigation and extensive evidence from other sources in order to draw a concrete conclusion , especially in the context of a law enforcement investigation .we refrain from reporting any profile names or specific details about the profiles labeled as a ` gang ' member to comply with the applicable irb governing this human subject research .this paper presented an approach to address the problem of automatically identifying gang member profiles on twitter . despite the challenges in developing such automated systems , mainly due to difficulties in finding online gang member profiles for developing training datasets , we proposed an approach that uses features extracted from textual descriptions , emojis , images and videos shared on twitter ( textual features extracted from images , and videos ) . exploratory analysis of these types of features revealed interesting , and sometimes striking differences in the ways gang and non - gang members use twitter .classifiers trained over features that highlight these differences , were evaluated under 10-fold cross validation .our best classifier achieved a promising -score of 0.7755 over the ` gang ' profiles when all types of features were considered .future work will strengthen our training dataset by including more gang member twitter profiles by searching for more location - independent keywords .we also plan to develop our own image classification system specifically designed to classify images found on gang member profiles .we would also like to experiment with building dictionaries that contain gang names to understand whether _`` having a gang name in the profile description '' _ as a feature can improve our results .finally , we would also like to study how can we further improve our classifier models using word embeddings and social networks of known gang members .we are thankful to uday kiran yeda for helping us with data collection .we acknowledge partial support from the national science foundation ( nsf ) award : cns-1513721 : `` context - aware harassment detection on social media '' , national institutes of health ( nih ) award : mh105384 - 01a1 : `` modeling social behavior for healthcare utilization in depression '' and grant no . 2014-ps - psn-00006 awarded by the bureau of justice assistance .the bureau of justice assistance is a component of the u.s .department of justice s office of justice programs , which also includes the bureau of justice statistics , the national institute of justice , the office of juvenile justice and delinquency prevention , the office for victims of crime , and the smart office .points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the u.s .department of justice , nsf or nih . | most street gang members use twitter to intimidate others , to present outrageous images and statements to the world , and to share recent illegal activities . their tweets may thus be useful to law enforcement agencies to discover clues about recent crimes or to anticipate ones that may occur . finding these posts , however , requires a method to discover gang member twitter profiles . this is a challenging task since gang members represent a very small population of the 320 million twitter users . 
this paper studies the problem of automatically finding gang members on twitter . it outlines a process to curate one of the largest sets of verifiable gang member profiles that have ever been studied . a review of these profiles establishes differences in the language , images , youtube links , and emojis gang members use compared to the rest of the twitter population . features from this review are used to train a series of supervised classifiers . our classifier achieves a promising f - score of 0.7755 with a low false positive rate . _ keywords _ : street gangs , twitter profile identification , gang activity understanding , social media analysis |
we consider a line lightning protection device ( llpd ) consisting of a series of arc gaps between a power line and the ground . during normal grid operation ,the device works as an insulator since the voltage over the device is insufficient to cause dielectric breakdown and arcing . in the case of a lightning strike ,on the other hand , the large overvoltage will ignite a series of electric arcs in the device .this allows the ligthning current to be redirected to the ground , thereby protecting the insulation of the power line .a typical llpd is shown in fig .[ fig : streamer_device ] . an important characteristic of the llpd is its ability to quench the current following a lightning strike .experiments performed on real devices have consistently demonstrated two modes of current quenching : in the case of _ current zero quenching _ ( zq ) , current will flow through the device until it passes through zero due to the vanishing grid voltage .this results in arcing times of the order to 10 ms . in the case of _impulse quenching _ ( iq ) , the current is suppressed within less than 0.1 ms after the lightning strike , leading to significantly less erosion of the arcing chambers and the electrodes .the two modes of quenching are similar to the behavior of circuit breakers .the arc voltage in high - voltage breakers is much smaller than the grid voltage and the current will continue to flow unimpeded through the arc until the next current zero , corresponding to zq . in the case of low voltage breakers ,typically equipped with splitter plates , the total arc voltage is higher than the grid voltage and the arc current is limited , corresponding to iq .there is one very important difference between lightning protection devices and circuit breakers , however . in the case of a circuit breaker, the grid current has to be interrupted .the llpd , on the other hand , only has to transmit the current from the lightning pulse .any follow current due to the grid voltage is detrimental to the device and should be quenchend as quickly as possible .the purpose of this paper is to provide a physical explanation of the two modes of current quenching and to develop a simulation model suitable for virtual product development .we begin by looking at a simple circuit with the arc gap represented by the cassie - mayr arc model .a dimensional analysis of this model leads to a simple criterion for iq in terms of the grid voltage , the maximum follow current , and the power loss of the arc . in a second step ,we develop a complete 3d arc simulation based on the assumption of a thermal plasma and use it to simulate both zq and iq .this model allows us to examine the shape of the arc in the real arcing chamber geometry and to study the interaction of the arc with the circuit .the difference between impulse quenching and current zero quenching can be illustrated easily in the laboratory using the setup depicted in fig .[ fig : circuit - arc ] . the left part of the circuit represents the oscillating grid voltage and the right part represents the arcing device in series with the footing resistance of the tower and the ground .( 0,0 ) to[capacitor , l^= ( 0,4 ) ( 3,4 ) to[inductor , l^= ( 3,0 ) ( 0,0 ) ; ( 3,4 ) to [ resistor , l^= ] ( 7,4 ) to [ resistor , l^= ] ( 7,0 ) ( 3,0 ) ; the differential equations describing this system are and where represents the current through the coil and the current through the two resistances . 
the resistance due to arcing in the llpd is modelled by a simple form of the cassie - mayr equation , this model has the advantage that it describes the arc behavior using two physically meaningful parameters : is the intrinsic time scale of the arc , determining how fast it responds to a change in the current , and is the power loss of the arc due to convection and radiation .an often used approximation for the loss - function is to assume that the current density of the arc is constant , i.e. the radius is proportional to , and that the arc only loses energy from its surface .this leads to with .another way of understanding the importance of the exponent is to consider the stationary arc voltage from ( [ cmeq1 ] ) .for we obtain the important point is that that the arc shows _ negative differential resistance _ , meaning that the arc voltage decreases with increasing current for any . for ,the arc voltage is approximately independent of the current .the derivation below is valid for any and we shall use for illustration . before solving the circuit equations , it makes sense to write them in dimensionless form using the circuit parameters .the charging voltage of the capacitor determines the voltage level and the time scale is defined through the corresponding scale for the current follows from the unit for resistance will then be and the unit for power we write all physical quantities as a product of the physical scale and a dimensionless quantity according to with these definitions , the equations for the circuit are given by and [ eqdl3 ] where is a dimensionless constant determining power loss of the arc , i.e. , the loss - function is written as this new set of equations ( [ eqdl1])-([eqdl4 ] ) only contains dimensionless constants , and .typical examples of zero quenching ( zq ) and impulse quenching ( iq ) are illustrated in fig .[ fig : iq - zq ] .the arc is ignited by switching the value of from a very large constant value ( in our case ) to at time , where the capacitor is fully charged . in the case of zq, the arc will continue to burn until the grid voltage across the arc gap vanishes and the arc is extinguished . in the case of iq ,the grid voltage is insufficient to maintain the arc and it vanishes at a time scale of the order to . 
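To make the two regimes reproducible, the dimensionless circuit and arc equations can be integrated numerically as in the sketch below. The grid is represented by the charged capacitor and the inductor, the device by the arc conductance g, obeying a Cassie-Mayr equation with time constant theta and cooling power p0*|i|^b, in series with the footing resistance; this component arrangement, the exponent b = 0.5 and all numerical values are assumptions reconstructed from the description above, since the equations themselves are not fully reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_llpd(p0, b=0.5, theta=1e-3, r_ground=5.0, g_init=1e3, t_end=6.0):
    """Dimensionless LC 'grid' in parallel with (arc gap + footing resistance).
    State y = [u, i_l, ln_g]: capacitor voltage, inductor current, log arc conductance."""
    def rhs(t, y):
        u, i_l, ln_g = y
        g = np.exp(ln_g)
        i_r = u * g / (1.0 + g * r_ground)            # current through arc and footing resistance
        u_arc = u / (1.0 + g * r_ground)              # voltage across the arc gap
        p_loss = p0 * max(abs(i_r), 1e-12) ** b       # cooling power ~ p0 * |i|**b
        dln_g = (u_arc * i_r / p_loss - 1.0) / theta  # Cassie-Mayr conductance equation
        if ln_g < -60.0 and dln_g < 0.0:              # arc effectively extinguished
            dln_g = 0.0
        return [-(i_l + i_r), u, dln_g]               # capacitor, inductor, arc
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, np.log(g_init)],
                    max_step=1e-3, rtol=1e-6)
    u, i_l, ln_g = sol.y
    i_r = u * np.exp(ln_g) / (1.0 + np.exp(ln_g) * r_ground)
    return sol.t, i_r

for p0, label in ((0.02, "current-zero quenching"), (2.0, "impulse quenching")):
    t, i_r = simulate_llpd(p0)
    alive = np.flatnonzero(np.abs(i_r) > 0.01 * np.max(np.abs(i_r)))
    print(f"p0 = {p0}: {label}, follow current gone after t ~ {t[alive[-1]]:.3f}")
```

The two values of p0 are chosen to lie on either side of the product of grid voltage and maximum follow current, so that the small one lets the follow current persist on the time scale of the grid oscillation while the large one collapses it within a few arc time constants.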
[ cols="^,^ " , ]we have analyzed the two modes of current quenching appearing in llpds and provided a theoretical explanation for the phenomena .a dimensional analysis shows that if the cooling power of the arc is proportinal to the product of the grid voltage an the maximum follow current ( ) , we can guarantee impulse quenching with a very short follow current , leading to considerably less erosion of the the device .this has important consequences for applications .furthermore , we have demonstrated that quenching power of the arc can be simulated very well using fully resolved 3d simulations based on the magnetohydrodynamic equations for a thermal plasma .as expected , radiation is the main mechanism by which the arc loses energy .this means that the current - voltage characteristics of the arc are sensitive to the absorption spectrum of the plasma .more effort will have to go into the definition and computation of effective absorption coefficients for the plasma and this will be subject of an upcoming publication .in addition , more detailed modeling will also have to include the effects of contact erosion and ablation of wall material , changing both the pressure in the arcing chamber and the composition of the plasma .the 3d simulations of the arcing phenomenon will make it possible to design future llpds on the computer .10 g. v. podporkin , e.yu .enkin , e. s. kalakutsky , v. e. pilshikov , and a. d. sivaev .lightning protection of overhead lines rated at 335 kv and above with the help of multi - chamber arresters and insulator - arresters . in _ electromagnetic compatibility ( apemc ) , 2010 asia - pacific symposium on _ , pages 12471250 , 2010 .g. v. podporkin , e.yu .enkin , e. s. kalakutsky , v. e. pilshikov , and a. d. sivaev .development of multi - chamber insulator - arresters for lightning protection of 220 kv overhead transmission lines . in _ lightning protection( xi sipda ) , 2011 international symposium on _ , pages 160165 , 2011 . yongjoong lee , henrik nordborg , yongsug suh , and peter steimer .arc stability criteria in ac arc furnace and optimal converter topologies . in _pec 07 - twenty - second annual ieee applied power electronics conference and exposition _ ,pages 12801286 , 2007 . | we develop a consistent model for a line lightning protection device and demonstrate that this model can explain the two modes of current quenching impulse quenching and current zero quenching observed in such devices . a dimensional analysis shows that impulse quenching can always be obtained if the power loss from the electric arcs is large enough as compared to , where is the grid voltage and is the maximum follow current after a lightning strike . we further show that the two modes of quenching can be reproduced in a full 3d arc simulations coupled to the appropriate circuit model . this means the arc simulations can be used for optimization and development of future llpds . june 2016 _ keywords _ : arc simulation , lightning protection , current quenching , radiation |
the applications of pedestrians dynamics range from the safety of large events to the planning of towns with a view to pedestrian comfort . because of the computational effort involved with an experimental analysis of the complex collective system of pedestrians behavior , computer simulations are run .models continuous in space are one possibility to describe this complex collective system . in developing a model , we prefer to start with the simplest case : single lane movement .if the model is able to reproduce reality quantitatively and qualitatively for that simple case , it is a good candidate for adaption to complex two - dimensional conditions . also in single file movement pedestrians interact in many ways and not all factors , which have an effect on their behavior , are known .therefore , we follow three different modeling approaches in this work . all of them underlie diverse concepts in the simulation of human behavior .+ this study is a continuation and enlargement of the validation introduced in . for validation , we introduce two criteria : on the one hand, the relation between velocity and density has to be described correctly .this requirement is fulfilled , if the modeled data reproduce the fundamental diagram . on the other hand, we are aiming to reproduce the appearance of collective effects . a characteristic effect for the single file movement are stop - and - go waves as they are observed in experiments .we obtained all empirical data from several experiments of the single file movement .there a corridor with circular guiding was built , so that it possessed periodic boundary conditions .the density was varied by increasing the number of the involved pedestrians . for more information about the experimental set - up ,see , .the first two models investigated are force based and the dynamics are given by the following system of coupled differential equations where is the force acting on pedestrian .the mass is denoted by , the velocity by and the current position by .this approach derives from sociology . herepsychological forces define the movement of humans through their living space .this approach is transferred to pedestrians dynamics and so is split to a repulsive force and a driven force . in our casethe driving force is defined as where is the desired velocity of a pedestrian and their reaction time .the other model is event driven .a pedestrian can be in different states . a change between these statesis called event .the calculation of the velocity of each pedestrian is straightforward and depends on these states .the first spatially continuous model was developed by helbing and molnr and has often been modified . according to the repulsive force for pedestrian is defined by with , \tau=0.2\ , [ s] ] . in thismodel pedestrians possess a degree of foresight , in addition to the current state of a pedestrian at time step .this approach considers an extrapolation of the velocity to time step . for it employs the linear relation between the velocity and the distance of a pedestrian to the one in front . for ] this reproduces the empirical data .so with ( [ v(d ) ] ) can be calculated from which itself is a result of the extrapolation of the current state with .finally , the repulsive force is defined as obviously the impact of the desired velocity in the driven force is negated by the one in the repulsive term . after some simulation time , the system reaches an equilibrium in which all pedestrians walk with the same velocity . 
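As a concrete illustration of how such single-lane models are set up and compared against the fundamental diagram, the sketch below implements a single-file loop with periodic boundary conditions in which each pedestrian relaxes towards a headway-dependent speed given by a linear v(d) relation, in the spirit of the foresight and adaptive-velocity ideas discussed here; it is not the authors' implementation, and the relation parameters (d0 = 0.36 m, T = 1.06 s), the corridor length and the relaxation time are illustrative assumptions.

```python
import numpy as np

def single_file_loop(n_ped, circumference=25.0, v0=1.2, tau=0.5,
                     d0=0.36, T=1.06, dt=0.05, t_end=400.0, seed=0):
    """Single-file ring: each pedestrian relaxes towards a headway-dependent speed
    v_target = min(v0, max(0, (gap - d0)/T)), a linear v(d) relation in the spirit
    of the foresight / adaptive-velocity ideas (not the authors' exact schemes)."""
    rng = np.random.default_rng(seed)
    x = np.sort(rng.uniform(0.0, circumference, n_ped))
    v = np.zeros(n_ped)
    speeds = []
    for step in range(int(t_end / dt)):
        gap = (np.roll(x, -1) - x) % circumference          # headway to the predecessor
        v_target = np.clip((gap - d0) / T, 0.0, v0)         # headway-dependent target speed
        v += (v_target - v) / tau * dt                      # relaxation with time constant tau
        dx = np.minimum(v * dt, 0.9 * gap)                  # never step past the predecessor
        x = (x + dx) % circumference
        if step * dt > 0.5 * t_end:                         # discard the transient
            speeds.append(v.mean())
    return np.mean(speeds)

# mean speed versus global line density on the ring (fundamental diagram)
for n in (14, 25, 45, 62):
    print(f"{n / 25.0:4.2f} ped/m -> {single_file_loop(n):.2f} m/s")
```

The mean speeds printed for the four occupation numbers trace out a fundamental diagram that can be set against the measured one.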
in order to spread the values and keep the right relation between velocity and density , we added a fluctuation parameter . distributed in the interval ] and ] .we use the same method of measurement for the modeled and empirical data .the fundamental diagrams are displayed in fig .[ fd ] , where is the number of the pedestrians .the velocities of the social force model are independent of the systems density and nearly equal to the desired velocity ] to the model with foresight a good agreement with experimentis obtained for densities of one and two persons per meter .the irregularities caused by this parameter are equal to the irregularities of the pedestrians dynamic .nevertheless , this does not suffice for stopping so that stop - and - go waves appear , fig.[vor2 ] and fig.[vor3 ] . with the adaptive velocity modelstop - and - go waves already arise at a density of one pedestrian per meter , something that is not seen in experimental data .however , this model characterizes higher densities well . so in comparison with fig .[ ed3 ] and fig .[ emp3 ] the stopping - phase of the modeled data seems to last for the same time as in the empirical data .but there are clearly differences in the acceleration phase , the adaptive velocity models acceleration is much lower than seen in experiment .finally other studies of stop - and - go waves have to be carried out .the occurrence of this phenomena has to be clearly understood for further model modifications .therefore it is necessarry to measure e. g. the size of the stop - and - go wave at a fixed position .unfortunately it is not possible to measure over a time interval , because the empirical trajectories are only available in a specific range of 4 meters .the well - known and often used social force model is unable to reproduce the fundamental diagram . the model with foresight provides a good quantitative reproduction of the fundamental diagram .however , it has to be modified further , so that stop - and - go waves could be generated as well .the model with adaptive velocities follows a simple and effective event driven approach . with the included reaction time , it is possible to create stop - and - go waves without unrealistic phenomena , like overlapping or interpenetrating pedestrians .all models are implemented in c and run on a simple pc .they were also tested for their computing time in case of large system with upto 10000 pedestrians .the social force model offers a complexity level of , whereas the other models only have a level of . for this reason the social force model is not qualified for modeling such large systems .both other models are able to do this , where the maximal computing time is one sixth of the simulated time . in the future, we plan to include steering of pedestrians . for these modelsmore criteria , like the reproduction of flow characteristics at bottlenecks , are necessarry .further we are trying to get a deeper insight into to occurrence of stop - and - go waves . | several spatially continuous pedestrian dynamics models have been validated against empirical data . we try to reproduce the experimental fundamental diagram ( velocity versus density ) with simulations . in addition to this quantitative criterion , we tried to reproduce stop - and - go waves as a qualitative criterion . stop - and - go waves are a characteristic phenomenon for the single file movement . only one of three investigated models satisfies both criteria . |
cooperation in cellular networks has been recently suggested as a promising scheme to improve system performance , especially for cell - edge users , . in this work ,we study a cooperation model where the positions of base stations ( bss ) follow a poisson point process ( p.p.p . )distribution and where voronoi cells define the planar areas associated with them .the approach is known to provide closed ( or integral ) form expressions for important performance metrics averaged out over such large random networks .we consider here a specific scheme of cooperation with coherent transmission , where at most two bss can serve a single user .the scheme is based on the ideas of conferencing and user - power splitting .it originates from the seminal work of willems for the cooperative multiple access channel ( mac ) and its gaussian variant in .an adaptation of the idea in a network - mimo setting is proposed by zakhour and gesbert for the model in , where the beamforming vectors take the role of signal weights and define the power - split ratios .our problem position within the framework of stochastic geometry , extends previous work by andrews , baccelli and ganti to include cooperative schemes .more specifically , for the service of each user , either one or two bss are involved ( section [ sii ] ) .if two , these cooperate by exchange of information with conferencing over some backhaul link .we assume that , in addition to the user data , only part of the total channel state is available and exchanged .we choose to analyze schemes with partial knowledge in order to investigate possible benefits of cooperation without the excessive costs from full channel adaptation .specifically , only the phase of the channel from the second closest transmitter to the user is known , while the first bs is informed over this and transmits coherently by appropriate choice of its own phase .the scheme considers a fixed transmission power budget per user - which is split between the two bss and a common message is encoded .the two transmitted common signals add up in - phase at the user receiver , resulting in an extra term for the beneficial signal ( section [ siii ] ) . later in the work , extra knowledge of the interference from the second bs neighbour to the reference useris considered available .the cooperative pair then transmits orthogonaly to this signal by application of dirty paper coding ( section [ sivd ] ) .each user chooses on its own to be served with or without bs cooperation .the decision is directed by a family of geometric policies , which depend on the ratio of the user distance to its first and second closest geographic bs neighbour and some design parameter , left as optimization variable ( section [ siv ] ) . 
in this waythe plane is divided into non - overlapping zones of cooperation ( ) and no cooperation ( ) .an exact expression of the coverage probability in the network under study is derived ( section [ sv ] ) .numerical evaluation allows one to analyze coverage benefits compared to the non - cooperative case .these benefits are more important with dirty paper coding ( section [ svi ] ) .it is concluded ( section [ svii ] ) that cooperation can significantly improve coverage without exploitation of further network resources ( frequency , time , power ) , even with schemes of reduced channel state information .this work continues the line of research on applications of point processes to wireless networks and takes a step further to consider cooperative transmission in cellular networks . during the last yearsimportant results have been derived by use of stochastic geometry tools .the theoretical aspects of point processes and its applications in telecommunications can be found in the book of baccelli and baszczyszyn .important contributions on the capacity of wireless networks appear in the book of weber and andrews in and for k - tier networks in the work of dhillon et al , as well as keeler , baszczyszyn and karray in .haenggi and ganti have investigated the modeling and analysis of network interference by use of point processes . in the problem of cooperation , parallel to our work akoum andheath have provided approximative performance evaluation when bss are randomly grouped into cooperative clusters . the transmission scheme they consider is intercell interference nulling .their approach differs from our work , because we consider optimal geographic association of each user with a pair of bss and coherent transmission ( after conferencing ) from the latter .all proofs of the analysis in this paper can be found in our extended version .for the model under study the bss considered are equipped with a single antenna and are positioned at the locations of atoms from the realization of a planar p.p.p . with intensity , denoted by .a planar tessellation , called the 1-voronoi diagram , partitions ( up to lebesgue measure zero ) the plane into subregions called cells .the * 1-voronoi cell * associated with is the locus of all points in which are closer to than to any other atom of .we consider euclidean distance .then when the 1-voronoi cells of two atoms share a common edge , they constitute * delaunay neighbours * .a dual graph of the 1-voronoi tessellation called the delaunay graph is constructed if delaunay neighbours are connected by an edge .the 2-voronoi diagram consitutes another partition ( up to lebesgue measure zero ) of the plane . specifically , the * 2-voronoi cell * associated with , is the locus of all points in closer to than to any other atom of , i.e. the 1-voronoi tesselation for randomly positioned atoms is shown in fig . [fig : examplecoop04 ] and [ fig : examplecoop09 ] . inboth , the coloured area depicts a 2-voronoi example region . in the present work we consider a geometric cooperation scenario based on the following assumptions : * each bsis connected via backhaul links of infinite capacity with all its delaunay neighbours . *exactly one user with a single antenna is initially associated with every bs .each user is located randomly at some point within the 1-voronoi cell of its bs and we write . *each user may be served by either one or two bss . if two , these correspond to the atoms of which are its * first * and * second closest geographic neighbour*. 
if one , it is just the first closest neighbour . we use the notation and when referring to these neighbours . by definition the user belongs to . * from the point of view of a bs located at , we refer to the user in its 1-voronoi cell as the * primary user * and to all other users served by it but located outside the cell as the * secondary users * . these constitute a set , with cardinality that ranges between zero and the number of delaunay neighbours , depending on the users position relative to . the distance between user and its first ( resp . second ) closest bs neighbour equals ( ) . the second nearest bs can only be one of the delaunay neighbours of . the communications scenario in this work applies the following idea . when two bss cooperatively serve a user in the downlink , its signal is split into one common part served by both , and private parts served by each one of the involved bss . the common part contains information shared by both transmitters after communication between them over a reliable conferencing link . more specifically , for each primary user located within the 1-voronoi cell of atom , consider a signal to be transmitted . the user signals are independent realizations of some random process with power $p > 0$ . the signal for user in the downlink is split in two parts : * a * private ( pr ) * part sent to from its first bs neighbour , denoted by . the second neighbour does not have a private part to send . * a * common ( c ) * part served by both and , which is denoted by . this part is communicated to both bss over the backhaul links . for the sake of clarity , we will use the notation , for the common signal transmitted from and respectively , although the two signals are actually scaled versions of each other . the two parts are uncorrelated random variables ( r.v.s ) , in other words $\mathbb{e}\left[ s_i^{(pr)} \left( s_i^{(c)} \right)^{*} \right] = 0$ . the power is divided between them according to a parameter $a_i \in [ 0 , 1/2 ]$ , named the * power - split ratio * , so that $\mathbb{e}\left[ \left| s_i^{(pr)} \right|^2 \right] = \left( 1 - 2a_i \right) p , \quad \mathbb{e}\left[ \left| s_i^{(c1)} \right|^2 \right] = \mathbb{e}\left[ \left| s_i^{(c2)} \right|^2 \right] = a_i p$ . each bs transmits a total signal . by applying superposition coding , this signal consists of the private and common part for its primary user and the common parts for all its secondary users , . the bs signals propagate through the wireless link to reach the users . this process degrades the signal power of bs received at the location of user , by a factor which depends on the distance and by the power of a * complex valued * random fading component ( is the _ imaginary number _ , not to be confused with the _ index _ used always as subscript ) . the fading power is an independent realization of a unit - mean exponential random variable and the phase is an independent realization of a uniform random variable on . we denote the total gain from the first ( resp . second ) neighbour ( ) to user by ( ) and the total gain from the first ( resp . second ) neighbour ( ) of some other user , to user by ( ) , with . the related channel phases are ( ) and ( ) . the total signal received at user is where . the noise is a realization of the r.v . , which follows the normal distribution . in the above , the signal sum over is the interference received by user . the beneficial signal received at the user location is equal to and a similar term with the adequate indexing appears for each interference term due to user . the term with the is an extra term which is related to the phases of the channels from the first and second neighbour and can be positive or negative depending on the phase difference .
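to make the role of this phase - dependent term concrete , the python sketch below estimates the average received useful power under the power split described above , with and without phase alignment of the two common signals . the amplitude path - loss form r ** ( - beta / 2 ) , the parameter values and the symbol names are illustrative assumptions of ours , and interference is deliberately ignored .

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_useful_power(a, p, r1, r2, beta, align_phase, n_trials=100_000):
    """monte carlo mean of the useful received power when a fraction (1 - 2a)
    of the budget p is sent privately from bs1 and a*p is sent as a common
    part from each of bs1 and bs2 (illustrative model)."""
    # complex gains: unit-mean exponential power, uniform phase
    g1 = np.sqrt(rng.exponential(1.0, n_trials)) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_trials))
    g2 = np.sqrt(rng.exponential(1.0, n_trials)) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_trials))
    if align_phase:
        # bs1 is told the phase of the second link and pre-rotates its common
        # signal so that the two common parts add up in phase at the receiver
        c1, c2 = np.abs(g1), np.abs(g2)
    else:
        c1, c2 = g1, g2
    p_private = (1.0 - 2.0 * a) * p * np.abs(r1 ** (-beta / 2) * g1) ** 2
    p_common = a * p * np.abs(r1 ** (-beta / 2) * c1 + r2 ** (-beta / 2) * c2) ** 2
    return np.mean(p_private + p_common)

for align in (False, True):
    print("aligned" if align else "random ",
          round(mean_useful_power(a=0.3, p=1.0, r1=0.4, r2=0.6, beta=4.0, align_phase=align), 3))
```

with alignment the cross term between the two common signals is always positive , which is exactly the extra beneficial term discussed above ; with random phases it averages out .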
by controlling this term we can maximize the received beneficial signal . _ if the phase is known and communicated to the first neighbour _ , the latter can choose to transmit with . as a result the extra term is maximized , since . the same action cannot be applied to the interference terms . the control affects only the primary user of each bs . the emitted signal for some user is interference for some other with a random fading phase in . the expected value of the interference terms is equal to 0 . the policies are * user - defined * and * geometric * because the choice to cooperate or not depends on the relative position of each user to its two closest bss . the ratio ] . [ lemjointd ] using the p.d.f . above and the geometric policies , it can be shown that the parameter ] . we further provide an interesting property of the r.v . of the fading for in relation to the r.v . , which is the case for . we use the notion of the _ laplace - stieltjes transform ordering _ , based on which the r.v . dominates the r.v . and we write , if , for all . [ laplaceord ] given and the two r.v.s and from ( [ rvz ] ) , it holds . we describe the interference as a shot - noise field ( see ch . 2 of ) generated by a point process outside a ball of radius . we consider all the power splitting decisions of primary users ( either or ) related to bss with distance from the origin . the decisions are determined by the value of the global parameter of the geometric policies . the interference received at is equal to we associate each bs with a r.v . such that ( from lemma [ lemrho ] ) this r.v . models the randomness of user position within each 1-voronoi cell , which further determines the action chosen for this user , depending on the ratio . * if , an independent mark is associated with the bs . the mark is equal to . the signal has to traverse a distance of from the closest neighbour of user . * if , an independent mark is associated with the bs . this is the case of full cooperation , where the interfering signal due to user is coherently emitted from its two closest neighbours . here , we make the * far field approximation * , so that the distances of the two cooperating atoms to the typical location are treated as equal . based on this approximation , bss with primary users requiring , are considered to emit the entire signal , . related to the mark follows the exponential - or equivalently distribution with expected value , whereas the r.v . related to follows the , with the same expected value . in other words , the path - loss of the interfering signals is in expectation equal in both cases . the interference r.v . is given by the first two terms come from the interference created by the second neighbour lying on the boundary of the ball . [ thlapli ] the lt of the interference r.v .
for the model under study , with exponential fast - fading power , is equal to where the expected value for the interference r.v . is equal to $\frac{p}{\left( \beta - 2 \right) r_2^{\beta}} \left( \beta - 2 + 2\pi\lambda r_2^{2} \right)$ , and is independent of the parameter . the second geographic bs neighbour is the dominant factor of interference , due to its proximity to the typical location . a mark ( either or ) will be strong at the point . in this subsection we will consider an ideal scenario , where the interference created by the second closest bs can be cancelled out perfectly in the case of full cooperation , by means of coding . this requires precise * knowledge * by the first neighbour of the interfering signal from the primary user and all possible secondary users served by , which is extra information for the system . if such information is available , the encoding procedure for the typical location can be projected on the signal space of , achievable by dirty paper coding ( dpc , see ) , so that the effect of on interference is eliminated . in the expression ( [ sinro2 ] ) we can then substitute the variable by , for the case of . if chooses , the elimination is not possible . the new r.v . is derived by just omitting the interference part from the second closest bs neighbour . its lt is equal to where is given in ( [ eachid ] ) . the expected value of can be shown to be less than ] is equal to ( [ tintall ] ) . in this expression , is given in ( [ ltinterfd ] ) , ( [ eachid ] ) and in ( [ ltz ] ) . for the case of dirty paper coding , should be replaced by given in ( [ ltinterfdcancel ] ) . we have numerically evaluated the integrals in ( [ tintall ] ) for parameter density , path - loss exponent , per - user power and noise . the two cases without and with dpc are shown in fig . [ fig : nodpcf ] and fig . [ fig : dpc ] respectively . both figures also include plots from simulations , so that the validity of our approximations can be guaranteed . the simulation results show the coverage probability , taken as an average of 10,000 realizations of different random bs topologies with expected number of atoms equal to 20 , uniformly positioned in an area of . in each figure , three curves are produced for the numerical evaluation and three for the simulations : ( a ) coverage with everywhere , ( b ) coverage with everywhere , ( c ) coverage with optimal . in ( c ) the parameter is optimally chosen , so that the coverage is maximized . fig . [ fig : nodpcf ] illustrates that cooperation with some can be optimal for , whereas ( everywhere ) is always optimal for . the maximum gain in coverage with cooperation is between to . the case of dpc for the interference from gives more substantial coverage gains , as shown in fig . [ fig : dpc ] . the gains appear in the entire domain of and reach a maximum of for . the figures illustrate the fact that cooperation becomes more beneficial to the network when more information is exploited . another important quantitative benefit , not visible in the total coverage probability evaluation we show here , is that the coverage area shapes change in favor of cell - edge users while reduces from to .
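the kind of simulation just described can be reproduced , in a much simplified form , with the short monte carlo sketch below ( python ) . it estimates coverage p ( sinr > t ) for a user at the origin , once with single - bs service and once with full pairwise cooperation and equal power split ; treating every interfering bs as transmitting at full power , ignoring the mark structure of the interference and the choice of the threshold t are simplifications and assumptions of ours , so the numbers are only indicative .

```python
import numpy as np

rng = np.random.default_rng(2)

def coverage(coop, lam=1.0, beta=4.0, p=1.0, noise=1e-9, T=1.0,
             radius=6.0, n_iter=10_000):
    """toy estimate of P(SINR > T) at the origin of a poisson field of bss."""
    hits = 0
    for _ in range(n_iter):
        n_bs = rng.poisson(lam * np.pi * radius ** 2)
        if n_bs < 2:
            continue
        r = radius * np.sqrt(rng.uniform(size=n_bs))   # distances to the origin
        r.sort()
        g = rng.exponential(1.0, n_bs)                 # exponential fading powers
        if coop:
            # the two nearest bss split the power equally and add coherently
            signal = 0.5 * p * (np.sqrt(g[0]) * r[0] ** (-beta / 2)
                                + np.sqrt(g[1]) * r[1] ** (-beta / 2)) ** 2
            interference = np.sum(p * g[2:] * r[2:] ** (-beta))
        else:
            signal = p * g[0] * r[0] ** (-beta)
            interference = np.sum(p * g[1:] * r[1:] ** (-beta))
        hits += signal / (noise + interference) > T
    return hits / n_iter

print("no cooperation  :", coverage(coop=False))
print("full cooperation:", coverage(coop=True))
```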
in the present work , we evaluated the coverage probability of a cellular network , when bss can cooperate pairwise for the service of each user . for this we have applied tools from stochastic geometry . cooperation is understood here as coherent transmission of a common message to the user from its two closest bss . this message is exchanged between them by conferencing , together with the value of the channel phase from the second neighbour . the user can choose between or based on the suggested geometric policies . closed form expressions for the coverage probability have been derived , whose evaluation quantifies the benefits resulting from cooperation . the benefits are larger when dpc , to avoid second neighbour interference , is applied . | we study a cooperation model where the positions of base stations follow a poisson point process distribution and where voronoi cells define the planar areas associated with them . for the service of each user , either one or two base stations are involved . if two , these cooperate by exchange of user data and reduced channel information ( channel phase , second neighbour interference ) with conferencing over some backhaul link . the total user transmission power is split between them and a common message is encoded , which is coherently transmitted by the stations . the decision for a user to choose service with or without cooperation is directed by a family of geometric policies . the suggested policies further control the shape of coverage contours in favor of cell - edge areas . analytic expressions based on stochastic geometry are derived for the coverage probability in the network . their numerical evaluation shows benefits from cooperation , which are enhanced when dirty paper coding is applied to eliminate the second neighbour interference . |
kernel techniques are now a standard tool of statistical practice and kernel versions of many methods of classical multivariate statistics have now been created .a few important examples can be found in ( see the description of kernel pca , pages 4145 ) and ( for kernel ica ) , for instance .there are several ways to describe kernel methods , but one of them is to think of them as classical multivariate techniques using generalized notions of inner - product . a basic input in these techniques is a kernel matrix , that is , an inner - product ( or gram ) matrix , for generalized inner - products . if our vectors of observations are , the kernel matrices studied in this paper have entry or , for a certain .popular examples include the gaussian kernel [ entries , the sigmoid kernel [ entries and polynomial kernels [ entries .we refer to for more examples .as explained in , for instance , , kernel techniques allow practitioners to essentially do multivariate analysis in infinite - dimensional spaces , by embedding the data in a infinite - dimensional space through the use of the kernel .a nice numerical feature is that the embedding need not be specified , and all computations can be made using the finite - dimensional kernel matrix .kernel techniques also allow users to do certain forms of nonlinear data analysis and dimensionality reduction , which is naturally very desirable . and are two interesting relatively recent papers concerned broadly speaking with the same types of inferential questions we have in mind and investigate in this paper , though the settings of these papers is quite different from the one we will work under .kernel matrices and the closely related laplacian matrices also play a central role in manifold learning [ see , e.g. , and for an overview of various techniques ] . in `` classical '' statistics , they have been a mainstay of spatial statistics and geostatistics in particular [ see ] . in geostatistical applications , it is clear that the dimension of the data is at most 3 . also , in applications of kernel techniques and manifold learning , it is often assumed that the data live on a low - dimensional manifold or structure , the kernel approach allowing us to somehow recover ( at least partially ) this information .consequently , most theoretical analyses of kernel matrices and kernel or manifold learning techniques have focused on situations where the data is assumed to live on such a low - dimensional structure . in particular , it is often the case that asymptotics are studied under the assumption that the data is i.i.d . from a fixed distribution independent of the number of points .some remarkable results have been obtained in this setting [ see and also ] .let us give a brief overview of such results .in , the authors prove that if are i.i.d . 
with distribution , under regularity conditions on the kernel , the largest eigenvalue of the kernel matrix , with entries converges to the largest eigenvalue of the operator defined as in this important paper , the authors were also able to obtain fluctuation behavior for these eigenvalues , under certain technical conditions [ see theorem 5.1 in ] .similar first - order convergence results were obtained , at a heuristic level but through interesting arguments , in .these results gave theoretical confirmation to practitioners intuition and heuristics that the kernel matrix could be used as a good proxy for the operator on , and hence kernel techniques could be explained and justified through the spectral properties of this operator .to statisticians well versed in the theory of random matrices , this set of results appears to be similar to results for low - dimensional covariance matrices stating that when the dimension of the data is fixed and the number of observations goes to infinity , the sample covariance matrix is a spectrally consistent estimator of the population covariance matrix [ see , e.g. , ] .however , it is well known [ see , e.g. , , , ] that this is not the case when the dimension of the data , , changes with , the number of observations , and in particular when asymptotics are studied under the assumption that has a finite limit .we refer to the asymptotic setting where and both tend to infinity as the `` high - dimensional '' setting .we note that given that more and more datasets have observations that are high dimensional , and kernel techniques are used on some of them [ see ] , it is natural to study kernel random matrices in the high - dimensional setting .another important reason to study this type of asymptotics is that by keeping track of the effect of the dimension of the data , , and of other parameters of the problem on the results , they might help us give more accurate prediction about the finite - dimensional behavior of certain statistics than the classical `` small , large '' asymptotics .an example of this phenomenon can be found in the paper where it turned out in simulation that some of the doubly asymptotic results concerning fluctuation behavior of the largest eigenvalue of a wishart matrix with identity covariance are quite accurate for and as small as 5 or 10 , at least in the right tail of the distribution .[ we refer the interested reader to for more details on the specific example we just described . ]hence , it is also potentially practically important to carry out these theoretical studies for they can be informative even for finite - dimensional considerations . the properties of kernel random matrices under classical random matrix assumptionshave been studied by the author in the recent .it was shown there that when the data is high dimensional , for instance , and the operator norm of is , for example , bounded , kernel random matrices essentially act like standard gram/``covariance matrices , '' up to recentering and rescaling , which depend only on .naturally , a certain scaling is needed to make the problem nondegenerate , and the results we just stated hold , for instance , when , for otherwise the kernel matrix is in general degenerate .we refer to for more details and discussions of the relevance of these results in practice . in limited simulations, we found that the theory agreed with the numerics even when was of the order of several 10 s and was not `` too small '' ( e.g. 
, ) .these results came as somewhat of a surprise and seemed to contradict the intuition and numerous positive practical results that have been obtained , since they suggested that the kernel matrices we considered were just a ( centered and scaled ) version of the matrix .however , it should be noted that the assumptions implied that the data was truly high dimensional .so an interesting middle ground , from modeling , theoretical and practical points of view is the following : what happens if the data does not live exactly on a fixed - dimensional manifold , but lives `` nearby ? '' in other words , the data is now sampled from a `` noisy '' version of the manifold .this is the question we study in this paper .we assume now that the data points we observe are of the form where is the `` signal '' part of the observations ( and live , for instance , on a low - dimensional manifold , e.g. , a three - dimensional sphere ) and is the noise part of the observations ( and is , e.g. , multivariate gaussian in dimension , where might be 100 ) .we think this is interesting from a practical standpoint because the assumption that the data is exactly on a manifold is perhaps a bit optimistic and the `` noisy manifold '' version is perhaps more in line with what statisticians expect to encounter in practice ( there is a clear analogy with linear regression here ) . from a theoretical standpoint ,such a model allows us to bridge the two extremes between truly low - dimensional data and fully high - dimensional data . from a modeling standpoint, we propose to scale the noise so that its norm stays bounded ( or does not grow too fast ) in the asymptotics .that way , the `` signal '' part of the data is likely to be affected but not totally drowned by the noise .it is important to note , however , that the noise is not `` small '' in any sense of the word it is of a size comparable with that of the signal . in the case of spherical noise( see below for details but note that the gaussian distribution falls into this category ) our results say that , to first - order , the kernel matrix computed from information data behaves like a kernel matrix computed from the `` signal '' part of the data , but , we might have to use a different kernel than the one we started with .this other kernel is quite explicit . in the case of dot - product kernel matrices [ i.e. , , the original kernel can be used ( under certain assumptions)so , to first - order , the noise part has no effect on the spectral properties of the kernel matrix .the results are different when looking at euclidean distance kernels [ i.e. , where the effect of the noise is basically to change the kernel that is used .this is in any case a quite positive result in that it says that the whole body of work concerning the behavior of kernel random matrices with low - dimensional input data can be used to also study the `` information '' case the only change being a change of kernels .the case of elliptical noise is more complicated .the dot - product kernels results still have the same interpretation .but the euclidean distance kernels results are not as easy to interpret .before we start , we set some notation . we use to denote the frobenius norm of the matrix [ so and to denote its operator norm , that is , its largest singular value .we also use to denote the euclidean norm of the vector . is shorthand for . 
unless otherwise noted , functions that are said to be lipschitz are lipschitz with respect to euclidean norm .we split our results into two parts , according to distributional assumptions on the noise .one deals with the gaussian - like case , which allows us to give a simple proof of the results .the second part is about the case where the noise has a distribution that satisfies certain concentration and ellipticity properties .this is more general and brings the geometry of the problem forward .it also allows us to study the robustness ( and lack thereof ) of the results to the sphericity of the noise , an assumption that is implicit in the high - dimensional gaussian ( and gaussian - like ) case .we draw some practical conclusions from our results for the case of spherical noise in section [ subsec : practicalconsequences ] .we first study a setting where the noise is drawn according to a distribution that is similar to a gaussian , but slightly more general .[ thm : infoplusnoisegaussiancase ] suppose we observe data in , with where where the -dimensional vector has i.i.d .entries with mean 0 , variance 1 , and fourth moment , and .we assume that there exists a deterministic vector and a real , possibly dependent on , such that . also , might change with but is assumed to remain bounded . are i.i.d . , and we also assume that and are independent .we consider the random matrices with entry where let us call .let be the matrix with entry assuming only that is bounded uniformly in , we have , for a constant independent of , and , .\ ] ] we place ourselves in the high - dimensional setting where and tend to infinity .we assume that , as tends to infinity . under these assumptions , for any fixed and , if we further assume that remains , for instance , bounded , the same result holds if we replace the diagonal of by , because and therefore .the approximating matrix we then get is the matrix with entry , where , that is , a `` pure signal '' matrix involving a different kernel from the one with which we started .we note that there is a potential measurability issue that we address in the proof .our theorem really means that we can find a random variable that dominates the `` random element '' and goes to 0 in probability .( this measurability issue could also be addressed through separability arguments but outer - probability statements suffice for our purposes in this paper . 
)a subcase of our result is the case of gaussian noise : then is and our result naturally applies .we also note that can change with .the class of functions we consider is fixed in the last statement of the theorem but if we were to look at a sequence of kernels we could pick a different function in the class for each [ the proof also applies to matrices with entries , where the functions considered also depend on , but we present the results with a function common to all entries ] .it should also be noted that the proof technique allows us to deal with classes of functions that vary with : we could have a varying .as ( [ eq : explicitcontrolerrorgaussianlikecase ] ) makes clear , the approximation result will hold as soon as the right - hand side of ( [ eq : explicitcontrolerrorgaussianlikecase ] ) goes to 0 asymptotically , that is , .finally , we work here with uniformly lipschitz functions .the proof technique carries over to other classes , such as certain classes of hlder functions , but the bounds would be different .proof of theorem [ thm : infoplusnoisegaussiancase ] the strategy is to use the same entry - wise expansion approach that was used in .to do so , we remark that remains essentially constant [ across ] in the setting we are considering this is a consequence of the `` spherical '' nature of high - dimensional gaussian distributions .we can therefore try to approximate by and all we need to do is to show that the remainder is small .we also note that if , as we assume , , then , since .- _ work conditional on _ , _ for _ .we clearly have let us study the various parts of this expansion .conditional on , if we call , we see easily that and note that , which we denote , has i.i.d .entries , with mean 0 , variance and fourth moment .we call and with this notation , we have therefore , for any function in , and hence , ^ 2 \leq2 c_0(n)^2 [ \beta_{i ,j}^2 + 4\alpha_{i , j}^2 ] .\ ] ] we naturally also have ^ 2 \leq2 c_0(n)^2 [ \beta _ { i , j}^2 + 4\alpha_{i , j}^2 ] .\ ] ] so we have found a random variable ] .one might be concerned about the measurability of by using outer expectations [ see , page 258 ] , we can completely bypass this potential problem . inwhat follows , we denote by an outer expectation .( though this technical point does not shed further light on the problem , it naturally needs to be addressed . ) hence , let us focus on for a moment .let us call .we first note that .in particular , so .therefore , .now recall the results found , for instance , in lemma a-1 in : if the vector has i.i.d .entries with mean 0 , variance and fourth moment , and if is a symmetric matrix , where is the hadamard product of with itself , that is , the entrywise product of two matrices . applying this result in our setting [ i.e. , using the moments ( given above ) of , which has i.i.d .entries , in the previous formula ] gives it is easy to see that , since and . therefore , we note that under our assumptions on and the fact that remains bounded in ( and therefore ) , this term will go to 0 as . on the other hand , because , and because and , we have hence , we have for a constant independent of , and , .\end{aligned}\ ] ] this inequality allows us to conclude that , for another constant , , \ ] ] since clearly , under the assumption that exists and is less than , we finally conclude that ,\ ] ] and ( [ eq : explicitcontrolerrorgaussianlikecase ] ) is shown . 
therefore , under our assumptions , hence , when and tend to , as announced in the theorem .the proof of theorem [ thm : infoplusnoisegaussiancase ] makes clear that the heart of our argument is geometric : we exploit the fact that is essentially constant across pairs .it is therefore natural to try to extend the theorem to more general assumptions about the noise distribution than the gaussian - like one we worked under previously .it is also important to understand the impact of the implicit geometric assumptions ( i.e. , sphericity of the noise ) that are made and in particular the robustness of our results against these geometric assumptions .we extend the results in two directions .first , we investigate the generalization of our gaussian - like results to the setting of euclidean - distance kernel random matrices , when the noise is distributed according to a distribution satisfying a concentration inequality multiplied by a random variable , that is , a generalization of elliptical distributions .this allows us to show that the gaussian - like results of theorem [ thm : infoplusnoisegaussiancase ] essentially hold under much weaker assumptions on the noise distribution , as long as the gaussian geometry ( i.e. , a spherical geometry ) is preserved ( see corollary [ coro : sphericalnoisekernels ] ) .the results of theorem [ thm : infoplusnoiseconccase ] show that breaking the gaussian geometry results in quite different approximation results .we also discuss in theorem [ thm : infoplusnoiseconccasedotproductskernels ] the situation of inner - product kernel random matrices under the same `` generalized elliptical '' assumptions on the noise .we have the following theorem .[ thm : infoplusnoiseconccase ] suppose we observe data in , with we place ourselves in the high - dimensional setting where and tend to infinity .we assume that . are i.i.d . with , and we also assume that and are independent . are random variables independent of .we now assume that the distribution of is such that , for any 1-lipschitz function , if , where for simplicity we assume that , and are independent of .we call and assume that stays bounded as .we assume that , ] , and we consider the random matrices with entry let us call the matrix with entry we have , for any given and , we have the following corollary in the case of `` spherical '' noise , which is a generalization of the gaussian - like case considered in theorem [ thm : infoplusnoisegaussiancase ] .[ coro : sphericalnoisekernels ] suppose we observe data in , with where and satisfy the same assumptions as in theorem [ thm : infoplusnoiseconccase ] [ with . then the results of theorem [ thm : infoplusnoiseconccase ] apply with \ ] ] and as in theorem [ thm : infoplusnoisegaussiancase ] , we deal with potential measurability issues concerning the in the proof .our theorem is really that we can find a random variable that goes to 0 with probability 1 and dominates the random element outer - probability statement .this theorem generalizes theorem [ thm : infoplusnoisegaussiancase ] in two ways .the `` spherical '' case , detailed in corollary [ coro : sphericalnoisekernels ] , is a more general version of theorem [ thm : infoplusnoisegaussiancase ] limited to gaussian noise .this is because the gaussian setting corresponds to and .however , assuming `` only '' concentration inequalities allows us to handle much more complicated structures for the noise distribution .some examples are given below .we also note that if the s ( i.e. 
, the signal part of the s ) are sampled , for instance , from a fixed manifold of finite euclidean diameter , the conditions on are automatically satisfied , with being the euclidean diameter of the corresponding manifold .another generalization is `` geometric '' : by allowing to vary with , we move away from the spherical geometry of high - dimensional gaussian vectors ( and generalizations ) , to a more `` elliptical '' setting . hence , our results show clearly the potential limitations and the structural assumptions that are made when one assumes gaussianity of the noise .theorem [ thm : infoplusnoiseconccase ] and corollary [ coro : sphericalnoisekernels ] show that the gaussian - like results of theorem [ thm : infoplusnoisegaussiancase ] are not robust against a change in the geometry of the noise .we note however that if is independent of and , , so all the noise models have the same covariance but they may yield different approximating matrices and hence different spectral behavior for our information models .however , the spherical results have the advantage of having simple interpretations . in the setting of corollary [ coro : sphericalnoisekernels ] , if we assume that and are uniformly bounded ( in ) over the class of functions we consider , we can replace the diagonal of by and have the same approximation results . then the `` new '' is a kernel matrix computed from the signal part of the data with the new kernel . to make our result more concrete , we give a few examples of distributions for which the concentration assumptions on are satisfied : * gaussian random variables , for which we have .we refer to ledoux [ ( ) , theorem 2.7 ] for a justification of this claim .* vectors of the type where is uniformly distributed on the unit ( -)sphere in dimension .theorem 2.3 in shows that our assumptions are satisfied , with , after noticing that a 1-lipschitz function with respect to euclidean norm is also 1-lipschitz with respect to the geodesic distance on the sphere .* vectors , with uniformly distributed on the unit ( -)sphere in and with having bounded operator norm .* vectors of the type , , where is uniformly distributed in the unit ball or sphere in .( see ledoux [ ( ) , theorem 4.21 ] which refers to as the source of the theorem . ) in this case , depends only on .* vectors with log - concave density of the type , with the hessian of satisfying , for all , , where is the real that appears in our assumptions .see ledoux [ ( ) , theorem 2.7 ] for a justification .* vectors distributed according to a ( centered ) gaussian copula , with corresponding correlation matrix , , having bounded .we refer to for a justification of the fact that our assumptions are satisfied .[ if has a gaussian copula distribution , then its entry satisfy , where is multivariate normal with covariance matrix , being a correlation matrix , that is , its diagonal is 1 . here is the cumulative distribution function of a standard normal distribution .taking gives a centered gaussian copula .] 
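before commenting further on these examples , we note that several of them are easy to generate , which makes the theorems convenient to test numerically . the python sketch below draws gaussian noise , sqrt ( p ) times a uniform point on the unit sphere , and a centered gaussian copula ; the ar(1) correlation used for the copula , the " minus one half " centering and the helper names are illustrative choices of ours .

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def gaussian_noise(n, p):
    return rng.standard_normal((n, p))

def sphere_noise(n, p):
    """sqrt(p) times points drawn uniformly on the unit sphere of R^p."""
    g = rng.standard_normal((n, p))
    return np.sqrt(p) * g / np.linalg.norm(g, axis=1, keepdims=True)

def gaussian_copula_noise(n, p, rho=0.3):
    """centered gaussian copula: Phi(w) - 1/2, with w multivariate normal and
    an ar(1)-type correlation matrix (its operator norm stays bounded in p)."""
    C = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    w = rng.multivariate_normal(np.zeros(p), C, size=n)
    return norm.cdf(w) - 0.5

# the rescaled norms ||Z_i|| / sqrt(p) concentrate, as the assumptions require
p = 1000
for name, gen in [("gaussian", gaussian_noise), ("sphere  ", sphere_noise),
                  ("copula  ", gaussian_copula_noise)]:
    Z = gen(4, p)
    print(name, np.round(np.linalg.norm(Z, axis=1) / np.sqrt(p), 3))
```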
this last example is intended to show that the result can handle quite complicated and nonlinear noise structure .we note that to justify that the assumptions of the theorem are satisfied , it is enough to be able to show concentration around the mean or the median , as proposition 1.8 in makes clear .the reader might feel that the assumptions concerning the boundedness of the s will be limiting in practice .we note that the same proof essentially goes through if we just require that s belong to the interval \maxso this boundedness condition is here just to make the exposition simpler and is not particularly limiting in our opinion .on the other hand , we note that our conditions allow dependence in the s and are therefore rather weak requirements . finally , the theorem as stated is for a fixed , though the class of functions we are considering might vary with and through the influence of .the proof makes clear that could also vary with and .we discuss in more details the necessary adjustments after the proof .proof of theorem [ thm : infoplusnoiseconccase ] we use the notation and to denote probability conditional on .we call .let us also call ; similarly , denotes probability conditional on .we call . we will start by working conditionally on and eventually decondition our results .we assume from now on that the we work with is such that .note that by assumption and also .the main idea now is that , in a strong sense , where . to show this formally , we write =2\alpha_{i , j}+\beta_{i , j } , \ ] ] where and our aim is to show that , as and tend to infinity , - _ on _ .note that if , .clearly , since we assumed that , we see that the function is lipschitz ( with respect to euclidean norm ) , with lipschitz constant smaller than , when is in . also , since , , where the expectation is conditional on .hence , our concentration assumptions on imply that \bigr)^b\bigr ) .\ ] ] therefore , if we use a simple union bound , we get \bigr)^b\bigr ) .\ ] ] in particular , if we pick , for , , we see that \bigr)^b\bigr)\\ & = & 2c \exp ( -2(\log n)^{\varepsilon})\rightarrow0.\end{aligned}\ ] ] since and since the latter goes to 0 , we have , unconditionally , - _ on _ .we see that if and are vectors in , the map is , by the triangle inequality .therefore , using propositions 1.11 and 1.7 in [ and using the fact that as and is continuous when using the latter ] , we conclude that if now , and if , where is a constant which does not depend on .so we conclude that unconditionally , if note also that under our assumptions , .recall that we aim to show that let us first work on using the fact that , and therefore , if we choose and , we see that the previous equation becomes therefore , if we can show that goes to 0 in probability , we will have in probability . using the concentration result given in ( [ eq : conditionalconcentration ] ) , in connection with proposition 1.9 in anda slight modification explained in , we have \\[-8pt ] & \leq & \frac{r^2_{\infty } ( p)}{p } \frac{32 c}{b(c_0)^{2/b}}\gamma(2/b)=r^2_{\infty}(p)\frac { \kappa_b}{p } .\nonumber\end{aligned}\ ] ] using our assumption that remains bounded , we see that therefore , for some independent of , with probability going to 1. 
our assumptions also guarantee that , so we conclude that , for a constant independent of , using ( [ eq : controlgammaijsquare ] ) , we have the deterministic inequality so we can finally conclude that with high probability putting all these elements together , we see that when we can find a constant such that in other words , \bigr|>ku_p \bigr)\rightarrow0 .\ ] ] this establishes ( a strong form of ) the first part of the theorem , that is , ( [ eq : interpointdistanceellipcase ] ) .- _ second part of the theorem _ [ _ equation _ ( [ eq : mainresconcellipcase ] ) ] . to get to the second part, we recall that , assuming that is -lipschitz on an interval containing , we have let us define , for given , the event \ } , \ ] ] and the random element when is true , all the pairs are in : the part concerning is obvious , and the one concerning comes from the definition of .so when is true , we also have let us now consider the random variable such that on and otherwise , so .our remark above shows that now , we see from our assumptions about , ( [ eq : controlmaxerrorgeomapproxentrywise ] ) and the fact that , that for any , .so we have also , with probability tending to 1 , so we can conclude that hence , we also have where this statement might have to be understood in terms of outerprobabilities hence the instead of .[ see , page 258 . in plain english, we have found a random variable , latexmath:[ ] and we then consider the random matrices with entry let us call the matrix with entry we have , for any and , we note that under our assumptions , we also have , with high probability , and uniformly in in . therefore , when , the result is also valid if we replace the diagonal of by which case the new approximating matrix is the kernel matrix computed from the signal part of the data . furthermore , the same argument shows that we get a valid operator norm approximation of by this `` pure signal '' matrix as soon as tends to 0. the same measurability issues as in the previous theorems might arise here and the statement should be understood as before : we can find a random variable going to 0 in probability that is larger than the random element . 
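the robustness statement for dot - product kernels is easy to probe numerically . the python sketch below builds a dot - product kernel matrix from noisy data and from the signal alone and compares them in operator norm after discarding the diagonals ; the particular kernel f ( x ) = ( 1 + x ) ** 2 , the noise scaling and the absence of any normalization by the dimension are illustrative choices of ours , not the exact setting of the theorem .

```python
import numpy as np

rng = np.random.default_rng(4)

def dot_product_kernel(X, f):
    return f(X @ X.T)

def f(x):
    return (1.0 + x) ** 2           # a smooth dot-product kernel (illustrative)

n, p, sigma = 300, 400, 1.0
t = rng.uniform(0.0, 2.0 * np.pi, n)
Y = np.zeros((n, p))                # "signal": a circle in the first two coordinates
Y[:, 0], Y[:, 1] = np.cos(t), np.sin(t)
X = Y + (sigma / np.sqrt(p)) * rng.standard_normal((n, p))   # noisy observations

K_noisy = dot_product_kernel(X, f)
K_signal = dot_product_kernel(Y, f)
# the diagonal picks up the squared noise norm, so compare off-diagonal parts
np.fill_diagonal(K_noisy, 0.0)
np.fill_diagonal(K_signal, 0.0)
rel_err = np.linalg.norm(K_noisy - K_signal, 2) / np.linalg.norm(K_signal, 2)
print("relative operator-norm error:", round(float(rel_err), 4))
```

in this toy run the relative error is small , in line with the claim that , off the diagonal , the noise has essentially no first - order effect on dot - product kernel matrices .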
finally , let us note that once again the theorem is stated for a fixed [ and hence for an essentially fixed ( with ) class of functions , though some changes in this class might come from varying , but the proof allows us to deal with a varying .the adjustments are very similar to the ones we discussed after the proof of theorem [ thm : infoplusnoiseconccase ] and we leave them to the interested reader .proof of theorem [ thm : infoplusnoiseconccasedotproductskernels ] the proof is quite similar to that of theorem [ thm : infoplusnoiseconccase ] , so we mostly outline the differences and use the same notation as before .we now have to focus on the analysis of is entirely similar to our analysis of in the proof of theorem [ thm : infoplusnoiseconccase ] .the key remark now is that as function of , when , it is , with the new definition of , -lipschitz with respect to euclidean norm .so we immediately have , with the new definition of : if , and , for some which does not depend on , now , since , we conclude as before that on the other hand , using the fact that , and analyzing the concentration properties of in the same way as we did those of , we conclude that if , we can find a constant such that and similar arguments , relying on the fact that is obviously 1-lipschitz with respect to euclidean norm , also lead to the fact that therefore , we can find , greater than 1 without loss of generality , such that we can therefore conclude that if , then both and tend to 0 . therefore , under our assumptions , so we have shown the first assertion of the theorem .the final step of the proof is now clear : we have , for all , when for all , and are in .this event happens with probability going to 1 under our assumptions .so following the same approach as before and dealing with measurability in the same way , we have , with probability going to 1 , so we conclude that from this statement , we get in the same manner as before , as before , the equations above show that if , the same approximation result holds , now with a varying .our aim in giving approximation results is naturally to use existing knowledge concerning the approximating matrix to reach conclusions concerning the information kernel matrices that are of interest here . in particular, we have in mind situations where the `` signal '' part of the data , that is , what we called in the theorems , and [ or , with being as defined in theorems [ thm : infoplusnoisegaussiancase ] or [ thm : infoplusnoiseconccase ] ] are such that the assumptions of theorems 3.1 or 5.1 in are satisfied , in which case we can approximate the eigenvalues of by those of the corresponding operator in . in this setting the matrix , which is normalized so its entries are of order has a nondegenerate limit , which is why we considered for our kernel matrices the normalization .[ this normalization by makes our proofs considerably simpler than the ones given in . ]another potentially interesting application is the case where the signal part of the data is sampled i.i.d . 
from a manifold with bounded euclidean diameter , in which case our results are clearly applicable .the practical interest of the theorems we obtained above lie in the fact that the frobenius norm is larger than the operator norm , and therefore all of our results also hold in operator norm .now we recall the discussion in el karoui [ ( ) , section 3.3 ] , where we explained that consistency in operator norm implies consistency of eigenvalues and consistency of eigenspaces corresponding to separated eigenvalues [ as consequences of weyl s inequality and the davis kahane theorem see and ] .theorems [ thm : infoplusnoisegaussiancase ] , [ thm : infoplusnoiseconccase ] , [ thm : infoplusnoiseconccasedotproductskernels ] therefore imply that under the assumptions stated there , the spectral properties of the matrix can be deduced from those of the matrix . in particular , for techniques such as kernel pca, we expect , when it is a reasonable idea to use that technique , that will have some separated eigenvalues , that is , a few will be large and there will be a gap in the spectrum . in that setting , it is enough to understand , which corresponds , if , to a pure signal matrix , with a possibly slightly different kernel , to have a theoretical understanding of the properties of the technique .for instance , if , if the assumptions underlying the first - order results of are satisfied for , the ( first - order ) spectral properties of are the same as those of , and hence of the corresponding operator in .our analysis reveals a very interesting feature of the gaussian kernel , that is , the case where , for some : when theorem [ thm : infoplusnoisegaussiancase ] or corollary [ coro : sphericalnoisekernels ] ( i.e. , theorem [ thm : infoplusnoiseconccase ] with ) apply , the eigenspaces corresponding to separated eigenvalues of the signal kernel matrix converge to those of the pure signal matrix .this is simply due to the fact that in that setting , if is the matrix such that a rescaled version of the `` pure signal '' matrix with entry , we have this latter statement is a simple consequence of the fact that is a diagonal matrix with entries on the diagonal , and therefore its operator norm goes to 0 .on the other hand , clearly has the same eigenvectors as the pure signal matrix .hence , because the eigenspaces of are consistent for the eigenspaces of corresponding to separated eigenvalues , they are also consistent for those of .( we note that our results are actually stronger and allow us to deal with a collection of matrices with varying and not a single , as we just discussed .this is because we can deal with approximations over a collection of functions in all our theorems . ) because of the practical importance of eigenspaces in techniques such as kernel pca , these remarks can be seen as giving a theoretical justification for the use of the gaussian kernel over other kernels in the situations where we think we might be in an information setting , and the noise is spherical . on the other hand, underestimates the large eigenvalues of because , and obviously . 
using weyl s inequality[ see ] , we have , if we denote by is the eigenvalue of the symmetric matrix , since the right - hand side goes to 0 asymptotically , the eigenvalues of ( the `` pure signal '' matrix ) that stay asymptotically bounded away from 0 are underestimated by the corresponding eigenvalues of .when the noise is elliptical , that is , s are not all equal to 1 , the `` new '' matrix we have to deal with has entries so it can be written in matrix form where is a diagonal matrix with . by the same arguments as above , in probability , but now does not have the same eigenvectors as the pure signal matrix .so in this elliptical setting if we were to do kernel analysis on , we would not be recovering the eigenspaces of the pure signal matrix .in various parts of statistics and machine learning , it has been argued that laplacian matrices should be used instead of kernel matrices .see , for instance , the very interesting , where various spectral properties of laplacian matrices have been studied , under a `` pure '' signal assumption in our terminology .for instance , it is assumed that the data is sampled from a fixed - dimensional manifold . in light of the theoretical and practical success of these methods ,it is natural to ask what happens in the information case .there are several definitions of laplacian matrices .a popular one [ see , e.g. , the work of , among other publications ] , is derived from kernel matrices : given a kernel matrix , the laplacian matrix is defined as when our theorems [ thm : infoplusnoiseconccase ] or [ thm : infoplusnoiseconccasedotproductskernels ] apply , we have seen that , for relevant classes of functions , in probability .let us now focus on the case of a single function . if we call the laplacian matrix corresponding to , we have we conclude that in probability; we can therefore deduce that the spectral properties of the laplacian matrix from those of , which , when , is a `` pure signal '' matrix , where we have slightly adjusted the kernel . here again , the gaussian kernel plays a special role , since when we use a gaussian kernel , is a scaled version of the laplacian matrix computed from the signal part of the data .finally , other versions of the laplacian are also used in practice . in particular , a `` normalized '' versionis sometimes advocated , and computed as , if is the diagonal of the matrix defined above .we have just seen that in probability and in probability .therefore , if the entries of are bounded away from 0 with probability going to 1 , we conclude that stays bounded with high probability and so once again , understanding the spectral properties of essentially boils down to understanding those of , which is , in the spherical setting where , a `` pure signal '' matrix . 
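for concreteness , here is one way to form the two laplacian variants referred to above from a kernel matrix in python . we assume the standard combinatorial form l = d - k , with d the diagonal matrix of row sums , and the symmetric normalization d ** ( -1/2 ) l d ** ( -1/2 ) ; these conventions are our reading of the definitions alluded to in the text , not a quotation of them .

```python
import numpy as np

def laplacians(K):
    """combinatorial and symmetrically normalized laplacians of a kernel matrix
    (illustrative; assumes every row sum of K is strictly positive)."""
    d = K.sum(axis=1)
    L = np.diag(d) - K                                      # L = D - K
    inv_sqrt_d = 1.0 / np.sqrt(d)
    L_sym = inv_sqrt_d[:, None] * L * inv_sqrt_d[None, :]   # D^{-1/2} L D^{-1/2}
    return L, L_sym

# toy usage with a small gaussian kernel matrix
rng = np.random.default_rng(5)
X = rng.standard_normal((6, 2))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-d2)
L, L_sym = laplacians(K)
print(np.allclose(L.sum(axis=1), 0.0))              # rows of L = D - K sum to zero
print(np.round(np.linalg.eigvalsh(L_sym)[:2], 6))   # smallest eigenvalue is zero
```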
in the case of the gaussian kernel , is equal to the normalized laplacian matrix computed from the `` pure signal '' data .[ [ the - question - of - centering ] ] the question of centering + + + + + + + + + + + + + + + + + + + + + + + + + in practice , it is often the case that one works with centered versions of kernel matrices : either the row sums , the column sums or both are made to be equal zero .these centering operations amount to multiplying ( resp ., on the right , left or both ) our original kernel matrix by the matrix , where is the -dimensional vector whose entries are all equal to 1 .this matrix has operator norm 1 , so when is such that , the same is true for and , where and are either 0 or 1 .this shows that our approximations are therefore also informative when working with centered kernel matrices .our results aim to bridge the gap in the existing literature between the study of kernel random matrices in the presence of pure low - dimensional signal data [ see , e.g. , ] and the case of truly high - dimensional data [ see ] .our study of information kernel random matrices shows that , to first order , kernel random matrices are somewhat `` spectrally robust '' to the corruption of signal by additive high dimensional and spherical noise ( whose norm is controlled ) . in particular , they tend to behave much more like a kernel matrix computed from a low - dimensional signal than one computed from high - dimensional data .some noteworthy results include the fact that dot - product kernel random matrices are , under reasonable assumptions on the kernel and the `` signal distribution '' spectrally robust for both eigenvalues and eigenvectors .the gaussian kernel also yields spectrally robust matrices at the level of eigenvectors , when the noise is spherical .however , it will underestimate separated eigenvalues of the gaussian kernel matrix corresponding to the signal part of the data . on the other hand ,euclidean distance kernel random matrices are not , in general , robust to the presence of additive noise . as our results show , under reasonably minimal assumptions on both the noise , the kernel and the signal distribution , a euclidean distance kernel random matrix computed from additively corrupted data behaves like another euclidean distance kernel matrix computed from another kernel : in the case of spherical noise , it is a shifted version of , the shift being twice the norm of the noise . for spherical noise ,this is bound to create ( except for the gaussian kernel ) potentially serious inconsistencies in both estimators of eigenvalues and eigenvectors , because the eigenproperties of the kernel matrix corresponding to the function are in general different from that of the kernel matrix corresponding to the function .the same remarks apply to the case of elliptical noise , where the change of kernel is not deterministic and even more complicated to describe and interpret .our study also highlights the importance of the implicit geometric assumptions that are made about the noise .in particular , the results are qualitatively different if the noise is spherical ( e.g. , multivariate gaussian ) or elliptical ( e.g. , multivariate ) .interpretation is more complicated in the elliptical case and a number of nice properties ( e.g. 
, robustness or consistency ) which hold for spherical noise do not hold for elliptical noise .we note that our study suggests that simple practical ( and entrywise ) corrections could be used to go from the `` signal '' situation to an approximation of the `` pure signal '' situation . those would naturally depend on the noise geometry and what information practitioners have about it .our results can therefore be seen as highlighting ( from a theoretical point of view ) the strength and limitations of techniques which rely on kernel random matrices as a primary element in a data analysis .we hope they shed light on an interesting issue and will help refine our understanding of the behavior of kernel techniques and related methodologies for high - dimensional input data .the author would like to thank peter bickel for suggesting that he consider the problem studied here and in general for many enlightening discussions about topics in high - dimensional statistics .he would also like to thank an anonymous referee and associate editors for raising interesting questions which contributed to a strengthening of the results presented in the paper .belkin , m. and niyogi , p. ( 2003 ) .laplacian eigenmaps for dimensionality reduction and data representation ._ neural comput . _ * 15 * 13731396 .available at http://www.mitpressjournals.org/doi/abs/10.1162/089976603321780317 .zwald , l. , bousquet , o. and blanchard , g. ( 2004 ) .statistical properties of kernel principal component analysis . in _ learning theory_. _ lecture notes in computer science _ * 3120 * 594608 .springer , berlin . | kernel random matrices have attracted a lot of interest in recent years , from both practical and theoretical standpoints . most of the theoretical work so far has focused on the case were the data is sampled from a low - dimensional structure . very recently , the first results concerning kernel random matrices with high - dimensional input data were obtained , in a setting where the data was sampled from a genuinely high - dimensional structure similar to standard assumptions in random matrix theory . in this paper , we consider the case where the data is of the type `` information . '' in other words , each observation is the sum of two independent elements : one sampled from a `` low - dimensional '' structure , the signal part of the data , the other being high - dimensional noise , normalized to not overwhelm but still affect the signal . we consider two types of noise , spherical and elliptical . in the spherical setting , we show that the spectral properties of kernel random matrices can be understood from a new kernel matrix , computed only from the signal part of the data , but using ( in general ) a slightly different kernel . the gaussian kernel has some special properties in this setting . the elliptical setting , which is important from a robustness standpoint , is less prone to easy interpretation . . |
in this contribution we describe experimental variations on the so - called coffee - ring effect .if you have spilled a drop of coffee and left it to dry , then you might have observed a ring - shaped stain .specifically , the stain is darker near the drop edges compared to the middle ( fig .[ droppin]c ) .this phenomenon is the coffee - ring effect ; it is produced by the interplay of fluid dynamics , surface tension , evaporation , diffusion , capillarity , and more .briefly , as a drop evaporates , its edges easily become pinned and can not recede towards the middle of a drop , i.e. , the diameter of a pinned drop does not decrease ( fig .[ droppin]a ) .this effect is perhaps surprising considering that fluid regions near the edges of a drop are thinner than in the middle .thus , fluid flows from the middle of the drop to the edge of the drop to replenish evaporated water .this flow readily carries suspended particles , moving them from the middle of the drop to its edges , thus producing a coffee - ring .why care about the coffee - ring effect ?a drop of evaporating water is a complex , difficult - to - control , non - equilibrium system . along with capillary flow ,the evaporating drop features a spherical - cap - shaped air - water interface and marangoni flows induced by small temperature differences between the top of the drop and the contact line .thus , to understand the coffee - ring effect , one must understand pinning effects , fluid dynamics , particle - substrate interactions , substrate - fluid interactions , and more .indeed , intellectual challenges have motivated us to understand this complex , far - from - equilibrium system , and the effects of each of these parameters . of course, if the coffee - ring effect were only present in coffee and tea , its practical importance would be minimal .in fact , the coffee ring effect is manifest in systems with diverse constituents ranging from large colloids to nanoparticles to individual molecules ( e.g. , salt ) . due to its ubiquity, the coffee - ring manages to cause problems in a wide range of practical applications which call for uniform coatings , such as printing , genotyping , and complex assembly .paint is another system susceptible to the coffee - ring effect . to avoid uneven coatings ,paints often contain a mixture of two different solvents .one is water , which evaporates quickly and leaves the pigment carrying particles in a second , thicker solvent .the particles are unable to rearrange in this viscous solvent and are then deposited uniformly .unfortunately , this second solvent also evaporates relatively slowly ( one reason why it might be boring to watch paint dry ) .while a number of schemes to avoid the coffee - ring effect have been discovered , these approaches typically involve significant modifications of the system .thus , the discovery of relatively simple ways to avoid the coffee - ring effect and control particle deposition during evaporation could greatly benefit a wide range of applications . to this end, we asked ( and answered ) a question : does particle _ shape _ affect particle deposition ? at first glance , it may appear that shape should not matter .colloidal particles of any shape are susceptible to the radially outward flow of fluid that drives the coffee - ring effect . 
however , changing particle shape dramatically changes the behavior of particles on the air - water interface .in fact , smooth anisotropic ellipsoids deform the air - water interface while smooth isotropic spheres do not .deforming the air - water interface , in turn , induces a strong interparticle capillary attraction between ellipsoids .this capillary attraction causes ellipsoids to form a loosely - packed network that can cover the entire air - water interface , leaving ellipsoids much more uniformly distributed when evaporation finishes ( fig .[ droppin ] b ) .conversely , spheres pack densely at the drop s edge , producing a coffee - ring when evaporation has finished ( fig .[ droppin ] c ) .thus , particle _ shape _ can produce uniform coatings .the remainder of this review is organized as follows .first , we discuss the different interfacial properties of spheres and ellipsoids , as well as the methods to make anisotropic particles .then , we discuss our investigation of particle behavior in evaporating sessile drops and the coffee - ring effect .much of this work is described in a recent publication .in particular , we demonstrate that particle shape strongly affects the deposition of particles during evaporation .next , we investigate the role of particle shape in evaporating drops in confined geometries , and we show how to extract the bending rigidity of the membranes formed by particles adsorbed on the air - water interface .much of this work is described in another publication .finally , we shift focus to discuss the effects of surfactants on evaporating colloidal drops .we show that surfactants lead to a radially inward flow on the drop surface , which creates a marangoni eddy , among other effects , which leads to differences in drying dynamics .some of this work was published recently . as a whole ,this review attempts to present these experiments in a unified fashion .at small packing fractions , i.e. 
, outside the range which would lead to formation of crystalline or liquid crystalline phases , the diffusion and hydrodynamics of spheres and ellipsoids are only modestly different . further , both spheres and ellipsoids will adsorb onto the air - water interface ; the binding energy of micron - sized particles to the air - water interface depends primarily on the interfacial area covered by the particle and the contact angle , quantities which are similar for spheres and ellipsoids . the binding energy for a micron - sized particle is many orders of magnitude larger than $k_B T$ , where $k_B$ is the boltzmann constant and $T$ is temperature . once adsorbed onto the air - water interface , however , the behaviors of spheres and ellipsoids are dramatically different . anisotropic particles deform interfaces significantly , which in turn produces very strong interparticle capillary interactions . these deformations have been predicted and have been experimentally observed via techniques such as ellipsometry and video microscopy . two particles that deform the air - water surface will move along the interface to overlap their deformations and thus minimize the total system ( particles plus interface ) energy . this preference at the interface effectively produces a strong interparticle attraction , which has been measured to be hundreds of thousands of times greater than thermal energy for micron - sized particles . the interfacial deformations can be understood from expanded solutions of the young - laplace equation . the young - laplace equation minimizes the energy associated with a surface , and thus relates the pressure difference across the surface to the curvature of the surface . specifically , the young - laplace equation is a force balance statement , $2\gamma H = P_{\rm air} - P_{\rm water}$ , where $\gamma$ is the surface tension , $H$ is the mean curvature of the interface , $P_{\rm air}$ is the pressure in the air , $P_{\rm water}$ is the pressure in the water , and the sign of $H$ is set by the choice of surface normal . for length scales smaller than the capillary length ( i.e. , the length scale at which the laplace pressure from surface tension is equal to the hydrostatic pressure due to gravity , mm for water ) , gravitational effects can be ignored , and the pressure drop across the surface is zero , implying that the mean curvature everywhere is zero . in the small - slope limit the mean curvature is proportional to $\nabla^2 h$ , where $\nabla^2$ is the laplacian and $h$ is the height of the surface . thus , $\nabla^2 h = 0$ . when a particle attaches to the air - water interface , boundary conditions are created for the air - water interface . theoretically , one seeks to solve the young - laplace equation with these boundary conditions . in this case , it is useful to first rewrite the young - laplace equation in polar coordinates , $\frac{1}{r}\frac{\partial}{\partial r}\!\left( r \frac{\partial h}{\partial r} \right) + \frac{1}{r^2}\frac{\partial^2 h}{\partial \theta^2} = 0$ ( see fig . [ yleq ] a - c ) . this problem is similar to potential problems in electrostatics and can be solved by separation of variables , i.e. , with the ansatz $h(r,\theta) = R(r)\,\Theta(\theta)$ . substitution of this ansatz leads to $\frac{r}{R}\frac{d}{dr}\!\left( r \frac{dR}{dr} \right) = -\frac{1}{\Theta}\frac{d^2 \Theta}{d\theta^2}$ . since this equation must hold as $r$ and $\theta$ are varied independently , each side must equal the same constant , which is conventionally termed $m^2$ . thus , $\frac{d^2\Theta}{d\theta^2} = -m^2 \Theta$ and $r\frac{d}{dr}\!\left( r\frac{dR}{dr} \right) = m^2 R$ , with solutions $\Theta(\theta) = A_m \cos ( m\theta ) + B_m \sin ( m\theta )$ and $R(r) = C_m r^{m} + D_m r^{-m}$ ( or $C_0 + D_0 \ln r$ for $m=0$ ) , where the constants $A_m$ , $B_m$ , $C_m$ , and $D_m$ are determined by boundary conditions . the monopole term ( $m=0$ ) is only non - zero when the height of the interface near the particle is uniformly lowered ( or raised ) ( fig . [ yleq ] c ) . the monopole term is only stable for a particle in an external field ( e.g. , gravity ) ; however , for typical colloidal particles the gravitational buoyancy forces are not significant and this term is zero . the dipole term ( $m=1$ ) corresponds to a situation wherein the height of the interface is lower on one side of the particle compared to the opposite side ( fig . [ yleq ] d ) . thus , this situation can be quickly relaxed by rotating the particle , i.e. , lowering the interface on the high side and raising it on the low side . the dipole term is only stable when an external torque is applied ; since no external torques act on the particles , this term also is zero . therefore , the lowest allowed term is the quadrupole term ( $m=2$ ) , i.e. , $h(r,\theta) \propto r^{-2}\cos [ 2(\theta - \theta_0) ]$ ( fig . [ yleq ] e ) . notice , this derivation has not mentioned anisotropic boundary conditions . in fact , the quadrupolar form for $h$ is applicable in general to any deformation of the air - water interface ( absent external forces and torques ) that arises at the particle surface . the air - water interface can be deformed on a sphere if the three - phase contact line is heterogeneously pinned ( see fig . [ yleq ] f ) . this effect produces a quadrupolar profile of the interfacial height . however , the linear size of the deformation , i.e. , the maximum value of $h$ minus the minimum value of $h$ , from contact - line roughness is typically much smaller than the linear size of the deformation from shape - based roughness ( for example , see reference ) . of course , if one applies young s conditions for the three - phase contact line on the solid particle , one `` ideally '' obtains a circular contact line on the sphere and a much more complicated contact - line shape on an anisotropic particle such as an ellipsoid . on the ellipsoid , this leads to height variations of $h$ that are of order the particle size . the interaction potential between two particles is related to the excess surface area created by these deformations . for a single particle , the deformation energy is $E_1 \approx \gamma \, \Delta A_1$ , where $\Delta A_1$ is the excess surface area due to the interfacial deformation , which is proportional to the deformation size squared , i.e. , $\Delta A_1 \propto (\Delta h)^2$ . the interaction energy of two particles , $A$ and $B$ , is $E_{\rm int} \approx \gamma \left( \Delta A_{AB} - \Delta A_{A} - \Delta A_{B} \right)$ , where $\Delta A_{AB}$ is the excess surface area due to both particle $A$ and particle $B$ ( which is dependent on the particle positions and orientations ) , and $\Delta A_{A}$ and $\Delta A_{B}$ are the excess areas due to particle $A$ and particle $B$ alone . for smooth spheres the interface is not significantly deformed , so $\Delta A_{AB} \approx \Delta A_{A} + \Delta A_{B}$ and $E_{\rm int} \approx 0$ . for ellipsoids ( or rough spheres ) , $E_{\rm int} \approx - C(\phi_A , \phi_B)\, d^{-4}$ , where $\phi_A$ and $\phi_B$ are the angular orientations of ellipsoids $A$ and $B$ and $d$ is their center - to - center separation . the attractive strength decays as $d^{-4}$ and depends on the coefficient $C$ , which , in turn , depends on the deformation size squared , i.e. , $C \propto (\Delta h)^2$ . thus , the strength of this attraction ultimately depends strongly on the size of the deformation at the surface of the particle . for example , micron - diameter particles that induce interfacial deformations of nm , at an interparticle separation of microns , will produce an attraction with strength . for micron - sized ellipsoids , the binding energy from capillary attraction is . to summarize , spheres and ellipsoids behave similarly in bulk fluid and are bound to the air - water interface by similarly large binding energies . however , on the air - water interface their behavior is dramatically different . anisotropic particles deform the air - water interface , producing a quadrupolar deformation and an interparticle capillary attraction that is energetically strong .
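for reference , the separation - of - variables argument above can be collected into a single expression . this is a standard small - slope result restated here for convenience ; the amplitude $\Delta h$ , particle size $a$ , and orientation $\theta_0$ are notation introduced in this sketch rather than symbols taken from the original analysis .

```latex
% general small-slope solution of \nabla^2 h = 0 outside a particle
\begin{equation}
  h(r,\theta) \;=\; a_0 + b_0 \ln r \;+\;
  \sum_{m \ge 1} \bigl( c_m r^{m} + d_m r^{-m} \bigr)
  \bigl[ A_m \cos(m\theta) + B_m \sin(m\theta) \bigr] .
\end{equation}
% keeping only terms that decay far from the particle, and dropping the
% monopole (m = 0, requires an external force) and the dipole (m = 1,
% requires an external torque), the leading far-field deformation around a
% particle of characteristic size a is the quadrupole:
\begin{equation}
  h(r,\theta) \;\approx\; \Delta h \,\Bigl(\frac{a}{r}\Bigr)^{2}
  \cos\!\bigl[ 2 \,(\theta - \theta_0) \bigr] ,
\end{equation}
% where \Delta h sets the deformation amplitude and \theta_0 the quadrupole
% orientation; overlapping two such far fields gives the d^{-4} interaction
% quoted in the text.
```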
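as a quick order - of - magnitude check on the statement that these capillary attractions reach hundreds of thousands of times the thermal energy , one can compare the deformation energy scale $\gamma (\Delta h)^2$ with $k_B T$ . the short sketch below does exactly that ; the numerical values ( a 100 nm deformation at room temperature ) are illustrative assumptions , not measurements from these experiments .

```python
# Order-of-magnitude estimate of the capillary deformation energy scale,
# gamma * (delta_h)**2, compared to thermal energy k_B * T.
# All parameter values are illustrative assumptions, not experimental data.

gamma = 0.072        # surface tension of water at room temperature, N/m
delta_h = 100e-9     # assumed interfacial deformation amplitude, m (~100 nm)
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0            # room temperature, K

deformation_energy = gamma * delta_h**2   # excess-area energy scale, J
thermal_energy = k_B * T                  # J

print(f"deformation energy ~ {deformation_energy:.2e} J")
print(f"thermal energy     ~ {thermal_energy:.2e} J")
print(f"ratio              ~ {deformation_energy / thermal_energy:.1e}")
# With these assumed values the ratio is ~2e5, i.e. hundreds of thousands
# of k_B*T, consistent with the interaction strengths quoted in the text.
```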
to understand how particle shape impacts particle deposition, we need particles with different shapes .we utilize micron - sized polystyrene spheres ( invitrogen ) , similar to the particles used in previous experiments ( e.g. , ) , and we simply modify their shape by stretching them asymmetrically to different aspect ratios . the procedures to make particles have been described previously , but for completeness we briefly discuss these methodologies below . to create ellipsoidal particles , m diameter polystyrene particlesare suspended in a polyvinyl alcohol ( pva ) gel and are heated above the polystyrene melting point ( ) , but below the pva melting point ( ) .polystyrene melts in the process , but the pva gel only softens .the pva gel is then pulled so that the spherical cavities containing liquid polystyrene are stretched into ellipsoidal cavities .when the pva gel cools , polystyrene solidifies in the distorted cavities and becomes frozen into an ellipsoidal shape .the hardened gel dissolves in water , and the pva is removed via centrifugation .each sample is centrifuged and washed with water at least times .each iteration of this process creates ellipsoidal particles in suspensions .the particles are charge - stabilized , and the resultant suspensions are surfactant - free .snapshots of experimental particles are shown in the insets of fig .[ droppin ] b , c. the aspect ratio polydispersity is . to ensure the preparation process does not affect particle deposition , our spheres undergo the same procedure , absent stretching .importantly , in order to ensure the pva was not affecting our results , we performed a separate set of experiments investigating the effects of pva on evaporating drops . in these experimentsthe pva weight percent was carefully controlled .we found that if a sample contains more than pva by weight , then the contact line of the drying drop depins very quickly after the drop is placed on a glass slide .however , in samples with less than pva by weight , the contact line behavior of the drying drop is identical to the contact line behavior in drops without pva . to confirm that small amounts of pva do not affect the deposition of spheres , we added pva ( by weight ) to a suspension of spheres . during evaporation ,the contact line remains pinned , and the spheres exhibit the coffee ring effect .further , when ellipsoids are diluted by a factor of ( and thus the pva weight percent is decreased by a factor of to an absolute maximum of ) , the spatially uniform deposition of ellipsoids persists .understanding why ellipsoids are deposited uniformly first requires that we characterize the evaporation process , i.e. , we quantify the spatio - temporal evaporation profile of the suspensions .first , we are interested in the evaporation rate . to this end, we directly measure the drop mass of different suspensions ( in volume , mm in radius , ) during evaporation ( fig .[ figs1cr ] a ) .( in order to improve the accuracy of the reported evaporation rate , we utilized large - volume drops . ) for all suspensions ( drops of sphere suspension , drops of ellipsoid suspension , and drops of water absent colloid ) , the mass of each drop decreases linearly in time with very similar mass rates - of - change ( / s ) .this bulk evaporation behavior for all suspensions is consistent with steady - state vapour - diffusion - limited evaporation of spherical - cap - shaped drops with pinned contact lines .+ next , we quantified the contact line evolution during drying , i.e. 
, we observed when the contact line depins .specifically , we measured the radius of the drops ( ) during evaporation by video microscopy ( fig .[ figs1cr ] b ) .the time at which evaporation finishes , , is clearly indicated in fig . [ figs1cr ] b as the time when the drop radius shrinks to zero . for all samples, we observed the radius decrease by less than until ; i.e. , the contact line remains pinned for the vast majority of the evaporation , regardless of particle shape .note , the contact line in drops containing ellipsoids does partially depin around ; however , it does not completely depin until .these control experiments demonstrate that contact line behavior , capillary flow , and evaporation rates are independent of suspended particle shape .thus , to produce qualitatively different deposits , the microscopic behaviors of individual spheres and ellipsoids in the droplets must differ .the uniform deposition of ellipsoids after evaporation ( fig .[ droppin ] b ) is especially striking when compared to the heterogeneous `` coffee ring '' deposition of spheres ( fig .[ droppin ] c ) in the same solvent , with the same chemical composition , and experiencing the same capillary flows ( fig .[ fig1cr2 ] a ) . to quantify the particles deposition shown in fig .[ droppin ] b and c , we determined the areal number fraction of particles deposited as a function of radial distance from the drop center ( fig .[ fig1cr2 ] b ) . in detail , utilizing video microscopy and particle tracking algorithms , we counted the number of particles , , in an area set by the annulus bounded by radial distances and from the original drop center ; here is m . the areal particle density , with .to facilitate comparisons between different samples , and eliminate small sample - to - sample particle density differences , we normalized by the total number of particles in the drop , n. further , to we report as a function of , where is the drop radius , to eliminate small sample - to - sample differences in drop radii .dilute suspensions ( ) are utilized to improve image quantification . for spheres ( ) , is times larger at than in the middle of the drop . conversely , the density profile of ellipsoidal particles is fairly uniform as a function of ( there is a slight increase at large ) . as particle shape anisotropyis increased from to , the peak in at large decreases .the coffee - ring effect persists for particles marginally distorted from their original spherical shape ( and ) , but particles that are slightly more anisotropic ( ) are deposited more uniformly . to further quantify the sharply peaked coffee - ring effect of spheres and the much more uniform deposition of the ellipsoids, we calculate and plot ( fig .[ fig1cr2 ] c ) , where is the maximum value of ( typically located at ) and is the average value of in the middle of the drop ( ) . for spheres , .as aspect ratio increases slightly ( and ) decreases to and , respectively . for ellipsoids , is more than ten times smaller than spheres .as continues to increase above , continues to decrease , albeit at a much lower rate .note , was observed to be largely independent of initial volume fraction , i.e. 
, fluctuated by approximately as volume fraction changed between and . when drops with very large packing fractions evaporate , the drop surface becomes saturated with ellipsoids . however , deposition in this limit is difficult to quantify , as at high volume fractions it is difficult to measure the local particle density . thus , while the particles that cannot attach to the interface are likely transported to the drop edge , it is difficult to demonstrate that this effect occurs . an experimental snapshot after evaporation of a drop of ellipsoids initially suspended at volume fraction shows that overall the coffee - ring effect is avoided , but the local density cannot be extracted ( fig . [ fig1cr2 ] d and e ) . an image of the final distribution of spheres evaporated from a suspension with initial packing fraction is included for comparison . our observations thus far imply that micron - sized grains in a cup of coffee are relatively spherical . to confirm or refute this hypothesis , we prepared a microscope slide full of diluted coffee . this coffee came from the lab - building coffee machine ( filterfresh ) , which passes the coffee through a paper filter after a relatively short brew time ( seconds ) . while we did not `` fully '' characterize the shape of the grains we observed , qualitatively , they appeared spherical on the micron - size scale ( see fig . [ coffeepic ] a ) . thus , a suspension of polystyrene spheres really is an especially good model of a cup of coffee . the evidence suggests that the same radially outward flows are present in drops containing either spheres or ellipsoids . the deposition of spheres and ellipsoids after drying , however , is very different . in order to understand the origin of these differences better , we carried out a battery of experiments focused on the behaviors of spheres and ellipsoids on the air - water surface . snapshots from video microscopy show that both spheres ( fig . [ fig2cr ] a - d ) and ellipsoids ( fig . [ fig2cr ] e - h ) are carried to the drop s edges . to quantify this effect , the average areal particle density close to the contact line was measured as a function of time ( fig . [ fig2cr ] i ) . for spheres , this density increases linearly until evaporation is complete , with a slope of s . conversely , the areal density of ellipsoids near the contact line stops growing at , and for , increases with a slope of s . this slope for ellipsoids is less than the slope for spheres , despite similar evaporation rates , capillary flows , and contact line behaviors . thus , ellipsoid density at the drop edge grows at a slower rate than sphere density . next , we note that both spheres and ellipsoids strongly prefer adsorption to the interface over remaining in the bulk drop . further , our experiments with ellipsoids and spheres , and previous experiments with spheres , suggest that of the particles adsorb to the air - water interface in the `` central / middle '' regions of the drop . thus most particles move toward the drop edges , and the relative drying behaviors of ellipsoids and spheres must be controlled by their behaviors near the drop edge . to study this issue we first determine where ellipsoids adsorb on the air - water interface , i.e. , we measure the number of ellipsoids that adsorb on the air - water interface as a function of radial position . the areal number density of ellipsoids on the air - water interface versus radial distance from the drop center , at a time immediately before the drop edge depins , is given in fig . [ coffeepic ] b.
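the radial quantification used here ( counting particles in thin annuli , dividing by annulus area and by the total particle number , and comparing the peak density near the edge to the average density in the drop middle ) can be summarized in a short sketch . the function below is a minimal illustration written for this review ; the bin count , the definition of the `` middle '' region , and all variable names are our own assumptions rather than the exact analysis code behind the published figures .

```python
import numpy as np

def radial_density_profile(x, y, drop_radius, n_bins=50):
    """Normalized areal particle density rho(r/R) from particle centroids.

    x, y        : particle coordinates (same units as drop_radius),
                  measured from the drop center
    drop_radius : pinned-contact-line radius R
    Returns bin centers (r/R) and rho, normalized per unit area and
    divided by the total particle count N.
    """
    r = np.hypot(x, y) / drop_radius                 # scaled radial position
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    # annulus areas in real units: pi * ((r_out*R)^2 - (r_in*R)^2)
    areas = np.pi * (edges[1:]**2 - edges[:-1]**2) * drop_radius**2
    rho = counts / areas / counts.sum()              # per-area, per-total-N
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, rho

def peak_to_middle_ratio(centers, rho, middle_max=0.25):
    """rho(max) / <rho> in the drop middle (here r/R < middle_max, assumed)."""
    middle = rho[centers < middle_max]
    return rho.max() / middle.mean()
```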
the majority of particles are deposited within microns of the drop s edge ( at microns ) .approximately of the ellipsoid particles adsorb on the air - water interface in this region near the drop s edge .the properties of this interfacial region and the mechanisms by which particles attach to and move within this interfacial region play a key role in the drying process .what actually happens at the drop s edge ?experimental snapshots of of particles moving in the region within m of the drop contact line confirm that while spheres pack closely at the edge ( fig .[ fig3cr ] a ) , ellipsoids form `` loosely '' packed structures ( fig .b ) , which prevent particles from reaching the contact line . particles with and pack at higher area fractions than ellipsoids with , resulting in larger values of for and and producing the small peak in at for .the ellipsoid particle structures on the air - water interface appear to be locally arrested or jammed , i.e. , particles do not rearrange .once an ellipsoid joins the collective structure , its position relative to other ellipsoids typically changes by less than nm ( lower limit of our resolution ) , and the overall particle structure rearranges , for the most part , only when new particles attach to the interface .images of particles near the drop s contact line ( fig .[ fig3cr ] b ) reveal that unlike spheres , which are carried from the bulk all the way to the contact line ( fig .[ fig3cr ] a ) , most ellipsoids adhere to the loosely - packed structures at the air - water interface before they reach the three - phase contact line at the drop edge .this capillary attraction has been characterized in prior experiments as long - ranged and very strong . to understand the different behaviors of spheres and ellipsoids at the edge of drying drops, it is instructive to observe some individual particle trajectories .the trajectory of a single sphere is highlighted in fig .[ spheretraj ] a - d .spheres ( like the one highlighted in fig .[ spheretraj ] a - d ) are pushed through the bulk fluid towards the drop s edge .when spheres reach the drop s edge , their progress is halted by a wall of spheres already at the drop s edge .spheres then pack densely , and can not rearrange as they jam into the ring configuration .this behavior is demonstrated quantitatively for a few typical spheres by plotting the distance ( ) between the sphere and the drop s edge versus time ( fig .[ spheretraj ] e ) .conversely , when ellipsoids reach the drop s edge , they pack loosely on the air - water interface ( fig .[ spheretraj ] f - i ) .notice , ellipsoids at the drop s edge do not necessarily halt the progress of other migrating ellipsoids that arrive at later times .this can be seen in fig .[ spheretraj ] f - h , as an ellipsoid approaches the drop s edge ( fig .[ spheretraj ] f ) , passes underneath a cluster of ellipsoids on the air - water interface ( fig .[ spheretraj ] g ) , and eventually adsorbs on the air - water interface near the drop s edge ( fig .[ spheretraj ] h ) .as evaporation continues , ellipsoids can move along the surface of the drop towards the drop s center ( fig .[ spheretraj ] i ) .this behavior is demonstrated quantitatively for a few typical ellipsoids by plotting versus time ( fig . 
[ spheretraj ] j ) . if the air - water interface is not saturated with ellipsoids when the drop s edge depins , then the networks of ellipsoids are compressed as they are pushed towards the drop s center . the loosely - packed configurations formed by ellipsoids on the interface are structurally similar to those seen in previous experiments of ellipsoids at flat air - water and water - oil interfaces . they produce a surface viscosity that is much larger than the suspension bulk viscosity , facilitating ellipsoid resistance to radially outward flows in the bulk . note , spheres also adsorb onto the interface during evaporation . however , spheres do not strongly deform the interface , and they experience a much weaker interparticle attraction than ellipsoids ; therefore , the radially outward fluid flows in the bulk and on the interface easily push spheres to the drop s edge . in order to quantify the ability of interfacial aggregates of ellipsoids to resist bulk flow , we calculated the boussinesq number , b , for ellipsoids with . specifically , b is the ratio of the surface drag to the bulk drag : $\mathrm{B} = G^{\prime} / ( \tau L )$ , where $\tau$ is the shear stress from the bulk flow , $G^{\prime}$ is the elastic modulus of the interfacial layer , and $L$ is the probed lengthscale . b varies spatially with the average areal particle density on the air - water interface . here , we calculate b in a region within m of the pinned contact line . we first calculated b at an early time ( t t ) . the shear stress can be estimated from the particle velocity and drop height via $\tau \approx \eta v / h$ , where $\tau$ is the shear stress , $\eta$ is the viscosity , $v$ is the particle velocity , and $h$ is the drop height . at an early time ( t t ) , the shear stress is pa . about of the surface is covered with ellipsoids . previous experimental studies measured the shear modulus , $G^{\prime}$ , of the interfacial monolayer as a function of surface coverage area fraction . we measured the surface coverage area fraction in our system as a function of time . this measurement enabled us to utilize the values of $G^{\prime}$ reported in ( n / m ) . the probed lengthscale , $L$ , is at most m ( i.e. , the drop diameter ) . thus , at t , b . this calculation is performed at different times during evaporation , until the final stage of evaporation when the aggregate of ellipsoids begins flowing towards the drop center ( fig . [ figs3cr ] a ) . we found that $\tau$ grows linearly with particle velocity , which we observe to increase by a factor of during evaporation . however , $G^{\prime}$ grows exponentially with the ellipsoidal area fraction , and the area fraction increases by a factor of . thus , the exponential growth of $G^{\prime}$ dominates this calculation , and b grows exponentially with time : b . finally , note that for spheres , b . the measured dimensionless boussinesq numbers clearly demonstrate that clusters of ellipsoids on the air - water interface can resist shear from radially outward fluid flows , and they explain why these clusters are not pushed to the drop s edge . conversely , clusters of spheres on the air - water interface cannot resist shear and are pushed along the air - water interface to the drop s edge , where they join the coffee - ring deposit . we have already shown that ellipsoids sit ( largely without moving ) at the air - water interface . here we utilize confocal microscopy to directly measure the location of ellipsoids ( and spheres ) during evaporation . confocal snapshots are shown in fig . [ figconfocal ] .
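before examining the confocal images in more detail , the boussinesq estimate just described can be written out as a short numerical sketch . the formulas are the ones quoted above , $\tau \approx \eta v / h$ and $\mathrm{B} = G^{\prime} / ( \tau L )$ , while every numerical value below is an illustrative assumption of roughly the right order of magnitude , not a measured quantity from the experiments .

```python
# Rough Boussinesq-number estimate for a particle-laden air-water interface.
# All numbers are illustrative assumptions, not measured experimental values.

eta = 1.0e-3       # water viscosity, Pa*s
v = 1.0e-6         # assumed radially outward particle velocity, m/s
h = 50.0e-6        # assumed local drop height near the contact line, m
G_prime = 1.0e-3   # assumed interfacial shear modulus, N/m (order of magnitude)
L = 1.0e-3         # probed length scale, m (of order the drop diameter)

tau = eta * v / h          # bulk shear stress, Pa
B = G_prime / (tau * L)    # dimensionless Boussinesq number

print(f"shear stress tau    ~ {tau:.1e} Pa")
print(f"Boussinesq number B ~ {B:.1e}")
# B >> 1 means the interfacial layer dominates (ellipsoid networks resist the
# bulk flow); B << 1 means the bulk flow dominates (spheres get swept along).
```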
by integrating the brightness of each pixel over a period of seconds , only particles that are roughly stationary during this time period appear in the images .snapshots are then projected onto a side - view of the drop .the confocal images confirm that ellipsoids sit at the air - water interface ( fig .[ figconfocal ] bottom ) , while spheres are carried all the way to the contact line ( fig .[ figconfocal ] top ) . in order to assess the generality of this effect , we have analyzed three additional types of anisotropic particles .one parameter potentially important for this process is particle hydrophobicity .hydrophilic particles , for example , are perhaps less likely to adsorb onto the air - water interface than hydrophobic particles and might equilibrate differently on the interface than hydrophobic particles ; thus the hydrophilic ellipsoids could have different deposition during drying . to investigate the effect of hydrophilicity , we obtained suspensions of spherical and ellipsoidal polystyrene - pnipmam core - shell particles , i.e. , polystyrene particles coated with pnipmam .we evaporated these suspensions at ; at this temperature , pnipmam is hydrophilic .the core - shell hydrophilic spheres exhibit the coffee ring effect ( fig .[ figs3cr ] b ) .conversely , despite their hydrophilicity , core - shell ellipsoids are deposited uniformly .in fact , these core - shell ellipsoids form the same type of loosely - packed ellipsoid networks on the drop surface as polystyrene ellipsoids absent pnipmam ( fig .[ figs3cr ] b ) .further , we have evaporated suspensions of actin filaments and pf viruses . in each of these suspensions, the contact line depins at very early times . to prevent this early depinning, we add a small amount of nm diameter fluorescent polystyrene spheres ( by weight ) ; these spheres help to pin the contact line until the final stage of evaporation ( t t ) via self - pinning .the spheres in each suspension exhibit the coffee - ring effect .both the actin filaments and pf viruses in suspension , however , are deposited relatively uniformly .( note , the mean major axis length for pf viruses is m ; the mean minor axis length for pf viruses is nm . the mean major axis length for actin filaments is m ; the mean minor axis length for actin filaments is nm .+ ) lastly , we investigate the effects of mixing ellipsoids and spheres . a small number of ellipsoids were added to suspensions of different sized spheres .we then evaporate drops of suspensions containing both ellipsoids and spheres .our initial hope was that a small number of ellipsoids could dramatically change the deposition behavior of spheres in suspension . to simplify this study , we concentrated on two different aspect ratios : spheres ( ) and ellipsoids ( ) .the ellipsoids were stretched from particles of diameter m ; each suspension contains spheres suspended at a volume fraction .evaporative deposits are characterized as a function of ellipsoid volume fraction via ( fig .[ fig4cr ] a ) .suspensions containing smaller spheres with m along with the ellipsoids at volume fractions ranging from to were evaporated .the coffee - ring effect persists for these small spheres , regardless of how many ellipsoids are added to the initial suspension ( fig .[ fig4cr ] a ) .small spheres can easily navigate under or through the loosely packed ellipsoid networks , and thus reach the drop s edge ( fig .[ fig4cr ] b - d ) . 
for comparison , we evaporated suspensions containing larger spheres with m , along with the same ellipsoids at the same volume fractions utilized previously . for small ellipsoid volume fraction , the evaporating suspensions still exhibit the coffee - ring effect . however , for larger ellipsoid volume fractions , the coffee ring is diminished ; for sufficiently large volume fractions , the coffee - ring effect is avoided ( fig . [ fig4cr ] a ) . larger spheres adsorb onto the air - water interface farther from the drop edge than do the smaller ellipsoids . absent ellipsoids , spherical particles form closely - packed aggregates . in the presence of ellipsoids , the spheres instead become entangled in the loosely - packed ellipsoid networks , thus eliminating the coffee - ring effect ( fig . [ fig4cr ] b - d ) . therefore , large spherical particles can be deposited uniformly simply by adding ellipsoids . the ability to deposit particles uniformly is desirable in many applications . unfortunately , most proposed methods for avoiding the coffee - ring effect require long multistage processes , which can be costly in manufacturing or require the use of organic solvents , which are sometimes flammable and toxic ( e.g. ) . here we have shown that by exploiting a particle s shape , a uniform deposit can be easily derived from an evaporating aqueous solution . the results presented here further suggest that other methods of inducing strong capillary interactions , e.g. , surface roughness , may also produce uniform deposits . additionally , open questions about the behavior of ellipsoids in drying drops persist . specifically , one may have thought the drop s edge would quickly saturate with ellipsoids during evaporation , and ellipsoids subsequently arriving would then be deposited in a coffee - ring stain . however , ellipsoids ( and their collective structures ) clearly migrate towards the drop s center during evaporation , in the process creating room for more ellipsoids to adsorb on the air - water interface near the drop s edge . it is unclear why ellipsoids move towards the drop s center . one possibility is that inward fluid flows along the drop s surface push networks of ellipsoids towards the drop s center , thus making room for more ellipsoids to adsorb on the air - water interface near the drop s edge . alternatively , the energetic interactions of the ellipsoids on the air - water interface may play an important role in this inward migration . however , a complete understanding of this inward motion has been elusive and will require more experimental and theoretical investigation . the mechanism that produces a uniform coating from particles suspended in drying sessile drops requires the presence of an air - water interface that spans the entire area covered by the drop . a drop confined between two glass plates is a completely different beast . in this case , the air - water interface is only present at the drop edges ( fig . [ confined ] a and fig . [ figs1ce ] c ; a sessile drop is shown for comparison in fig . [ confined ] b ) . thus , the mechanisms that produce uniform coatings in `` open '' or sessile drops are unlikely to be present in confined drops . to illustrate these spectacular differences , in fig . [ confined ] c , d , we again see that suspended particle shape produces dramatically different depositions . the confined drops do not even exhibit the conventional coffee - ring effect . rather , spheres and slightly stretched spheres are deposited heterogeneously , and anisotropic ellipsoids are distributed relatively more uniformly .
in this section ,we show how one can understand these deposition effects .important clues are revealed through consideration of the mechanical properties of the air - water interfaces , and changes thereof as a result of adsorbed particles .recent experiments have explored evaporation of confined drops containing spheres , and their behaviors differ dramatically from sessile drops containing spheres . in the confined case, particles are pushed to the ribbon - like air - fluid interface , and , as evaporation proceeds , the particle - covered air - water interface often deforms and crumples ( fig .[ confined ] e and f ) .the buckling behaviors exhibited by these ribbon - like colloidal monolayer membranes ( cmms ) in confined geometries are strongly dependent on the geometric shape of the adsorbed particles , and the buckling events appear similar to those observed in spherical - shell elastic membranes . before bucklingevents occur , particles are densely packed near the three - phase contact line , regardless of particle shape .further , because the particle volume fraction in the drop is relatively low , these membranes essentially contain a monolayer of particles , i.e. , buckling events occur before multilayer - particle membranes form .these experiments utilize the same micron - sized polystyrene ellipsoids described above in section 2.2 and in .drops of suspension are confined between two glass slides separated by m spacers ( fisher scientific ) and allowed to evaporate ; qualitatively similar results are found for chambers made from slightly hydrophobic cover slips .we primarily study the drops with initial particle volume fraction .( qualitatively similar results are found for volume fractions ranging from to . )the confinement chambers are placed within an optical microscope wherein evaporation is observed at video rates at a variety of different magnifications .this approach also enables measurement of the surface coverage , i.e. , the fraction of the air - water interface coated with particles , prior to buckling events .we find that for spheres and ellipsoids the surface coverage areal packing fraction is .to understand this buckling phenomenon , the elastic properties of the air - water interface with adsorbed particles , i.e. , the elastic properties of the cmms , must be quantified . to this end, the analytical descriptions of elastic membranes are extended to our quasi-2d geometry wherein observations about bending and buckling geometry are unambiguous .this theoretical extension has been described previously ( see and its associated supplemental online material ) , but for completeness and clarity of presentation we discuss it more completely below . following the same procedure as , we first describe the stretching and bending energy associated with membrane buckling events .membrane stretching energy can be written as , where is the total membrane stretching energy , is the 2d young s modulus , is the strain , and the integrand is integrated over the membrane volume . 
for a thin , linearly elastic material , does not change much in the direction perpendicular to the surface , so , where the integral is calculated over the membrane surface area .the unstretched region has .further , even in the stretched / buckled membrane , most of the deflected region has , since its configuration is identical to the undeflected membrane except that its curvature is inverted ( fig .[ figs1ce ] a , b ) .thus , the only region under strain is the `` rim '' of the deformation ( fig .[ figs1ce ] a , b ) .if the entire membrane had experienced a constant radial displacement of , its radius would change from to , and the circumference would change from to . then the membrane strain would be . on the other hand , if ( as is the case for our samples ) the displacement is confined to a small region subtended by angle , then the in - plane length of this region changes from to , and the total strain in the membrane is .again , this estimate assumes that the interfacial deflection does not change in the -direction ( out - of - plane ) , i.e. , . within these approximations , .the integral is readily performed over an area normal to the glass plates described by , where is the in - plane length of the deflected region , and is the chamber height .thus , .the membrane bending energy can be written as , where is the total bending energy , is the bending rigidity , and is the membrane curvature . here , the curvature is , where is the coordinate in - plane along the membrane ( see fig .[ figs1ce ] a , b ) .the first derivative can be written as , as is the change in the membrane position over a distance of approximately in the direction .the second derivative can then be estimated as , as the first derivative changes from in the undeflected region to in the deflected region of approximate length .therefore , .( this approach again assumes that the second derivative of the deflection in the -direction is small , i.e. , . )the integral is readily performed over an area described by , and .the total energy from the deflection is .this energy is concentrated within the deflected rim ( i.e. , with width ) .membranes will buckle in the way that minimizes their energy . to derive this condition ,we minimize the total deflection energy with respect to , i.e. , . minimizing the total bending and stretching energygives the relation , .thus , by measuring and in a series of drops with the same particles and membrane characteristics , we can experimentally determine .( interestingly , drops out of the calculation , i.e. , a precise determination of is not necessary for this calculation within the assumptions listed above .also , note that this calculation is independent of the depth of the invagination ; the only requirement is that the deflection minimizes total membrane energy .finally , note that this derivation assumes that the interfacial displacement varies little in the -direction , i.e. , the air - water interface deflects the same distance at the top , middle , and bottom of the chamber . 
) in practice , we measure as the rim full - width located m from the rim vertex ( see fig . [ confined ] g and fig . [ figs1ce ] a , b and d ) . the exact value , however , is not very sensitive to the measurement protocol . for example , defining the width as the full - width at m or m from the rim vertex changes it by approximately percent . this simple experimental approach enables us to extract the ratio of the cmm bending rigidity to its young s modulus from measurements of the rim width and of the in - plane size of the deflected region across a series of drops , using the relation above . with all other parameters constant , e.g. , particle anisotropy , particle surface coverage , etc . , this formula predicts that . in fig . [ figs2ce ] b we show results from evaporated drops of particles with anisotropy and with different initial values of , plotting versus . a good linear relationship is observed ( coefficient of determination , r ) , implying that our analysis is self - consistent . similar high quality linear results were found for other values of . in principle , the air - water interface can be distorted in the -direction as well as in - plane . the analysis thus far has assumed these distortions are small , and it is possible to check that these corrections are small . using bright - field microscopy , we can identify the inner and outer position of the air - water interface and thus estimate the radius of curvature in the -direction ( fig . [ figs2ce ] a ) . we find that the radius of curvature is approximately equal to the chamber thickness ( m m ) both before and after buckling events . the relevant partial derivatives are then and ; therefore the corrections to the theory are indeed small . we extract and plot for evaporating drops of particles with different ( fig . [ figs2ce ] c ) . notice , increases with increasing , implying that as particle shape becomes more anisotropic , increases faster than , i.e. , the ratio is larger for ellipsoids ( and ) than for spheres ( ) . since we measure the ratio , in order to isolate the bending rigidity we require knowledge of the young s modulus of the membrane . previous experiments have observed that the cmm young s modulus increases with . for particles with and , we use previously reported values of the bulk modulus , the shear modulus , and the relationship between them in order to extract the cmm young s modulus . we were unable to find data for or , so we linearly interpolated from reported values of and . using these previously reported values , we obtained and n / m for and , respectively . [ figure caption : ( a - e , respectively ) . f. the area fraction covered by particles after evaporation is complete , _ f _ , for suspensions of particles as a function of their aspect ratio . ] utilizing these previously reported measurements and calculations we are able to plot versus ( fig . [ figs2ce ] d ) . the best power - law fit finds that . interestingly , this observation is consistent with theoretical models which predict . however , the full physical origin of this connection is unclear . further , while at first glance it may seem contradictory to claim that and , these formulae are consistent . a simple elastic model assumes that and , where is the 3d young s modulus and is the membrane thickness . based on this model , , so . thus , . to test this prediction , we plot versus ( fig .
[ figs3ce ] ) .the best power law fit is , implying that these two seemingly contradictory equations are in fact consistent .note , this simple elastic model suggests that kpa for all , which is similar to stiff jello .finally , our estimates of cmm bending rigidity are given in ( fig .[ figs2ce ] e ) . clearly , membrane bending becomes much more energetically costly with increasing particle shape anisotropy .finally , we turn our attention to the problem we initially hoped to understand : the consequences of increased bending rigidity on particle deposition during evaporation processes in confined geometries . as should be evident from our discussion in sections 1 - 3 , substantial efforthas now yielded understanding of the so - called coffee - ring effect and some ability to control particle deposition from sessile drops .much less is known , however , about particle deposition in confined geometries , despite the fact that many real systems and applications feature evaporation in geometries wherein the air - water interface is present only at the system edges .recent experiments have explored evaporation of confined drops containing spheres , and their behaviors differ dramatically from sessile drops containing spheres . in the confined case ,as noted previously , particles are pushed to the ribbon - like air - fluid interface , and , as evaporation proceeds , the particle - covered air - water interface often undergoes the buckling events which we have quantified in sections 4.1 and 4.2 .we find that deposition depends dramatically on suspended particle shape .the final deposition of particles is shown for , in fig .[ fig2ce ] a - e , respectively .spheres and slightly stretched spheres are deposited unevenly , while anisotropic ellipsoids are distributed much more homogeneously . to quantitatively describe the final deposition of particles , we plot the fraction of initial droplet area covered by deposited particles after evaporations , _ f _ ( as introduced in ) , as a function of particle anisotropy ( fig .[ fig2ce ] f ) .specifically , we divide the area into a grid of ( x ) squares .a region is considered to be covered if its area fraction within the square is greater than .( note , for uniformly deposited particles , the area fraction ( based on the initial volume fraction , initial volume , chamber height , and particle size ) would be .thus , the threshold we utilize is of this uniformly deposited area fraction ) .the number of covered regions is then normalized by the total number of squares in the grid , thus producing _ f_. the fraction of area covered with particles is observed to increase with . for and ,_ f _ increases modestly . for ,the deposition is very uniform , and for , virtually the entire area is covered uniformly .the mechanisms that produce the uneven deposition of spheres and slightly stretched particles and the uniform deposition of ellipsoids are revealed by high magnification images ( fig .[ fig3ce ] a - e ) .colloidal particles locally pin the contact line and thereby locally prevent its motion .so - called self - pinning of the air - water interface can occur even in very dilute suspensions , i.e. , . as evaporation continues in suspensions of spheres or slightly anisotropic particles , the cmm interface bends around the pinning site ( fig .[ fig3ce ] a - c ) . 
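as a brief aside , the coverage fraction _ f _ introduced above is straightforward to compute from a binarized image of the final deposit . the sketch below is our own minimal illustration of that grid - based procedure ; the grid size , the threshold , and the array conventions are assumptions chosen for illustration rather than the exact parameters of the original analysis .

```python
import numpy as np

def coverage_fraction(deposit_mask, n_grid=50, area_fraction_threshold=0.1):
    """Fraction of the initial drop area covered by deposited particles.

    deposit_mask : 2D boolean array, True where particles are deposited
                   (cropped to the initial drop footprint)
    n_grid       : the footprint is divided into an n_grid x n_grid grid
    area_fraction_threshold : a grid square counts as 'covered' if the
                   particle area fraction inside it exceeds this value
                   (an assumed illustrative threshold)
    """
    h, w = deposit_mask.shape
    rows = np.array_split(np.arange(h), n_grid)
    cols = np.array_split(np.arange(w), n_grid)
    covered = 0
    for r in rows:
        for c in cols:
            cell = deposit_mask[np.ix_(r, c)]
            if cell.mean() > area_fraction_threshold:
                covered += 1
    return covered / (n_grid * n_grid)
```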
when the interface bends around a pinning site in this way , either it pinches off , leaving particles behind , or it remains connected to the pinned site , leading to fluid flow into the narrow channel that has formed . the latter flow carries particles towards the pinning site ( fig . [ fig3ce ] b and c ) , thus producing streaks of deposited particles ( see fig . [ fig3ce ] a - c ) . temporal and spatial variations along the interface due to these effects lead to heterogeneous deposition of spherical particles during evaporation . conversely , when ellipsoids adsorb onto the air - water interface ( forming ribbon - like cmms , see fig . [ fig3ce ] d ) , they create an elastic membrane with a high bending rigidity . the bending rigidity of ellipsoid - populated cmms can be approximately two orders of magnitude larger than that of sphere - populated cmms ( see fig . [ figs2ce ] ) . thus , while ellipsoids may also pin the contact line , bending of the cmm interface around a pinned contact line is energetically costly . microscopically , bending requires the energetically expensive rearrangement of ellipsoids aggregated on the cmm ; attractive particle - particle capillary interactions on the air - water interface must be overcome for bending , even at very small . conversely , bending a sphere - coated cmm costs relatively little energy , as sphere - sphere capillary interactions on the interface are relatively weak . thus , as the confined drop continues to evaporate , the ellipsoid - coated cmm does not bend . it recedes radially , depositing ellipsoids near the contact line during this drying process . as we already demonstrated , mixing spheres and ellipsoids in sessile drops presents qualitatively new scenarios . it is natural to investigate the deposition of mixtures of spheres and ellipsoids in confined geometries . to this end , suspensions of nm spheres with were combined with suspensions containing micron - sized ellipsoids at lower volume fractions , to . the resulting colloidal drops were evaporated in the same confined geometries already utilized . the addition of a very small number of ellipsoids has no effect on the deposition of spheres . however , the addition of a larger , but still small , number of ellipsoids produces a uniform deposition of both ellipsoids and spheres , despite the fact that spheres significantly outnumber ellipsoids ( fig . [ fig3ce ] e ) . again , the high bending modulus produced by ellipsoids on the cmm helps explain the observations . both spheres and ellipsoids attach to the air - water interface . ellipsoids deform the air - water interface , creating an effective elastic membrane with a high bending rigidity . when enough ellipsoids are present , pinning and bending the interface becomes energetically costly and the spheres ( and ellipsoids ) are deposited as the interface recedes . further , this behavior in confined geometries is different from that of sessile drops ( see section 3.8 and ) . from this perspective , it is somewhat surprising that small spheres are deposited uniformly from droplets doped with small numbers of ellipsoids and confined between glass plates . interestingly , this method of producing a uniform deposition is similar to convective assembly techniques , wherein the substrate , or a blade over the substrate , is pulled away from the contact line in a colloidal suspension ; a thin film is thus formed that leads to the creation of a monolayer of particles ( e.g. , ) .
unlike many other convective assembly techniques , the present experimental system has neither moving nor mechanical parts . uniform coatings are created essentially as a result of shape - induced capillary attractions which produce cmms that are hard to bend . colloidal drops evaporating in confined geometries behave quite differently from evaporating sessile drops . ellipsoids adsorbed on the air - water interface create an effective elastic membrane , and , as particle anisotropy ( aspect ratio ) increases , the membrane s bending rigidity increases faster than its young s modulus . as a result , when a drop of a colloidal suspension evaporates in a confined geometry , the different interfacial elastic properties produce particle depositions that are highly dependent on particle shape . the ability to increase cmm bending rigidity by increasing particle shape anisotropy holds potentially important consequences for applications of cmms . for example , increased bending rigidity may help stabilize interfaces ( e.g. , pickering emulsions ) and thus could be useful for many industrial applications , e.g. , food processing . in a different vein , the observations presented here suggest the buckling behavior of cmms in confined geometries may be a convenient model system to investigate buckling processes relevant for other systems , e.g. , polymeric membranes , biological membranes , and nanoparticle membranes . in the previous sections , we showed how particle shape influences the behaviors of drying drops containing colloidal particles . for sessile drops we found that particle anisotropy could be employed to overcome the coffee - ring effect ; for drops in confinement , we found that particle anisotropy dramatically affected the bending rigidity of the air - water interfaces , which in turn modified particle deposition during drying . besides particle shape , many other ideas have been observed , developed , and utilized over the years to manipulate the drying behaviors of colloidal drops . in the final section of this review , we describe our foray into the effects of added surfactants . a surfactant is a surface - active molecule that consists of a hydrophobic and a hydrophilic part . in water , surfactant molecules populate the air - water interface with their hydrophobic parts `` sticking out of the water , '' thereby reducing the water s surface tension ( which is paramount for the cleaning effects of soaps or dishwashing detergents ) . in an immiscible mixture of water and oil , surfactants populate the interfaces between the components , thus stabilizing the emulsion . in an evaporating drop of an aqueous colloidal suspension , surfactants give rise to other effects . herein we describe video microscopy experiments which investigate how a small ionic surfactant ( mostly sds ) affects particle deposition in drying drops ; these surfactants induce a concentration - driven marangoni flow on the air - water interface and a strong `` eddy''-like flow in the bulk that prevents particles from depositing in the coffee ring and thus suppresses the coffee - ring effect for spheres . although we focus here primarily on small ionic surfactants , we have explored the effects of a variety of surfactants . in general , common types are small , ionic surfactants , e.g. , sodium dodecyl sulfate ( sds ) , or large , polymeric ones , e.g.
, pluronics ; the chemical structures of both examples are shown in fig .[ fig : structures ] .accordingly , surfactants can affect deposition phenomena in a variety of ways .for example , it was found that sds can change the deposition patterns from aqueous colloidal drops . in different experiments, surfactant is sprayed onto the drop , leading to complex patterns as a result of thermodynamic transitions between different phases formed by the surfactant . if the surface tension is heterogeneous on a liquid surface ( e.g. , the air - water interface of a drop ) a flow is induced from regions of low to high surface tension .this effect is the so - called marangoni flow .such marangoni flows can result from different temperatures at drop edge and center , e.g. , because of different evaporation rates and slow diffusive heat transfer ; thus , in principle such a flow should be present at the air - water interfaces of drying liquid drops .indeed , marangoni radial flows towards the center of a drop have been found in small drops of octane . in water , however , such temperature - dependent marangoni flows are suppressed .in addition to temperature - driven changes of the surface tension , surfactant - driven marangoni flows have been suggested to explain the relatively uniform deposition of dissolved polymer from droplets of organic solvent containing surfactant .when the local concentration of surfactant molecules at the pinned contact line increases due to the coffee - ring effect , then the surface tension of the drop decreases locally , and a gradient in surface tension arises .this gradient has been suggested as the source of continuous marangoni flow towards the center of the drop .herein , we first investigate the mechanism of a small ionic surfactant , sds , on the evaporation of aqueous colloidal systems and their resulting particle coatings .the experiments demonstrate that such small ionic surfactants do indeed produce marangoni flows in colloidal droplets , not only in agreement with the model suggested for polymer solutions , but also providing a first direct visualization .we further demonstrate how the `` marangoni eddy '' can lead to uniform particle deposition during drying , thereby undermining the coffee ring effect . at the end of this section on surfactantswe show preliminary experiments which demonstrate that large polymeric surfactants like pluronic f-127 influence the evaporation of drops in a strikingly different way than small ionic surfactants . in this case ,contact line pinning is prevented , leading to a uniform particle deposition .we suggest an explanation of this behavior as due mainly to an increase of viscosity near the contact line , which is a result of high polymer concentration because the dissolved polymeric surfactant is transported to the contact line by the coffee - ring flow . 
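the surfactant - driven marangoni mechanism invoked in this section can be stated compactly through the standard tangential stress balance at a free liquid surface . this is textbook interfacial fluid mechanics rather than a result of these experiments , and the symbols below are defined in the accompanying comments .

```latex
% tangential (Marangoni) stress balance at the air-water interface:
% a surface-tension gradient along the interface must be balanced by the
% viscous shear stress exerted by the liquid just beneath it
\begin{equation}
  \eta \left.\frac{\partial u_t}{\partial n}\right|_{\mathrm{surface}}
  \;=\; \nabla_{\!s}\,\gamma ,
\end{equation}
% where \eta is the liquid viscosity, u_t the velocity tangential to the
% surface, n the coordinate normal to the surface, and \nabla_s the gradient
% taken along the surface. Since \gamma decreases where surfactant
% accumulates (near the pinned contact line), the induced surface flow is
% directed away from the contact line, toward the drop center.
```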
the procedure for these experiments has been described previously , but for completeness and presentation clarity we briefly discuss these methodologies below . we focus on a few representative systems of evaporating drops with the small ionic surfactant sds or the large polymeric surfactant pluronic f-127 , respectively , and we attempt to elucidate rules governing their behavior . we employed aqueous suspensions of colloidal polystyrene ( ps ) particles ( diameter nm , synthesized by surfactant - free radical emulsion polymerization , and stabilized by sulfate groups ) . suspensions were prepared with deionized water , filtered by a millipore column , and then the suspensions of ps spheres and sds ( sigma - aldrich ) or pluronic f-127 ( basf ) ( in different compositions ) were thoroughly mixed by a vortexer and ultrasonicated for five minutes . evaporation experiments were observed using a bright - field microscope with air objectives ( magnification 5x to 100x ) . clean hydrophilic glass substrates ( fisher scientific ) were used as evaporation substrates . ( note : qualitatively similar results were found on hydrophobic cover slips . ) the drop volume was about 0.05 l , leading to deposition coatings with diameters of 1 to 3 mm . the evaporation process was recorded by video microscopy ( camera resolution 658x494 pixels , 60 frames per second ) , with total evaporation times between 2 and 4 minutes . all experiments were repeated several times in order to identify a consistent concentration - dependent behavior . photographs of the entire deposit , obtained after evaporation , were taken by combining up to four high - resolution photographs when the deposition area was larger than the microscope field of view . fig . [ fig : sds1 ] a - d shows top views of the deposition pattern of an aqueous suspension of ps spheres ( 0.5 wt% ) ( a ) and of similar suspensions with different concentrations of sds ranging from 0.05 wt% to 1.0 wt% ( b - d ) . the coffee - ring effect is observed in sample a , i.e. , the vast majority of spheres are deposited in a thin ring located at the initial pinned contact line , and very few particles are deposited in the center of the drop . the deposition changes slightly upon adding a small amount of sds ( 0.05 wt% , b ) . specifically , the coffee ring broadens and more particles are deposited in the center of the drop . at higher sds concentrations ( 0.5 wt% ( c ) and 1.0 wt% ( d ) ) , however , the deposition pattern changes drastically . instead of a single ring at the initial pinned contact line , tree - ring - like structures are observed with several distinct deposition lines . these tree - ring deposition structures can be explained by stick - slip dynamics of the drop s contact line along a large part of the perimeter ; after the edge depins and the drop shrinks , a new contact line stabilizes very quickly via self - pinning by other ps particles in suspension . multiple depinning and repinning events produce the observed pattern . inside these tree - ring - like structures , both systems exhibit relatively uniform depositions of spheres about their centers , surrounded by dark `` flares '' . clearly , the addition of surfactant has a large influence on how the particles are deposited . but how can the effects we observe be explained ?
to answer this question , we studied the temporal evolution of drops by video microscopy during evaporation . fig . [ fig : sds2 ] shows snapshots of the drying drop in fig . [ fig : sds1]c ( 0.5 wt% sds ) at three different stages : * d * at the beginning of evaporation , * e * after about 50% of the total evaporation duration has passed , and * f * after evaporation is complete . high magnification images taken from a similar drop with identical composition are shown as well . the first thing we noticed is that when the evaporation starts , drop behavior appears identical to that of drops without sds , i.e. , the contact line is pinned and spheres initially arrange in a densely packed structure at the drop s edge ( see fig . [ fig : sds2]e inset ) . the image gets progressively darker towards the drop center where the drop is thickest as particles in the bulk are evenly distributed . however , even at this early stage of evaporation , some spheres flow towards the drop edge but do not reach it . rather , as they approach the edge , they are repelled back towards the drop center . as we know from the early studies of the coffee ring effect , with advancing evaporation time , the flow towards the edge becomes stronger . in our experiment , more and more particles approach the edge but do not reach it . these particles appear to be captured within a certain region of the drop , which is highlighted yellow in fig . [ fig : sds2]f . they form a broad corona , i.e. , an outer rim distinctly different and separated from the inner part of the drop , located between the relatively uniform dark center and the coffee - ring . we describe the dark part of the corona as a `` marangoni eddy '' or circulating region of ps spheres that are transported towards and away from the drop edge throughout the drying process ( see fig . [ fig : sds2]f ) . ps spheres are pulled into the eddy ( see fig . [ fig : sds2]f ) , leading to a locally reduced number of particles in the depletion zone ( that explains why this region is less dark than the other regions ) . the trajectory of an individual sphere is marked by the three numbers ( 1 , 2 , 3 ) in fig . [ fig : sds2]f . initially , the sphere is approximately in the middle of the eddy ( 1 ) . after seconds , the sphere is pushed radially outward , i.e. , towards the coffee ring ( 2 ) . however , after another seconds , the sphere is pushed radially inward , i.e. , towards the region between the eddy and the depletion zone ( 3 ) . video microscopy shows us that the same behavior is observed for virtually all of the particles at later times during the evaporation ( see supporting online material ) . the experiments provide evidence that the observed deposition behavior is dominated by a surfactant - driven marangoni effect . as noted above , related surfactant - driven phenomena were recently observed in drying polymer solutions containing oligomeric fluorine - based surfactants . however , the previously studied polymer solutions differ qualitatively from the aqueous colloidal suspensions presented here . drying polymer solutions can exhibit gelation . further , the local surface tension in drying drops of polymer solutions depends on the local solute concentration . observing particle motion in real - time facilitates a comprehensive understanding of this phenomenon . a cartoon of the mechanism is shown in fig . [ fig : sds2]g . the `` eddy '' forms in between the yellow bars in fig . [ fig : sds2]g , which corresponds to the highlighted region in fig .
[ fig : sds2]f ( top view ) . shortly after a drop is created , some surfactant molecules ( pictured in the cartoon as hydrophilic `` heads '' with hydrophobic `` tails '' ) adsorb on the water / air interface . note that the air - water interface is the energetically preferred location for the amphiphilic sds molecules . however , electrostatic repulsion of the anionic heads prevents them from forming a maximally dense steric equilibrium packing . additionally , at sufficiently high concentration sds molecules are also dissolved in the bulk , either freely or as micelles ( see fig . [ fig : sds2]d ) . as was the case when no surfactant is added , the contact line is initially pinned . thus , the outward convective flow that is responsible for the coffee - ring effect is present and transports spheres and sds molecules to the drop edge . as a result , the air - water interface near the contact line becomes more concentrated with sds molecules which locally decrease the interfacial surface tension . this creates a surface tension gradient along the air - water interface , which is resolved by a marangoni flow from regions of low to regions of high surface tension . this strong surface flow penetrates into the bulk fluid , so that it can carry colloidal spheres near , but not on , the interface towards the drop center . as spheres flow towards the drop center , the local sds concentration decreases ( and the local surface tension increases ) , and the marangoni flow weakens . eventually , the radially outward bulk convective flow that drives the coffee - ring effect dominates the radially inward marangoni flow . particles that travel to this point are carried towards the drop s edge once again ; the process then repeats . although the sds molecules are too small to be observed optically , sds molecules likely participate in the eddy as well . otherwise , the surface would become saturated with sds , which , in turn , would end or at least weaken the marangoni flow . thus , particles are trapped in a circulating flow driven by the local surfactant concentration , which we call the `` marangoni eddy '' . we find that the marangoni flows become stronger at sds concentrations above its critical micelle concentration ( cmc ) in water ( 8.3 mm.2 wt% ) . again , as was the case for drops without sds , at late times the contact line is observed to depin . however , due to the presence of the marangoni eddy , when the final depinning occurs , many particles are left in the more central regions of the bulk , because the eddy prevented them from attaching to the edge . after the contact line depins , the particles that remain in the bulk are deposited onto the substrate as the radially - inward - moving contact line passes them . the radially - inward contact line thus leaves behind a relatively uniform particle deposition in the drop center . the formation of a `` marangoni eddy '' is a prerequisite for the relatively uniform particle deposition in the drop center ; it prevents many particles from depositing at the drop s edge , and thereby delays their deposition until times when the contact line has depinned . [ figure caption : drops containing ps particles ( = 1330 nm ) and different concentrations of pluronic f-127 ( indicated under the pictures ) ; the drops were evaporated on a hydrophilic microscope slide , the initial drop volume was about 0.05 l , and the resulting deposition area is between 1 - 2 mm in diameter . ]
[ figure caption : * e - h * three sets of snapshots of evaporating water drops containing 1.0 wt% ps particles ( = 1330 nm ) and 2 wt% pluronic f-127 at different stages of the evaporation . ] lastly , we describe preliminary experiments with non - ionic triblock polymer surfactants such as pluronic f-127 ( cf . fig . [ fig : structures ] ) . fig . [ fig : pluronic1 ] is analogous to fig . [ fig : sds1 ] , but with 1 wt% ps particles and 0.5 , 1.0 , and 2.0 wt% pluronic . pluronic f-127 is a relatively large molecule ( g / mol ) ; in the investigated drops , the amount of pluronic is about the same ( by weight ) as that of the colloidal spheres . the addition of pluronic leads to a systematic change in the deposition pattern ; as more surfactant is added , the initial ring becomes broader , and eventually the entire area is coated uniformly ( on a macroscopic scale ) with ps spheres . additionally , at lower concentrations of pluronic , complex deposition patterns appear in the center of the drop area . video microscopy indicates that a marangoni eddy is not present during drying . instead , pluronic induces an early depinning of the contact line and a loose packing of spheres . why does n't a surfactant like pluronic f-127 produce a `` marangoni eddy '' ? like sds , the pluronic f-127 molecule is amphiphilic , and the coffee - ring effect should also transport it to the edge where it could , in principle , give rise to the same flow effects as sds . cui et al . attribute similar behaviors found in samples containing poly(ethylene oxide ) to a combination of several effects . their most important argument is that dissolved polymer is transported to the drop edge where it leads to a dramatic increase of viscosity such that the suspended colloidal particles are immobilized before they reach the contact line . this argument does not provide insight about why the contact line should move , but we speculate that the hydrophobicity of the deposited polymer ( in our case , surfactant ) may play a crucial role . interestingly , some other surfactants in the pluronic family , with essentially the same structure ( cf . fig . [ fig : structures ] ) but different block lengths , show a strong marangoni eddy . specifically , different pluronics ( all basf ) were explored and their deposition patterns are shown in fig . [ fig : pluronic6 ] ; these surfactants include pluronic f-68 prill , p-85 , and p-123 . for all three pluronics , fig . [ fig : pluronic6 ] shows a snapshot of the evaporating drop on the left and a dark - field microscopy photograph of the deposition after drying is completed . interestingly , for f-68 ( a ) and p-85 ( b ) , a marangoni eddy similar to that seen for sds appears . correspondingly , the deposition pattern of drops with these surfactants is more similar to the case of sds than to that of pluronic f-127 . on the other hand , pluronic p-123 ( c ) leads to the same phenomenon as f-127 , revealing a mostly uniform , loose deposition with no evidence of a marangoni eddy . a comparison of the molecular properties of all poloxamer surfactants shows that neither the total molecular weight nor the block - length ratio is the parameter that governs the drop evaporation . in total , the examples in this review illustrate the complexity of the coffee - ring problem . only in a few carefully controlled situations is the deposition of particles from a drying drop dominated by a single effect .
in most cases , several cooperative or antagonistic effects act at the same time , preventing easy predictions and phenomenological understanding of the underlying principles needed to open new pathways towards further technological application of the coffee ring effect or its circumvention . we thank matthew gratale , matthew a. lohr , kate stebe , and tom c. lubensky for helpful discussions . we also gratefully acknowledge financial support from the national science foundation through dmr-0804881 , the penn mrsec dmr11 - 20901 , and nasa nnx08ao0 g . | we explore the influence of particle shape on the behavior of evaporating drops . a first set of experiments discovered that particle shape modifies particle deposition after drying . for sessile drops , spheres are deposited in a ring - like stain , while ellipsoids are deposited uniformly . experiments elucidate the kinetics of ellipsoids and spheres at the drop s edge . a second set of experiments examined evaporating drops confined between glass plates . in this case , colloidal particles coat the ribbon - like air - water interface , forming colloidal monolayer membranes ( cmms ) . as particle anisotropy increases , cmm bending rigidity was found to increase , which in turn introduces a new mechanism that produces a uniform deposition of ellipsoids and a heterogeneous deposition of spheres after drying . a final set of experiments investigates the effect of surfactants in evaporating drops . the radially outward flow that pushes particles to the drop s edge also pushes surfactants to the drop s edge , which leads to a radially inward flow on the drop surface . the presence of radially outward flows in the bulk fluid and radially inward flows at the drop surface creates a marangoni eddy , among other effects , which also modifies deposition after drying .
a _ market _ is a set of arrangements by which buyers and sellers , collectively known as _ traders _ , are in contact to exchange goods or services ._ auctions _ , a subclass of markets with strict regulations governing the information available to traders in the market and the possible actions they can take , have been widely used in solving real - world optimization problems , and in structuring stock or futures exchanges .the most common kind of auction is the _ english auction _ , in which there is a single seller , and multiple buyers compete by making increasing bids for the commodity ( good or service ) being auctioned ; the one who offers the highest price wins the right to purchase the commodity . since only one type of trader buyers makes offers in an english auction , the auction belongs to the class of _ single - sided auctions_. another common single - sided auction is the _ dutch auction _ , in which the auctioneer initially calls out a high price and then gradually lowers it until one bidder indicates they will accept that price .another class of single - sided auctions is the class of _ sealed - bid auctions _ , in which all buyers submit a single bid and do so simultaneously , i.e. , without observing the bids of the others or if the others have bid .two common sealed - bid auctions are the _ first - price auction _ and the _ second - price auction _ or _ vickrey auction _ . in both types of sealed - bid auctions ,the highest bidder obtains the commodity . in the former, the highest bidder pays the price they bid , while in the latter , they pay the second highest price that was bid . these four single - sided auctions english , dutch , first - price sealed - bid , and vickrey are commonly referred to as the _ standard auctions _ and were the basis of much early research on auctions .in addition , there are _ double - sided auctions _ or s , in which both sellers and buyers make offers , or _shouts_. the two most common forms of da are _ clearing houses _ or chs and _ continuous double auctions _ or cdas . in a ch , an auctioneer first collects _bids_shouts from buyers and _ asks_shouts from sellers , and then clears the market at a price where the quantity of the commodity supplied equals the quantity demanded . this type of market clearing guarantees that if a given trader is involved in a transaction , all traders with more competitive offers are also involved . in a cda ,a trader can make a shout and accept an offer from someone at any time .this design makes a cda able to process many transactions in a short time , but permits extra - marginal traders to make deals .both kinds of are of practical importance , with , for example , variants being widely used in real - world stock or trading markets including the new york stock exchange ( ) and the chicago mercantile exchange ( ) . in some auctions, traders can place shouts on combinations of items , or packages " , rather than just individual items .they are called _combinatorial auctions_. 
a common procedure in these markets is to auction the individual items and then at the end to accept bids for packages of items .combinatorial auctions present a host of new challenges as compared to traditional auctions , including the so - called _ winner determination problem _, which is how to efficiently determine the allocation once the bids have been submitted to the auctioneer .traders , in some cases , are allowed to both sell and buy during an auction .such traders are called _ two - way traders _ , while those that only buy or only sell are called _ one - way traders_. this report will mainly discuss non - combinatorial s , especially s , populated by one - way traders .a central concern in studies of auction mechanisms are the supply and demand schedules in a market .the quantity of a commodity that buyers are prepared to purchase at each possible price is referred to as the _ demand _ , and the quantity of a commodity that sellers are prepared to sell at each possible price is referred to as the _supply_. thus if is plotted as a function of , the _ demand curve _ slopes downward and the _ supply curve _ slopes upward , as shown in figure [ fig : underlying - supply - demand ] , since the greater the price of a commodity , the more sellers are inclined to sell and the fewer buyers are willing to buy .typically , there is some price at which the quantity demanded is equal to the quantity supplied . graphically , this is the intersection of the demand and supply curves .the price is called the _ equilibrium price _ , and the corresponding quantity of commodity that is traded is called the _equilibrium quantity_. the equilibrium price and equilibrium quantity are denoted as and respectively in figure [ fig : underlying - supply - demand ] .each trader in an auction presumably has a limit price , called its _ private value _ , below which sellers will not sell and above which buyers will not buy .the private values of traders are not publicly known in most practical scenarios .what is known instead are the prices that traders offer .self - interested sellers will presumably offer higher prices than their private values to make a profit and self - interested buyers tend to offer lower prices than their private values to save money .the prices and quantities that are offered also make a set of supply and demand curves , called the _ apparent supply and demand curves _, while the curves based on traders private values are called the _ underlying supply and demand_. figure [ fig : apparent - supply - demand ] shows that the apparent supply curve shifts up compared to the underlying supply curve in figure [ fig : underlying - supply - demand ] , while the apparent demand curve shifts down .when traders are excessively greedy , the apparent supply and demand curves do not intersect and thus no transactions can be made between sellers and buyers unless they compromise on their profit levels and adjust their offered prices . in a , buyers and sellers not only ` haggle ' on prices in a collective manner , but they also face competition from opponents on the same side of the market . 
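as a toy illustration of how the equilibrium price and quantity arise from a set of discrete private values , the short python sketch below ( the function name and the numbers are ours and purely illustrative , not taken from the text ) sorts the buyers values into a demand curve and the sellers values into a supply curve and finds their crossing :

def competitive_equilibrium(buyer_values, seller_values):
    """find the equilibrium quantity and the range of market-clearing prices
    from traders' private values (one unit per trader)."""
    demand = sorted(buyer_values, reverse=True)   # demand curve, highest value first
    supply = sorted(seller_values)                # supply curve, lowest value first
    q = 0
    while q < min(len(demand), len(supply)) and demand[q] >= supply[q]:
        q += 1          # one more unit can profitably change hands
    if q == 0:
        return 0, None  # the curves do not cross: no trade is possible
    lo = max(supply[q - 1], demand[q] if q < len(demand) else float("-inf"))
    hi = min(demand[q - 1], supply[q] if q < len(supply) else float("inf"))
    return q, (lo, hi)  # any price in [lo, hi] equates quantity supplied and demanded

# invented example: equilibrium quantity 3, clearing prices between 100 and 105
print(competitive_equilibrium([140, 125, 105, 90], [80, 95, 100, 130]))

whether the marginal units actually trade in a real market , and at what price , still depends on the collective haggling and the within - side competition described above .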
thus buyers , for example , are not only collectively trying to drive prices down , against the wishes of sellers , but they are also individually trying to ensure that they , rather than other buyers , make profitable trades .this leads to shouts becoming more and more competitive over time in a given market .figure [ fig : time - series ] shows a typical time series of shouts in a .ask prices usually start high while bid prices start low .gradually , traders adjust their offered prices , or make new shouts , closing the gap between standing asks and bids until the price of a bid surpasses that of an ask .such an overlap results in a transaction , shown as a solid bar between the matched ask and bid in figure [ fig : time - series ] . in the marketdepicted in figure [ fig : time - series ] , newly placed bids ( asks ) do not have to beat the outstanding bids ( asks ) .however in some variants of the including the market operated by the , new shouts must improve on existing ones .this requirement is commonly referred to as the _ shout improvement rule _ . in some real - world stock markets , including the and the markets , trades are made through _ specialists _ or _ market makers _ , who buy or sell stock from their own inventory to keep the market liquid or to prevent rapid price changes .each specialist is required to publish on a regular and continuous basis both a _ bid quote _ , the highest price it will pay a trader to purchase securities , and an _ ask quote _ , the lowest price it will accept from a trader to sell securities . the specialist is obligated to stand ready to buy at the bid quote or sell at the ask quote up to a certain number of shares .the range between the lower bid quote and the higher ask quote is called the _ bid - ask spread _ , which , according to stock exchange regulations , must be suitably small . if buy orders temporarily outpace sell orders , or conversely if sell orders outpace buy orders , the specialist is required to use its own capital to minimize the imbalance .this is done by buying or selling against the trend of the market until a price is reached at which public supply and demand are once again in balance .maintaining a bid - ask spread creates risk for a specialist , but when well maintained , also brings huge profits , especially in an active market .markets involving specialists that present quotes are called _ quote - driven markets_. another class of markets are _ order - driven markets _ , in which all of the orders of buyers and sellers are displayed .this contrasts with quote - driven markets where only the orders of market makers are shown .an example of an order - driven market is the market formed by _ electronic communication networks _ or s. these are electronic systems connecting individual traders so that they can trade directly between themselves without having to go through a middleman like a market maker .the biggest advantage of this market type is its transparency .the drawback is that in an order - driven market , there is no guarantee of order execution , meaning that a trader has no guarantee of making a trade at a given price , while it is guaranteed in a quote - driven market .there are markets that combine attributes from quote- and order - driven markets to form hybrid systems. our discussion above may give the impression that in real markets trade orders are made directly by the individuals who want to buy or sell stock . 
in practice , traders commonly place orders through brokerage firms , which then manage the process of executing the orders through a market .auctions with different rules and populated by different sets of traders may vary greatly in performance .popular performance measurements include , but are not limited to , _ allocative efficiency _ and the _ coefficient of convergence_. the allocative efficiency of an auction , denoted as , is used to measure how much social welfare is obtained through the auction .the _ actual overall profit _ , , of an auction is : where is the transaction price of a trade completed by agent and is the private value of agent , where ranges over all agents who trade .the _ theoretical _ or _ equilibrium profit _, , of an auction is : for all agents whose private value is no less competitive than the equilibrium price , where is the equilibrium price .given these : is thus a measure of the proportion of the theoretical profit that is achieved in practice .the convergence coefficient , denoted as , was introduced by smith to measure how far an active auction is away from the equilibrium point .it actually measures the relative deviation of transaction prices from the equilibrium price : since markets with human traders often trade close to the equilibrium price , is used as a way of telling how closely artificial traders approach human trading performance .research on auctions originally interested mathematical economists .they view auctions as games and have successfully applied traditional analytic methods from game theory .this section therefore takes an overlook at basic concepts in game theory .the games studied by game theory are well - defined mathematical objects .a game is usually represented in its _ normal form _ , or _ strategic form _ , which is a tuple is the number of players , is the set of _ actions _ available to player , and is the _ payoff _ or _ utility function _ , where is the joint action space .when a player needs to act , it may follow a _ pure strategy _, choosing an action , , from its action set , or a _ mixed strategy _ , , choosing actions according to a probability distribution .the strategy set of player , denoted as , is the same thing as a set of probability distributions over , denoted as .a joint strategy for all players is called a _ strategy profile_ , denoted as , and is the probability all players choose the joint action from .thus player s payoff for the strategy profile is : in addition , denotes the set of all possible strategy profiles , . ] is a strategy profile for all players except , and is the strategy profile where player uses strategy and the others use .a normal - form game is typically illustrated as a matrix with each dimension listing the choices of one player and each cell containing the payoffs of players for the corresponding joint action .figure [ fig : normal - form - prisoner - dilemma ] shows the normal form of the well - known prisoner s dilemma game .alternatively , games may be represented in _ extensive form _ , which is a tree , as in figure [ fig : extensive - form - game ] .the tree starts with an initial node and each node represents a state during play . at each non - terminal node, a given player has the choice of action .different choices lead to different child nodes , until a terminal node is reached where the game is complete and the payoffs to players are given . 
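for reference , the two market performance measures introduced earlier in this passage are usually written as follows ( the notation and layout here are ours , reconstructed from the standard definitions rather than copied from the original ) :

\[ p_a = \sum_{i} | p_i - v_i | , \qquad p_e = \sum_{j} | p_0 - v_j | , \qquad e_a = 100 \times \frac{p_a}{p_e} , \]

where the first sum runs over all agents who actually trade , the second over all agents whose private values are no less competitive than the equilibrium price , p_i is the transaction price obtained by agent i , v_i its private value and p_0 the theoretical equilibrium price . smith s convergence coefficient is the relative standard deviation of the n transaction prices \tilde{p}_1 , \ldots , \tilde{p}_n around the equilibrium price ,

\[ \alpha = \frac{100 \, \sigma_0}{p_0} , \qquad \sigma_0 = \sqrt{ \frac{1}{n} \sum_{j=1}^{n} ( \tilde{p}_j - p_0 )^2 } . \]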
[ figure [ fig : extensive - form - game ] : an extensive - form game tree ( drawing omitted ) . ] a game may be _ cooperative _ or _ noncooperative _ , as players in these games are respectively _ cooperative _ or _ self - interested _ . cooperative players share a common payoff function , i.e. , whereas self - interested players typically have distinct payoff functions . in both cases , players need to coordinate in a certain way to ` assist ' each other in achieving their goals . if the payoffs of all players for each strategy profile sum to zero , the noncooperative game is called a _ zero - sum _ game , i.e. , zero - sum games are a special case of a more general class of games called _ constant - sum _ games , where the sum of all payoffs for each outcome is a constant but may not necessarily be zero . non - zero - sum games are sometimes referred to as _ general - sum _ games . in economic situations , the exchange of commodities is considered general - sum , since both parties gain more through the transaction than if they had not transacted ( otherwise the exchange would not have happened , assuming both are rational ) . in some games , the payoffs for playing a particular strategy remain unchanged as long as the other strategies employed collectively by the players are the same , no matter which player takes which action . these games are called _ symmetric games _ and the rest are _ asymmetric games _ . for example , the prisoner s dilemma game given above is symmetric . players may take actions _ simultaneously _ or _ sequentially _ . in a sequential game , players have alternating turns to take actions and a player has knowledge about what actions the other players have taken previously . simultaneous games are usually represented in normal form , and sequential games are usually represented in extensive form . a sequential game is considered a game of _ perfect information _ if all players know all the actions previously taken by the other players . a similar concept is a game of _ complete information _ , which means all players in the game know the strategies and payoff functions of the other players . in some sense , complete information may be viewed as capturing static information about a game while perfect information addresses dynamic information that becomes available during runs ( or instances ) of the game . there are various solutions to a normal - form game depending upon the properties of the game and preferences over outcomes . a strategy is said to be _ dominant _ if it always results in higher payoffs than any other choice no matter what the opponents do . in the example of the prisoner s dilemma game , the choice * defect * dominates * cooperate * for either player , though ironically both will be better off if they choose to cooperate and know that the other will also . in many games there are , however , no dominant strategies .
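in standard textbook notation ( the symbols below are our choice and are not claimed to match the original formulas ) , the zero - sum condition and the dominance condition just described can be written as

\[ \sum_{i=1}^{n} u_i ( a ) = 0 \quad \text{for every joint action } a \in A , \]

and a strategy s_i^{*} is ( weakly ) dominant for player i iff

\[ u_i ( s_i^{*} , s_{-i} ) \ge u_i ( s_i , s_{-i} ) \quad \text{for all } s_i \in S_i \text{ and all } s_{-i} \in S_{-i} , \]

with strict inequality for all s_i \neq s_i^{*} giving strict dominance , as in the prisoner s dilemma example .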
to conservatively guarantee the best worst - case outcomes , a player may play the _ minimax _ strategy , which is where and are respectively a joint action for all players except and the set of joint actions for them . in theory , this can be solved via linear programming , but clearly there are many games that are too large to be solved in practice .another approach to solving the problem is to find the _best response _ strategies to the strategies of the other players . these can be defined as a joint strategy forms a _ nash equilibrium _ or if each individual strategy is the best response to the others strategies .when a is reached , no player can be better off unilaterally , given that the other players stay with their strategies . in the example of the prisoner s dilemma game , * , defect * is a .although nash showed that all finite normal - form games have at least one , nash equilibria are generally difficult to achieve . on the one hand , conitzer and sandholm proved that computing nash equilibria is likely -hard ; on the other hand , some games involve more than one , thus without some extra coordination mechanism , no player knows which equilibrium the others would choose .many papers have been concerned with equilibrium refinements " so as to make one equilibrium more plausible than another , however it seems to lead to overly complicated models that are difficult to solve .a more practical approach is to allow players to learn by playing a game repeatedly .a _ repeated game _ is a game made up from iterations of a single normal - form game , in which a player s strategy depends upon not only the one - time payoffs of different actions but also the history of actions taken by its opponents in preceding rounds .such a game can be viewed as a system with multiple players and a single state , since the game setting does not change across iterations .if the setting changes over time , the game becomes a _stochastic game_. a stochastic game involves multiple states and the player payoff functions relate to both their actions in each interaction and the current state .the goal of a player in such a game is to maximize its long - term return , which is sometimes defined as the average of all one - time payoffs or the discounted sum of those payoffs .brown introduced a learning method , called _ fictitious play _ , for games in which all the other players use stationary strategies . with this method, the player in question keeps a record of how many times the other players have taken each action and uses the frequencies of actions to estimate the probabilities of actions in an opponent s strategy . 
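written out in our own notation ( these are the standard textbook forms , not a quotation of the original formulas ) , the three notions above are

\[ s_i^{\mathrm{mm}} \in \arg\max_{s_i \in S_i} \; \min_{a_{-i} \in A_{-i}} u_i ( s_i , a_{-i} ) \quad \text{( minimax / maximin strategy )} , \]

\[ br_i ( s_{-i} ) = \{ s_i \in S_i : u_i ( s_i , s_{-i} ) \ge u_i ( s_i' , s_{-i} ) \;\; \forall s_i' \in S_i \} \quad \text{( best response )} , \]

and a profile s^{*} is a nash equilibrium iff s_i^{*} \in br_i ( s_{-i}^{*} ) for every player i .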
then the player chooses a best - response strategy based on its belief .if the player s belief converges , what it converges to and its own best - response strategy form a .the method becomes flawed if players adopt non - stationary strategies and in some games the belief simply does not converge .another promising approach is to analyze the situation with evolutionary methods .this approach assumes there is a large population of individuals and each strategy is played by a certain fraction of these individuals .then , given the distribution of strategies , individuals with better average payoffs will be more successful than others , so that their proportion in the population will increase over time .this , in turn , may affect which strategies are better than others .in many cases , the dynamic process will move to an equilibrium .the final result , which of possibly many equilibria the system achieves , will depend on the initial distribution .the evolutionary , population - dynamic view of games is useful because it does not require the assumption that all players are sophisticated and think the others are also rational , an assumption that is often unrealistic . instead , the notion of _ rationality _ is replaced with the much weaker concept of _ reproductive success_. a related concept , considering the overall outcome rather than individual payoffs , is _pareto optimality_. a strategy profile , , is _ pareto optimal _ , or _pareto efficient _ , if there exists no other strategy profile producing higher payoffs for all players , i.e. , in the prisoner s dilemma game , all pure strategy profiles except for * , defect * are pareto optimal .a pareto optimal outcome is highly desirable , but usually difficult to achieve .self - interested players tend to take locally optimal actions that may not collectively be pareto optimal . in the prisoner s dilemma game , * , cooperate * instead of the * , defect * obviously causes both players to be better off . in games of incomplete information , to utilize the concept of , each player needs to maintain an estimate of the others strategies so as to come up with a best - response strategy , where bayes theorem is used to update or revise beliefs following interactions with opponents .the concept of equilibrium therefore becomes _ bayesian nash equilibrium _ , or . that is each players strategy is a function of her own information , and maximizes her expected payoff given other players strategies and given her beliefs about other players information .auctions are a way to enable interactions among traders , and traders make profits as a result of transactions .vickrey pioneered the approach of thinking about a market institution as a game of incomplete information since traders do not know each others private values . in research on single - sided auctions ,the main goal is to find mechanisms that maximize the profit of sellers , who are special players in the auctioning games .while in double - sided auctions , research focuses on maximizing social welfare and identifying how price formation develops dynamically . by assuming a fixed number of symmetric " , risk - neutral bidders , who each want a single unit of goods , have a private value for the object , and bid independently , vickrey showed that the seller can expect equal profits on average from all the standard types of auctionsthis finding is called the _ revenue equivalence theorem_. 
this theorem provides the foundation for the analysis of _ _ optimal auctions _ _ and much subsequent research can be understood in terms of this theorem .numerous articles have reported how its results are affected by relaxing the assumptions behind it .the assumption that each trader knows the value of the goods being traded , and that these values are all private and independent of each other is commonly called the _ private - value model _ . in some cases , by contrast , the actual value of the goods is the same for everyone , but bidders have different private information about what that value actually is . in these cases ,a bidder will change her estimate of the value if she learns another bidder s estimate , in contrast to the private - value case in which her value would be unaffected by learning any other bidder s preferences or information .this is called the _ pure common - value model _ .the winner in this scenario is the individual who makes the highest estimate of the value , and this tends to be an overestimate of the value .this overestimation is called the _ winner s curse_. if all the bidders have the existence of the winner s curse in mind , the highest bid in first - price auctions tends to be lower than in those second - price auctions , though it still holds that the four standard auctions are revenue - equivalent .a general model encompassing both the private - value model and the pure common - value model as special cases is the _ correlated - value model _ .this assumes that each bidder receives a private information signal , but allows each bidder s value to be a general function of _ all _ the signals .receives signal and would have value if all bidders signals were available to her . in the private - value model a function only of . in the pure common - value model for all and . ]milgrom and weber analyzed auctions in which bidders have _ affiliated information _ , and showed that the most profitable standard auction is then the ascending auction .myerson demonstrated how to derive optimal auctions when the assumption of symmetry fails .maskin and riley considered the case of risk - averse bidders , in which case the first - price sealed - bid auction is the most profitable of the standard auctions . for practical reasons ,it is more important to remove the assumptions that the number of bidders is unaffected by the auction design , and that the bidders necessarily bid independently of each other . according to , sealed - bid designs frequently ( but not always ) both attract a large number of serious bidders and are better at discouraging collusion than english auctions .in contrast with simple single - sided auctions , where the goals of auction mechanism designers reflect the interests of the single seller , double - sided auctions aim to maximize the collective interests of all traders , or in other words , the social welfare , i.e. , the total surplus all traders earn in an auction .numerous publications have reported theoretical assertions or empirical observations of high efficiency in a variety of double - sided auctions , and have discussed what leads to the maximization of social welfare .chatterjee and samuelson made the first attempt to analyze double auctions considering a special case of the involving a single buyer and a single seller . 
in this auction, the transaction price is set at the midpoint of the interval of market - clearing prices when the interval is non - empty .they found linear bidding strategies which miss potential transactions with probability of 1/6 .satterthwaite and williams analyzed a generalized version of this auction so - called _-double auction _ or -which involves sellers and buyers , and sets the transaction price at other points in the interval of market - clearing prices .they showed that in s the differences between buyers bids and true values are and foregone gains from trade are , so _ _ ex post _ _ inefficiency vanishes reasonably fast as the market gets larger .wilson first studied the generalization of games of incomplete information to s , in particular , s in which each agent can trade at most one indivisible unit and , given the bids and asks , the maximum number of feasible trades are made at a price a fraction of the distance between the lowest and highest feasible market clearing prices .he proposed a strategy for buyers and sellers in which a trader waits for a while before making bids or asks .then the trader conducts a dutch auction until an offer from the other side is acceptable .this strategy produces a nearly _ ex post _ efficient final allocation .wurman _ et al ._ carried out an incentive compatibility analysis on a which is assumed to have bids and asks .they showed that the ( )st - price ( or - lowest - price ) clearing policy is incentive compatible for single - unit buyers under the private - value model , as is the - price ( or ( )st - lowest - price ) auction for sellers . the only way to get incentive compatibility for both buyers and sellers is for some party to subsidize the auction .myerson and satterthwaite showed that there does not exist any bargaining mechanism that is individually rational , efficient , and bayesian incentive compatible for both buyers and sellers without any outside subsidies .as friedman pointed out , though theoretically it is natural to model s as a game of incomplete information , the assumption of prior common knowledge in the incomplete information approach may not hold in continuous auctions or may involve incredible computational complexity .this is because at every moment , a trader needs to compute expected utility - maximizing shouts based on the shout and transaction history of the auction and the length of time the auction has to go . on the other hand, laboratory results have shown that outcomes are quite insensitive to the number of traders beyond a minimal two or three active buyers and two or three active sellers .foregone gains by satterthwaite and williams may suggest that the coefficient of in the actual foregone gain function is small .] 
moreover , parameter choices , which according to an incomplete information analysis , should greatly reduce efficiency in s had no such effect in recent laboratory tests .due to the difficulty of applying game - theoretic methods to complex auction mechanisms , researchers from economics and computer science have turned to running laboratory experiments , studying the dynamics of price formation and how the surprisingly high efficiency is obtained in a where information is scattered between the traders .the data on which researchers base their studies may come from three sources : ( 1 ) field data from large - scale on - going markets , ( 2 ) laboratory data from small - scale auctions with human subjects , and ( 3 ) computer simulation experiments .field data has the most relevance to the real - world economy , but does not reveal many important values , e.g. , the private values of traders , and hence puts limits on what can be done .the human subjects in laboratory experiments presumably inherit the same level of intelligence and incentive to make profit as in real markets , and the experiments are run under the rules that researchers aim to study . such experiments , however , are expensive in terms of time and money needed .computer - aided simulation is a less expensive alternative and can be repeated as many times as needed .however traders strategies are not endogenously chosen as in auctions with human traders , but are specified exogenously by the experiment designers , which raises the question of whether the conclusions of this approach are trustworthy and applicable to practical situations .gode and sunder invented a zero - intelligence strategy ( or ) that always randomly picks a profitable price to bid or ask .surprisingly , their experiments with s exhibit high efficiency despite the lack of intelligence of the traders .thereafter , much more work followed this path , and has gained tremendous momentum , especially given that real - world stock exchanges are becoming automated , e - business becomes an everyday activity , and the internet reaches every corner of the globe .smith pioneered the research falling into the so - called _ experimental economics _ field by running a series of experiments with human subjects .the experimental results revealed many of the properties of s , which have been the basis and benchmark for much subsequent work .smith showed that in many different cases even a handful of traders can lead to high allocative efficiency , and transaction prices can quickly converge to the theoretical equilibrium .smith s experiments are set up as follows : * every trader , either a buyer or a seller , is given a private value .the set of private values form the supply and demand curves . *each experiment was run over a sequence of trading days , or periods , the length of which depend on how many traders are involved but are typically several minutes in duration .different experiments may have different numbers of periods .* for simplicity , in most experiments , a trader is allowed to make a transaction for the exchange of only a single commodity in each day . *traders are free at any time to make a bid / ask or to accept a bid / ask .* once a transaction occurs , the transaction price , as well as the two traders private values , are recorded . * for each new day , a trader may make up to one transaction with the same private value as before no matter whether she has made one in the previous day . 
thus the supply and demand curves each correspond to a trading day .the experimental conditions of supply and demand are held constant over several successive trading days in order to give any equilibrating mechanisms an opportunity to establish an equilibrium over time , unless it is the aim to study the effect of changing conditions on market behavior . reports 10 experiments that we discuss below .each experiment was summarized by a diagram showing the series of transactions in the order in which they occurred .figure [ fig : smith - test1 ] gives one of smith s diagrams .are straight line segments because it is assumed there that a large number of traders participate in the auction and thus the step - changes can be treated as infinitesimal . ] . in the right - hand part of the diagram , each tick represents a transaction , rather than a unit of physical time .trading prices in most experiments have a striking tendency to converge on the theoretical prices , marked with a dashed line in figure [ fig : smith - test1 ] .to measure the tendency to converge , smith introduced the coefficient of convergence , , from ( [ equ : alpha ] ) .figure [ fig : smith - test1 ] shows tends to decline from one trading day to the next .the equilibrium price and quantity of experiments 2 and 3 are approximately the same , but the latter , with the steeper inclination of supply and demand curves , converges more slowly .this complies with the walrasian hypothesis that the rate of increase in exchange price is an increasing function of the excess demand at that price .experiment 4 presents an extreme case with a flat supply curve , whose result also confirms the walrasian hypothesis , but it converges to a fairly stable price above the predicted equilibrium . in this experiment ,a decrease in demand is ineffective in shocking the market down to the equilibrium .the result shows that the equilibrium may depend not only on the intersection of the supply and demand schedules , but also upon the shapes of the schedules . a hypothesis aiming to explain the phenomenon is that the actual market equilibrium will be above the equilibrium by an amount which depends upon how large the _ _ buyers rent _ _ , price axis , and the demand curve .] is relative to the _sellers rent_. , price axis , and the supply curve . ]experiment 7 , which was designed with the purpose of supporting or contradicting this hypothesis , shows slow convergence , complying with the walrasian hypothesis , but still exhibits a gradual approach to equilibrium .it is concluded that a still smaller buyers rent may be required to provide any clear downward bias in the static equilibrium .what s more , it seems quite unmistakable " that the bigger the difference between the buyers rent and sellers rent , the slower the convergence .smith speculated that the lack of monetary payoffs to the experimental traders may have an effect on the markets . a strong measure to further test the hypothesis is to mimic real markets as exactly as possible by paying each trader a small return just for making a contract in any period , which according to some experiments induces faster convergence .experiment 5 was designed to study the effect on market behavior of changes in the conditions of demand and supply . at some point in the experiment ,new buyers were introduced resulting in an increase in demand . 
the eagerness to buy causes the trading price to increase substantially once the market resumes and the price surpasses the previous equilibrium .experiment 6 was designed to determine whether market equilibrium was affected by a marked imbalance between the number of intra - marginal sellers and the number of intra - marginal buyers near the predicted equilibrium price .the result confirmed the effect of a divergence between buyer and seller rent on the approach to equilibrium , but the lack of marginal sellers near the theoretical equilibrium did not prevent the equilibrium from being attained .the change of decrease in demand at the end of the fourth trading day showed that the market responded promptly by showing apparent convergence to the new , lower , equilibrium .in contrast to the previous experiments , the market in experiment 8 was designed to simulate an ordinary retail market , in which only sellers are allowed to enunciate offers , and buyers could only either accept or reject the offers of sellers .due to the desire of sellers to sell at higher prices , the trading prices in the first period remained above the predicted equilibrium . but starting at the second period , the trading price decreased significantly and remained below the equilibrium , not only because the early buyers again refrained from accepting any high price offers , but also because the competition among sellers became more intense . later in the experiment ,when the previous market pricing organization was resumed , exchange prices immediately moved toward equilibrium .experiments 9 and 10 are similar to experiment 7 except that each trader is allowed to make up to 2 transactions with the assigned private value within each day .the results showed that the increase in volume helps to speed up the convergence to equilibrium .the same results were obtained even when demand was increased during experiment 9 .smith s focus in was mainly on the convergence of transaction prices in different scenarios rather than directly examining why high efficiency is obtained .however , high efficiency is usually the goal of a market designer . in a computerized world, a question that arises naturally is whether smith s results can be replicated in electronic auctions . 
in smith s experiments , as is traditional in real markets , the traders are human beings , but computer programs are supposed to be automatic and work without human involvement . obviously humans are intelligent creatures , but programs are not , at least for the foreseeable future . is it intelligence that contributes to the high efficiency of double auction markets , or is it something else ? gode and sunder were among the first to address this question , claiming that no intelligence is necessary for the goal of achieving high efficiency ; so the outcome is due to the auction mechanism itself . they reached this position having introduced two trading strategies : _ zero intelligence without constraint _ or zi - u and _ zero intelligence with constraint _ or zi - c . zi - u , the more naive version , shouts an offer at a random price without considering whether it is losing money or not , while zi - c , which lacks the motivation of maximizing profit and picks a price in a similar way to zi - u , simply makes shouts that guarantee no loss . it was shown that performs poorly in terms of making a profit , but generates high efficiency solutions , comparable to the human markets ( see table [ table : zi - eff ] ) and can be considered to place a lower bound on the efficiency of markets . [ table [ table : zi - eff ] : mean efficiency of markets in gode and sunder s experiments , originally as table 2 in ; the numerical entries are not reproduced here . ] gode and sunder s experiments were set up with similar rules as in smith s . they designed five different supply and demand schedules and tested each of them respectively with the three kinds of homogeneous traders , , , and human traders . figure [ fig : zi - test4 ] presents what happened in one of their experiments . prices in the market exhibit little systematic pattern and no tendency to converge toward any specific level , but on the contrary , prices in the human market , after some initial adjustments , settle in the proximity of the equilibrium price ( indicated by a solid horizontal line in all panels in figure [ fig : zi - test4 ] ) . gode and sunder then raised the question : how much of the difference between the market outcomes with traders and those with human traders is attributable to intelligence and profit motivation , and how much is attributable to market discipline ? they argue that , after examining the performance of the markets , it is market discipline that plays a major role in achieving high efficiency . though in the market , the price series shows no signs of improving from day to day , and the volatility of the price series is greater than the volatility of the price series from the human market , the series converges slowly toward equilibrium within each day . gode and sunder s explanation is that it is due to the progressive narrowing of the opportunity sets of traders , e.g. , the set of intra - marginal traders . despite the randomness of , buyers with higher private values tend to generate higher offered prices and they are likely to trade with sellers earlier than those buyers further down the demand curve . a similar statement also holds for sellers . thus as the auction goes on , the upper end of the demand curve shifts down and the lower end of the supply curve moves up , which means the feasible range of transaction prices narrows as more commodities are traded , and transaction prices will converge to the equilibrium price .
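the zi - c rule is simple enough to state in a few lines of code ; the python sketch below is ours ( the function name , the random seed and the price floor and ceiling are arbitrary illustrative choices , not values from gode and sunder ) :

import random

PRICE_FLOOR, PRICE_CEILING = 0.0, 200.0   # assumed allowed price range

def zi_c_shout(role, private_value):
    """zero intelligence with constraint: pick a random price that can
    never produce a loss-making trade for this trader."""
    if role == "buyer":      # a buyer never bids above its private value
        return random.uniform(PRICE_FLOOR, private_value)
    else:                    # a seller never asks below its private value
        return random.uniform(private_value, PRICE_CEILING)

random.seed(0)
print(zi_c_shout("buyer", 120.0), zi_c_shout("seller", 80.0))

a zi - u trader would instead draw from the full allowed range regardless of its private value , which is why it can trade at a loss .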
the fact that traders lack profit motivation and have only the minimal intelligence ( just enough to avoid losing money ) suggests that the market mechanism is the key to obtaining high efficiency .gode and sunder s results were , however , questioned by cliff and bruten .the latter agreed on the point that the market mechanism plays a major role in achieving high efficiency , but disputed whether in markets transaction prices will always converge on equilibrium price .they argued that the mean or expected value of the transaction price distribution was shown quantitatively to get close to the equilibrium price only in situations where the magnitude of the gradient of linear supply and demand curves is roughly equal , and used this to infer that zero - intelligence traders are not sufficient to account for convergence to equilibrium .cliff and bruten further designed an adaptive trading strategy called _ zero intelligence plus _ or zip .like , traders make stochastic bids , but can adjust their prices based on the auction history , i.e. , rasing or lowering their profit margins dynamically according to the actions of other traders in the market . more specifically ,traders raise the profit margin when a less competitive offer from the competition is accepted , and lower the profit margin when a more competitive offer from the competition is rejected , or an accepted offer from the other side of the market would have been rejected by the subject . at every step ,the profit margin is updated according to a learning algorithm called the _ widrow - hoff delta rule _ in which a value being learned is adapted gradually towards a moving target , and the past targets leave discounting momentum to some extent .cliff and bruten concluded that the performance of zip traders in the experimental markets is significantly closer to that of human traders than is the performance of traders , based on the observation that traders rapidly adapt to give profit dispersion ] where and are the actual and theoretical equilibrium profits of trader , . 
]levels that are in some cases approximately a factor of ten less than those of traders .preist and van tol introduced a revised version of , which we call , and reported faster convergence to equilibrium and robustness to changes in parameter configuration .other learning methods have been adopted to design even more complex trading strategies than and its variants .roth and erev proposed a reinforcement - based stimuli - response strategy , which we call re .traders adapt their trading behavior in successive auction rounds by using their profits in the last round as a reward signal .gjerstad and dickhaut suggested a best - response - based strategy , which is commonly referred to as gd .traders keep a sliding window of the history of the shouts and transactions and calculate the probabilities of their offers being accepted at different prices .the traders use a cubic interpolation on the shouts and transaction prices in the sliding window in order to compute the probability of future shouts being accepted .they then use this to calculate the expected profit of those shouts .the expected profit at a price is the product of the probability of the price being accepted and the difference between the price and the private value .traders then always choose to bid or ask at a price that maximizes their expected profit .is the most computation - intensive trading strategy considered so far , and indeed generates the best record both for allocative efficiency and the speed of convergence to equilibrium compared to the other trading strategies in literature. by way of indicating typical efficiencies achieved in a cda , figure [ fig : eff ] shows the trend of the overall efficiencies of homogeneous cdas lasting 10 days with 50 rounds per day in which 10 buyers and 10 sellers all use the same strategy , one of : tt , , for more information .] zip , re , and gd . the results are averaged over 400 iterations and obtained in jasa the extensible java - based auction simulation environment .figure [ fig : ds ] gives the supply and demand schedules in the markets .all the above empirical works have employed either human traders or homogeneous trading agents , demonstrating high efficiency and fast convergence to equilibrium , and some of this work has also produced theoretical results .it is however necessary to see how an auction works populated by heterogeneous trading subjects .there are both theoretical and practical reasons for considering heterogeneous traders . 
as rust et al. argued: "although current theories of markets have provided important insight into the nature of trading strategies and price formation, it is fair to say that none of them has provided a satisfactory resolution of 'hayek's problem'. in particular, current theories assume a substantial degree of implicit coordination by requiring that traders have common knowledge of each other's strategies (in game-theoretic models), or by assuming that all traders use the same strategy (in learning models). little is known theoretically about price formation in markets populated by heterogeneous traders with limited knowledge of their opponents. ... the assumption that players have common knowledge of each other's beliefs and strategies ... presumes an unreasonably high degree of implicit coordination amongst the traders ...
game theory also assumes that there is no _a priori_ bound on traders' ability to compute their strategies. however, even traders with infinite, costless computing capabilities may still decide to deviate from their strategies if they believe that limitations of other traders force them to use a sub-optimal strategy." they went on to argue that the striking performance of zero-intelligence and other simple strategies strongly suggests that the nice properties have more to do with the market mechanism itself than with the rationality of traders. in addition, strategies that are more individually rational than zero intelligence may display less collective rationality, since clever strategies can exploit unsophisticated ones, so that a more intelligent extra-marginal trader has more chances to finagle a transaction with an intra-marginal trader, causing market efficiency to fall. to observe heterogeneous auctions, the santa fe double auction tournament was held in 1990 and prizes were offered to entrants in proportion to the trading profits earned by their programs over the course of the tournament. 30 programs from researchers in various fields and industry participated. the majority of the programs encoded the entrant's "market intuition" using simple rules of thumb. the top-ranked program, named after its entrant, and the runner-up strategy are remarkably similar.
both wait in the background and let the others do the negotiating, but when bid and ask get sufficiently close, they jump in and "steal the deal". the overall efficiency levels in the markets used in the tournaments originally appear to be somewhat lower than those observed in experimental markets with human traders, but experiments without the last-placed players produced an efficiency of around 97%. this is further evidence that the properties of traders also affect the outcome of markets to some extent. besides high efficiency levels and convergence to competitive equilibrium, other "stylized facts" of human markets observed in the tournament include: reductions in transaction-price volatility and efficiency losses in successive trading days that seem to reflect apparent learning effects, coexistence of extra-marginal and intra-marginal efficiency losses, and low rank correlations between the _realized order of transactions_ and the _efficient order_. thorough examination of efficiency losses in the tournaments and later experiments indicates that the success of the winning strategy is due to its patience in waiting to exploit the intelligence or stupidity of other trading strategies. the volume of e-commerce nowadays creates another motivation for evaluating trading strategies in a heterogeneous environment. electronic agents, on behalf of their human owners, can automatically make strategic decisions and respond quickly to changes in various kinds of markets. in the foreseeable future, these agents will have to compete with a variety of agents using a range of trading strategies, and with human traders. as more complex trading strategies appear, it is natural to speculate on how these electronic minds will compete against their human counterparts. das et al. ran a series of cdas allowing persistent orders, populated by a mixed population of automated agents (using modified versions of the strategies described above) and human traders. they found that though the efficiency of the cdas was comparable with prior research, the agents outperformed the humans in all the experiments, obtaining about 20% more profit. das et al. speculated that this was due to human errors or weakness, and human traders were observed to improve their performance as they became familiar with the trading software. they also suggested that the weaknesses of trading agents may be found when human experts take them on, and that improvements can then be made to the algorithms of the trading agents. tesauro and das executed experiments with both homogeneous and heterogeneous trading agents with varying trader population composition, making it possible to gain more insight into the relative competitiveness of trading strategies. in both the so-called "one-in-many" tests and the "balanced-group" tests, the more sophisticated strategies (and their variants) exhibited superior performance over the simpler ones, even when the market mechanisms varied to some extent. furthermore, a more recent variant due to das et al. outperformed all the other strategies. the above approaches nevertheless all employ a fixed competition environment.
in practice, when a strategy dominates others, it tends to flourish and be adopted by more people. the earliest evolutionary experiments that we are aware of allowed the relative numbers of the different trading strategies to change over time, so that more profitable strategies became more numerous than less profitable ones. such an analysis revealed that although one type of agent outperformed the others when traders of different types were approximately evenly distributed, agents of that type later exhibited low overall efficiency as they became the majority, making the evolution process a cycle of ups and downs. walsh et al. gave a more formal analysis combining the game-theoretic solution concept of nash equilibrium and _replicator dynamics_. they treated heuristic strategies, rather than atomic actions like a bid or ask, as primitive, and computed expected payoffs of each individual strategy at certain points of the joint heuristic strategy space. this method reduced the model of the game from a potentially very complex, multi-stage game to a one-shot game in normal form. at points where one strategy gains more than others, replicator dynamics dictates that the whole population moves to a nearby point where the winning strategy takes a larger fraction of the population. this process continues until an equilibrium point is reached, where either the population becomes homogeneous or all strategies are equally competitive in terms of their expected payoffs. there may be multiple equilibrium points, each 'absorbing' an area of a different size, the _basin_ of that equilibrium, and these basins together compose the whole strategy space. in particular, figure [fig:walsh-before-perturbation] shows the replicator dynamics of a market with three strategies. several of the marked points are equilibrium points, but some of them are not stable, since a small deviation from them will lead to one of the other equilibria. the triangle field gives an overview of the interaction of the three strategies and their relative competitiveness. what is more, a technique called _perturbation analysis_ is used to evaluate the potential to improve on a strategy. figure [fig:walsh-after-perturbation] shows the replicator dynamics of the same strategies after small portions of the other two strategies' payoffs were shifted to one of them. such a shift significantly changed the landscape of the space, and the strengthened strategy dominated in most of the possible population mixes. this showed that a 'tiny' improvement in a strategy may greatly affect its competition against the other strategies. phelps et al. took a similar approach in comparing three strategies, showed the potential of one of them, and demonstrated that a modified strategy could be evolved by optimizing its learning component. the main drawback of this approach is an exponential dependence on the number of strategies, which limits its applicability to real-world domains where there are potentially many heuristic strategies.
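the population dynamics just described can be sketched in a few lines of python. in the fragment below the payoff matrix entries are invented purely for illustration; in the heuristic strategy analyses above they would be estimated from many simulated auctions rather than given in closed form.

```python
import numpy as np

def replicator_step(population, payoff_matrix, dt=0.01):
    """One discrete step of replicator dynamics over heuristic strategies.

    population[i] is the fraction of traders using strategy i and
    payoff_matrix[i, j] is the expected payoff of strategy i against
    strategy j.  Strategies earning more than the population-average
    payoff grow in share; the result is renormalised to a distribution.
    """
    fitness = payoff_matrix @ population           # expected payoff of each strategy
    avg_fitness = population @ fitness             # population-average payoff
    population = population + dt * population * (fitness - avg_fitness)
    return population / population.sum()

# illustrative 3-strategy example (payoff values are made up)
payoffs = np.array([[1.0, 1.4, 0.9],
                    [0.8, 1.0, 1.2],
                    [1.1, 0.7, 1.0]])
x = np.array([1/3, 1/3, 1/3])
for _ in range(5000):
    x = replicator_step(x, payoffs)
print(x)    # an approximate rest point of the dynamics
```

iterating such updates from many starting mixes traces out exactly the kind of basins and rest points discussed above.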
later work by walsh et al. proposed information-theoretic approaches to deliberately choose the sample points in the strategy space, through an interleaving of equilibrium calculations and payoff refinement, thus reducing the number of samples required. designing heuristic strategies depends to a great extent on the intelligence and experience of the strategy designer. prior studies have also demonstrated that heuristic strategies' performance hinges on the selection of parameter values. automatic optimization is preferable in this sense, to find the best parameter combinations and further identify better strategies. cliff and phelps et al. are the pioneers of this work, adopting evolutionary computation to address the challenge. a _genetic algorithm_ (or ga) is a search technique used in computing to find true or approximate solutions to optimization and search problems. genetic algorithms are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as _inheritance_, _mutation_, _selection_, and _crossover_. a typical genetic algorithm requires two things to be defined: 1. a _genetic representation_ of the solution domain, also called the _genotype_ or _chromosome_ of the solution species, and 2. a _fitness function_ to evaluate candidate solutions. a standard representation of a solution is an array of bits. arrays of other types and structures can be used in essentially the same way. the main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. variable-length representations have also been used, but crossover implementation is more complex in this case. the fitness function is defined over the genetic representation of a solution and measures the quality of that solution; it is always problem dependent. for instance, in the knapsack problem we want to maximize the total value of objects that we can put in a knapsack of some fixed capacity. a representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. not every such representation is valid, as the total size of the objects may exceed the capacity of the knapsack. the fitness of the solution is the sum of the values of all objects in the knapsack if the representation is valid, or 0 otherwise.
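as a concrete illustration of the bit-string representation and fitness function just described, the following minimal python sketch scores a candidate knapsack solution; the object values, sizes, and capacity are made-up example numbers.

```python
def knapsack_fitness(bits, values, sizes, capacity):
    """Fitness of a bit-string genotype for the knapsack problem:
    the total value of the selected objects, or 0 if they do not fit."""
    total_size = sum(s for b, s in zip(bits, sizes) if b)
    if total_size > capacity:
        return 0                      # invalid genotype: objects exceed capacity
    return sum(v for b, v in zip(bits, values) if b)

# three candidate objects; bit i says whether object i is packed
print(knapsack_fitness([1, 0, 1], values=[60, 100, 120], sizes=[5, 8, 4], capacity=10))  # 180
print(knapsack_fitness([1, 1, 1], values=[60, 100, 120], sizes=[5, 8, 4], capacity=10))  # 0
```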
in some problems, it is hard or even impossible to define the fitness expression; in these cases, interactive genetic algorithms are used. once we have the genetic representation and the fitness function defined, the ga proceeds to initialize a population of solutions randomly, and then to improve it through repetitive application of the mutation, crossover, and selection operators. cliff addressed the labor-intensive manual parameter optimization for the zip strategy, automatically optimizing parameter selection using a ga. he identified eight parameters in zip: the lower and upper bounds of the learning rate (how fast to move towards the target), of the momentum (how much past momentum to carry over), and of the initial profit margin, together with the upper bounds of the ranges defining the distributions of absolute and relative perturbations on learned prices. these real-valued parameters make an eight-dimensional space, and any combination of parameter values corresponds to a point in that space. the vector of the eight parameters defines an ideal genotype. phelps et al. took a step further along this track. they combined the replicator-dynamics-based heuristic strategy analysis method described above with a ga, identified a strategy as the basis for optimization, and successfully evolved it into an optimized strategy that can beat gd, commonly considered the most competitive strategy. it is not realistic to seek "best", or even "good", strategies that can beat all potential opponents, because an absolutely dominating strategy does not appear to exist in the trading scenario: the performance of a strategy depends greatly on the types of its opponents. they therefore proposed using a small finite population of randomly sampled strategies to approximate the game with an infinite strategy population consisting of a mixture of all possible strategies. in particular, a small set of the strategies discussed above was chosen as sample strategies. following the heuristic strategy analysis and perturbation method, a reinforcement-learning-based strategy was found to have the potential to dominate the others. this strategy uses reinforcement learning to choose from possible profit margins over the agent's private value, based on a reward signal computed as a function of the profits earned in the previous round of bidding. potentially, the learning component may be replaced by a number of learning algorithms, including stateless q-learning, a modified version of the original algorithm, and a control algorithm which selects a uniformly random action regardless of the reward signal. phelps et al. then encoded the genotype to select any of these algorithms together with their parameters. the evolutionary search procedure they used is similar to cliff's, except that the individuals in a generation are evaluated with the heuristic strategy analysis approach and the basin size is used as the measure of fitness. the experiment finally found a learning algorithm with a particular parameter combination which, together with one of the original sample strategies, composes a nash equilibrium that captures 97% of the strategy space populated by the learned strategy and the original samples. the trading agent competition (tac) was organized to promote and encourage high quality research into trading agents.
under the tac umbrella, a series of competitions have been held, including two types of game, tac classic and the supply chain management game, tac scm. tac classic sets up a "travel agent" scenario based on complex procurement in multiple simultaneous auctions. each travel agent (an entrant to the competition) has the goal of assembling travel packages (travelling from a home town to tampa during a notional multi-day period). each agent acts on behalf of a certain number of clients, who express their preferences for various aspects of the trip. the objective of the travel agent is to maximize the total satisfaction of its clients (the sum of the client utilities). tac scm was designed to capture many of the challenges involved in supporting dynamic supply chain practices in the pc manufacturing industry. supply chain management is concerned with planning and coordinating the activities of organizations across the supply chain, from raw material procurement to the delivery of finished goods. in today's global economy, effective supply chain management is vital to the competitiveness of manufacturing enterprises, as it directly impacts their ability to meet changing market demands in a timely and cost effective manner. in tac scm, agents are simulations of small manufacturers, who must compete with each other for both supplies and customers, and manage inventories and production facilities. mechanism design applied to auctions explores how to design the rules that govern auctions so as to obtain specific goals. the story of trading strategies in the preceding section is only one facet of the research on auctions. gode and sunder's results suggest that auction mechanisms play an important role in determining the outcome of an auction, and this is further borne out by the work of walsh et al., which also points out that results hinge on both auction design and the mix of trading strategies used. according to classical auction theory, if an auction is _strategy-proof_ or _incentive compatible_, traders need not bother to conceal their private values, and in such auctions complex trading agents are not required. however, typical double auctions are not strategy-proof. mcafee has derived a form of double auction that is strategy-proof, though this strategy-proofness comes at the cost of lower efficiency. despite the success of analytic approaches to the relatively simple auctions presented in section [sec:auction-theory], the high complexity of the dynamics of some other auction types, especially double auctions, makes it difficult to go further using analytical methods. as a result, researchers have turned to empirical approaches using machine learning techniques, sometimes combined with methods from traditional game theory. instead of trying to design optimal auction mechanisms, the computational approach looks for relatively good auctions and aims to make them better, in a noisy economic environment with traders that are not perfectly rational. one can think of different forms of auctions as employing variations of a common set of auction rules, forming a parameterized auction space. a number of researchers have parameterized auction rules using the following classification: * bidding rules: determine the semantic content of messages, the authority to place certain types of bids, and admissibility criteria for submission and withdrawal of bids. * * how many sellers and buyers are there? * * are both groups allowed to make shouts? * * how is a shout expressed? * * does a shout have to beat the corresponding market quote if one exists?
* information revelation: * * when and what market quotes are generated and announced? * * are shouts visible to all traders? * clearing policy: * * when does clearing of the market take place? * * when does a market close? * * how are shouts matched? * * how is a transaction price determined? the idea of parameterizing the auction space not only eases heuristic auction mechanism design, but also makes it possible to 'search' for better mechanisms in an automated manner. it is not yet clear how auction design, and thus the choice of parameter values, contributes to the observed performance of auctions. thus it is not clear how to create an auction with a particular specification. it _is_ possible to design simple mechanisms in a provably correct manner from a specification, as shown by conitzer and sandholm. however, it is not clear that this kind of approach can be extended to mechanisms as complex as double auctions. as a result, it seems that we will have to design double auction mechanisms experimentally, at least for the foreseeable future. of course, doing things experimentally does not solve the general problem. a typical experimental approach is to fix all but one parameter, creating a one-dimensional space, and then to measure performance across a number of discrete sample points in the space, obtaining a fitness landscape that is expected to show how the factor in question correlates with a certain type of performance and how the auction can be optimized by tweaking the value of that factor. in other words, the experimental approach examines one small part of a mechanism and tries to optimize that part. the situation is complicated when more than one factor needs to be taken into consideration: the search space then becomes complex and multi-dimensional, and the computation required to map and search it quickly becomes prohibitive. instead of manual search, some researchers have used evolutionary computation to automate mechanism design in a way that is similar to the evolutionary approach to optimizing trading strategies. cliff explored a continuous space of auction mechanisms by varying the probability of the next shout (at any point in time) being made by a seller. the continuum includes the cda and also two purely single-sided mechanisms that are similar to the english auction and the dutch auction. cliff's experiments used genetic algorithms and found that a value of this probability corresponding to a completely new kind of auction led to better results than those obtained for the standard markets, using zip traders. the same authors continued with this work, showing that the approach is also effective in markets using zi-c traders, and that the new "irregular" mechanisms can lead to high efficiency with a range of different supply and demand schedules as well. the visualization of fitness landscapes, using plots including 3d histograms and contours, is also noteworthy. byde took a similar approach in studying the space of auction mechanisms between the first- and second-price sealed-bid auctions. the winner's payment is determined as a weighted average of the two highest bids, with the weighting determined by the auction parameter. for a given population of bidders, the revenue-maximizing parameter is approximated by considering a number of parameter choices over the allowed range, using a ga to learn the parameters of the bidders' strategies for each choice, and observing the resulting average revenues.
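the parameterized payment rule just described is easy to state in code. in the python sketch below the function and parameter names are ours rather than byde's, and the bid values are illustrative.

```python
def winner_payment(bids, w):
    """Winner's payment as a weighted average of the two highest bids.
    w = 1.0 reproduces the first-price rule, w = 0.0 the second-price rule;
    intermediate values give the hybrid mechanisms explored in the search."""
    highest, second = sorted(bids, reverse=True)[:2]
    return w * highest + (1.0 - w) * second

print(winner_payment([10.0, 7.0, 9.0], w=0.0))   # 9.0  (second-price)
print(winner_payment([10.0, 7.0, 9.0], w=1.0))   # 10.0 (first-price)
print(winner_payment([10.0, 7.0, 9.0], w=0.3))   # 9.3
```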
for different bidder populations (varying bidder counts, risk sensitivity, and correlation of signals), different auction parameter values are found to maximize revenue. taking another tack, phelps et al. explored the use of genetic programming to determine auction mechanism rules automatically. _genetic programming_ (or gp), another form of evolutionary computation that is similar to gas, evolves programs (or expressions) rather than the binary strings evolved in gas. this makes automatic programming possible and, in theory, allows even more flexibility and effectiveness in finding optimal solutions in the domain of concern. in gp, programs are traditionally encoded as _tree structures_: every internal node has an operator function and every terminal node has an operand, making mathematical expressions easy to evolve and evaluate. with tree structures, crossover is applied to an individual by simply switching one of its nodes with a node from another individual in the population. mutation can replace a whole node in the selected individual, or it can replace just the information of that node; replacing a node means replacing the whole branch. this adds greater effectiveness to the crossover and mutation operators. phelps et al. demonstrated how gp can be used to find an optimal point in a space of pricing policies, where the notion of optimality is based on allocative efficiency and trader market power. in double auction markets, there are two popular pricing policies: the k-pricing rule and the uniform pricing policy. the former is clearly a discriminatory policy and may be represented as $p = k p_a + (1-k) p_b$, where $k \in [0,1]$, and $p_a$ and $p_b$ are the ask and bid prices. the latter executes all transactions at the same price, typically the middle point of the interval between the market ask and bid quotes. searching in the space of arithmetic combinations of shout prices and market quotes, which includes the above two rules as special cases, led to a complex expression that is virtually indistinguishable from the middle-point version of the k-pricing rule. this shows that the middle-point transaction pricing rule not only reflects traditional practice but can also be technically justified. noting that the performance of an auction mechanism always depends on the mix of traders participating in the mechanism, and that both the auction mechanism and the trading strategies may adapt themselves simultaneously, phelps et al. further investigated the use of co-evolution in optimizing auction mechanisms. they first co-evolved buyer and seller strategies, and then co-evolved these together with auction mechanisms. the approach was able to produce outcomes with reasonable efficiency in both cases. phelps et al. also proposed a novel way to evaluate and compare the performance of market mechanisms using heuristic strategy analysis. despite the fact that the performance of an auction mechanism may vary significantly when the mechanism engages different sets of trading agents, previous research on auctions analyzed the properties of markets using an arbitrary selection of homogeneous trading strategies. a sounder approach is to find the equilibria of the game between the participating trading strategies and measure the auction mechanism at those equilibrium points.
as sections [sec:trading:interaction-heterogeneous] and [sec:trading:automating:phelps] have discussed, the heuristic strategy analysis calculates equilibria among a representative collection of strategies. this makes the method ideal for measuring market mechanisms at those relatively stable equilibria. phelps et al. selected a representative set of the strategies discussed above in order to compare two double-auction market designs. the replicator dynamics analysis revealed that: (1) neither mechanism is strategy-proof, since truthful bidding is not dominant in either market; (2) increasing the number of agents led to the appearance of a basin for a new equilibrium, which agreed with the conclusion drawn through the approximate analysis discussed in section [sec:auction-theory:da]; and (3) one mechanism has higher efficiency than the other, in the sense that the three equilibrium points in its dynamics field all generate 100% efficiency while the only equilibrium of the other produces 98% efficiency. one can interpret the small efficiency difference as justifying the real-world preference for the slightly less efficient mechanism, which offers faster transactions and higher volumes. one avenue of future research is to combine this evaluation method with evolutionary computation to optimize mechanisms. considering that information about the population of traders is usually unknown to the auction mechanism, and that many analytic methods depend on specific assumptions about traders, pardoe and stone advocated a self-adapting auction mechanism that adjusts auction parameters in response to past auction results. their framework includes an _evaluator_ module, which can create an auction mechanism for online use, can monitor the performance of the mechanism, and can use the economic properties of the mechanism as feedback to guide the discovery of better parameter combinations. this process then creates better auction mechanisms that continue to interact with traders which are themselves possibly evolving at the same time. a classic algorithm for multi-armed bandit problems, epsilon-greedy, is used in the evaluator module to make decisions on parameter value selection. this work differs from previous work in the sense that here auction mechanisms are optimized during their operation, while the mechanisms in the approaches discussed before remain static and are assumed to perform well even when they face a set of traders that is different from those used in searching for the mechanisms. following the classic and supply chain management competitions introduced in section [sec:trading:tac], a new competition was run in the summer of 2007 in order to foster research on auction mechanism design.
in this new competition, the software trading agents are created by the organizers, and entrants compete by defining rules for matching buyers and sellers and by setting commission fees for providing this service. entrants compete against each other in attracting buyers and sellers and in making profits. this is achieved by having effective matching rules and setting appropriate fees that strike a good trade-off between making profit and attracting traders. we developed a new platform, based on phelps's jasa, to run as the game server. it provides various trading strategies, market selection strategies, and market mechanism frameworks so that entrants do not have to work from scratch. it is also an ideal experimental platform for researchers to evaluate auction mechanisms in a competition setting. this report aims to provide an overview of the field of auction mechanism design and to build the foundation for further research. auctions are markets with strict regulations in which traders negotiate and make deals. an auction may be single-sided or double-sided, depending upon whether only sellers or only buyers can make offers, or whether both can. the four standard single-sided auctions, the english auction, the dutch auction, and the first- and second-price sealed-bid auctions, have been the subject of traditional auction theory. vickrey's pioneering work in this area led to the revenue equivalence theorem, which shows that a seller can expect equal profits on average from all the standard types of auctions, under a few assumptions about the bidders. other researchers followed this approach and managed to extend the applicability of the theorem when the assumptions are relaxed. double-sided auctions, which are important in the business world, posed a bigger challenge due to the higher complexity of their structure and the interaction between traders. while classical mathematical approaches have continued to be successful in analyzing some simple types of double auctions, they have proved difficult to apply to more practical scenarios. smith and others initiated experimental approaches and showed that double auctions, even with a handful of traders, may lead to high allocative efficiency, with transaction prices quickly converging to the expected equilibrium price. subsequent experiments with human and/or artificial traders tried to explain what led to these desirable properties, and tended to show that auction mechanisms played a major role, though the intelligence of traders had an effect as well. further work, on the one hand, introduced more and more complex trading strategies, not only making higher individual profits but also improving the collective properties of auctions. on the other hand, different methods have been explored to design novel auction mechanisms. one approach is to evolve parameterized auction mechanisms based on evolutionary computation. cliff and colleagues have found new variants of continuous double auctions, through evolving mechanisms, that converge more quickly to equilibrium and also exhibit higher efficiency than those previously known. phelps et al. have explored the use of genetic programming and justified the traditional mid-point transaction pricing rule as optimizing efficiency while balancing trader market power. in addition to these off-line techniques for optimization through evolutionary computing, online approaches have been proposed to produce adaptive auction mechanisms which, with dynamic trader populations, can continuously monitor and improve their performance.
with an understanding of this prior research, further work at the interface of computer science and economics includes: obtaining more insight into double-sided auction mechanisms, inventing novel auction rules, searching for optimal combinations of the various kinds of policies, and automatically producing desirable auction mechanisms.

dave cliff. evolutionary optimization of parameter sets for adaptive software-agent traders in continuous double auction markets. technical report, hewlett-packard research laboratories, bristol, england, 2001.
vincent conitzer and tuomas sandholm. automated mechanism design: complexity results stemming from the single-agent setting. in _the 5th international conference on electronic commerce (icec03)_, pages 17-24, pittsburgh, pa, usa, september 2003.
vincent conitzer and tuomas sandholm. an algorithm for automatically designing deterministic mechanisms without payments. in _third international joint conference on autonomous agents and multiagent systems - volume 1 (aamas04)_, pages 128-135, new york city, ny, usa, july 2004.
rajarshi das, james e. hanson, jeffrey o. kephart, and gerald tesauro. agent-human interactions in the continuous double auction. in _proceedings of the 17th international joint conference on artificial intelligence_, seattle, u.s.a., august 2001.
simon parsons, mark klein, and juan antonio rodriguez-aguilar. a bluffer's guide to auctions. technical report, center for coordination science, sloan school of management, massachusetts institute of technology, 2004. research note.
steve phelps, marek marcinkiewicz, simon parsons, and peter mcburney. using population-based search and evolutionary game theory to acquire better-response strategies for the double-auction market. in _proceedings of ijcai-05 workshop on trading agent design and analysis (tada-05)_, 2005.
steve phelps, marek marcinkiewicz, simon parsons, and peter mcburney. a novel method for automatic strategy acquisition in n-player non-zero-sum games. in _proceedings of the fifth international joint conference on autonomous agents and multi-agent systems (aamas06)_, pages 705-712, new york, ny, usa, 2006. acm press.
steve phelps, peter mcburney, simon parsons, and elizabeth sklar. co-evolutionary auction mechanism design: a preliminary report. in _proceedings of workshop on agent mediated electronic commerce iv (amec iv)_, 2002.
steve phelps, simon parsons, and peter mcburney. an evolutionary game-theoretic comparison of two double-auction market designs. in _proceedings of workshop on agent mediated electronic commerce vi (amec vi)_, 2004.
steve phelps, simon parsons, elizabeth sklar, and peter mcburney. using genetic programming to optimise pricing rules for a double auction market. in _proceedings of the workshop on agents for electronic commerce_, pittsburgh, pa, 2003.
chris preist and maarten van tol. adaptive agents in a persistent shout double auction. in _proceedings of the 1st international conference on information and computation economies_, pages 11-18. acm press, 1998.
william walsh, rajarshi das, gerald tesauro, and jeffrey o. kephart. analyzing complex strategic interactions in multi-agent systems. in piotr gmytrasiewicz and simon parsons, editors, _proceedings of 2002 workshop on game-theoretic and decision-theoretic agents (gtdt-02)_, edmonton, alberta, canada, july 2002.
william e. walsh, david c. parkes, and rajarshi das. choosing samples to compute heuristic-strategy nash equilibrium. in _aamas 2003 workshop on agent mediated electronic commerce_, melbourne, australia, 2003. | auctions are markets with strict regulations governing the information available to traders in the market and the possible actions they can take. since well designed auctions achieve desirable economic outcomes, they have been widely used in solving real-world optimization problems, and in structuring stock or futures exchanges. auctions also provide a very valuable testing-ground for economic theory, and they play an important role in computer-based control systems. auction mechanism design aims to manipulate the rules of an auction in order to achieve specific goals. economists traditionally use mathematical methods, mainly game theory, to analyze auctions and design new auction forms. however, due to the high complexity of auctions, the mathematical models are typically simplified to obtain results, and this makes it difficult to apply results derived from such models to market environments in the real world. as a result, researchers are turning to empirical approaches. this report aims to survey the theoretical and empirical approaches to designing auction mechanisms and trading strategies, with more weight on empirical ones, and to build the foundation for further research in the field. |
nanotechnology has the potential to revolutionize health care . a current example is enhanced imaging with nanoscale particles .future possibilities include programmable machines comparable in size to cells .such microscopic robots ( `` nanorobots '' ) could provide significant medical benefits . realizing these benefits requires fabricating the robots cheaply and in large numbers .such fabrication is beyond current technology , but could result from ongoing progress in developing nanoscale devices .one approach is engineering biological systems , e.g. , rna - based logic inside cells and bacteria attached to nanoparticles .however , biological organisms have limited material properties and computational speed . instead , we consider machines based on plausible extensions of currently demonstrated nanoscale electronics , sensors and motors and relying on directed assembly .these components enable nonbiological robots that are stronger , faster and more flexibly programmed than is possible with biological organisms .a major challenge for nanorobots arises from the physics of their microenvironments , which differ in several significant respects from today s larger robots .first , the robots will often operate in fluids containing many moving objects , such as cells , dominated by viscous forces .second , thermal noise is a significant source of sensor error and brownian motion limits the ability to follow precisely specified paths .finally , power significantly constrains the robots , especially for long - term applications where robots may passively monitor for specific rare conditions ( e.g. , injury or infection ) and must respond rapidly when those conditions occur .individual robots moving passively with the circulation can approach within a few cell diameters of most tissue cells of the body . to enable passing through even the smallest vessels ,the robots must be at most a few microns in diameter .this small size limits the capabilities of individual robots . for tasks requiring greater capabilities, robots could form aggregates by using self - assembly protocols . for robots reaching tissues through the circulation ,the simplest aggregates are formed on the inner wall of the vessel .robots could also aggregate in tissue spaces outside small blood vessels by exiting capillaries via diapedesis , a process similar to that used by immune cells .aggregates of robots in one location for an extended period of time could be useful in a variety of tasks .for example , they could improve diagnosis by combining multiple measurements of chemicals . using these measurements, the aggregate could give precise temporal and spatial control of drug release as an extension of an _ in vitro _ demonstration using dna computers .using chemical signals , the robots could affect behavior of nearby tissue cells .for such communication , molecules on the robot s surface could mimic existing signalling molecules to bind to receptors on the cell surface .examples include activating nerve cells and initiating immune response , which could in turn amplify the actions of robots by recruiting cells to aid in the treatment .such actions would be a small - scale analog of robots affecting self - organized behavior of groups of organisms .aggregates could also monitor processes that take place over long periods of time , such as electrical activity ( e.g. , from nearby nerve cells ) , thereby extending capabilities of devices tethered to nanowires introduced through the circulatory system . 
in these cases, the robots will likely need to remain on station for tens of minutes to a few hours or even longer. the aggregate itself could be part of the treatment by providing structural support, e.g., in rapid response to injured blood vessels. aggregates could perform precise microsurgery at the scale of individual cells, extending the surgical capabilities of simpler nanoscale devices. since biological processes often involve activities at molecular, cell, tissue and organ levels, such microsurgery could complement conventional surgery at larger scales. for instance, a few millimeter-scale manipulators, built from micromachine (mems) technology, and a population of microscopic devices could act simultaneously at tissue and cellular size scales, e.g., for nerve repair. for medical tasks of limited duration, onboard fuel created during robot manufacture could suffice. otherwise, the robots need energy drawn from their environment, such as by converting externally generated vibrations to electricity or by using chemical generators. power and a coarse level of control can be combined by using an external source, e.g., light, to activate chemicals in the fluid to power the machines in specific locations, similar to nanoparticle activation during photodynamic therapy, or by using localized thermal, acoustic or chemical demarcation. this paper examines generating power for long-term robot activity from reacting glucose and oxygen, which are both available in the blood. such a power source is analogous to bacteria-based fuel cells whose enzymes enable full oxidation of glucose. we describe a computationally feasible model incorporating aspects of microenvironments with significant effect on robot performance but not previously considered in robot designs, e.g., kinetic time constants determining how rapidly chemical concentrations adjust to robot operations. as a specific scenario, we focus on modest numbers of robots aggregated in capillaries. a second question we consider is how the robots affect surrounding tissue. locally, the robots compete for oxygen with the tissue and also physically block diffusion out of the capillary. robot power generation results in waste heat, which could locally heat the tissue. the robot oxygen consumption could also have longer range effects by depleting oxygen carried in passing red blood cells. in the remainder of this paper, we present a model of the key physical properties relevant to power generation for robots using oxygen and glucose in the blood plasma. using this model, we then evaluate the steady-state power generation capabilities of aggregated robots and how they influence surrounding tissue. [sect.model] we consider microscopic robots using oxygen and glucose available in blood plasma as the robots' power source. this scenario involves fluid flow, chemical diffusion, power generation from reacting chemicals and waste heat production. except for the simplest geometries, behaviors must be computed numerically, e.g., via the finite element method. computational feasibility requires a choice between the level of detail of modeling individual devices and the scale of the simulation, both in number of devices and physical size of the environment considered.
for microscopic biological environments relevant for nanorobots, detailed physical properties may not be known or measurable with current technology, thereby limiting the level of detail possible to specify. this section describes our model. the simplifying approximations are similar to those used in biophysical models of microscopic environments, such as oxygen transport in small blood vessels with diffusion into surrounding tissue. we focus on steady-state behavior indicating long-term robot performance when averaged over short-term changes in the local environment such as individual blood cells (exclusively erythrocytes, not white cells or platelets unless noted otherwise) passing the robots. [sect.geometry] fig. [fig.oxygen concentration] shows the distribution of oxygen in the tissue and plasma in the vessel near the robots. the robots reduce the local oxygen concentration far more than the surrounding tissue, as seen by comparing with the vessel without robots. most of the extra oxygen used by the robots comes from the passing blood cells, which have about 100 times the oxygen concentration of the plasma. within the vessel with the robots, the concentration in the plasma is lowest in the fluid next to the robots. downstream of the robots is a recovery region where the concentration increases a bit as cells respond to the abruptly lowered concentration near the robots. in the low demand scenario, the concentration in the vessel just downstream of the robots is somewhat lower than in the surrounding tissue. thus in this region, the net movement of oxygen is from the tissue into the vessel, where the fluid motion transports the oxygen somewhat downstream before it diffuses back into the tissue. in effect, part of the oxygen entering the vessel travels through the tissue around the robots to the downstream section of the vessel, in contrast to the pattern without robots where oxygen is always moving from the vessel into the surrounding tissue. the streamlines in fig. [fig.oxygen concentration] show that the laminar flow speeds up as the fluid passes through the narrower vessel section where the robots are stationed. the accompanying figure plots oxygen concentration along a radial cross section from the center of the vessel to the outer edge of the tissue region. the cross section is in the middle of the modeled section of vessel and tissue, corresponding to a vertical line in the center of each plot of fig. [fig.oxygen concentration]. the gray area indicates the interior of the vessel. the left and right plots correspond to the low and high demand scenarios of table [table.scenarios]. in each plot, the upper curve is for the vessel without robots and the lower curve is for the vessel containing the ringset with pumps. for comparison, the dashed curves are solutions to the krogh model corresponding to the vessel without robots.
fig. [fig.co2 section] gives another view of how the robots affect the oxygen concentration in the surrounding tissue. the concentration is zero at the robot surface facing into the vessel. the robots decrease the oxygen concentration somewhat but do not affect tissue power generation much, since the concentration remains well above the threshold where power generation drops significantly, given in table [table.parameters]. however, at large distances from the vessel in the high demand scenario, the oxygen concentration is low enough to significantly decrease tissue power production. this low level of oxygen also occurs when there are no robots. fig. [fig.co2 section] includes a comparison with the simpler krogh model of oxygen transport to tissue from vessels without robots. the krogh model assumes constant power density in the tissue and no diffusion along the vessel direction in the tissue. for the low demand scenario, the krogh model results are close to those from our model. however, in the high demand case, the krogh model has the oxygen concentration drop to zero some distance from the vessel, due to the unrealistic assumption of constant power use rather than the decrease in power use at low concentrations given by eq. ([eq.tissue reaction rate]). the oxygen flux to the robots with the zero-concentration boundary condition is far below estimates of pump capabilities, which are more than 100 times the actual flux to the robots. such pumps could thereby maintain the zero-concentration boundary condition. the energy the pumps need to handle the incoming flux is small per robot, slightly reducing the power benefit of the pumps. however, much of this pumping energy may be recoverable by adding a generator using the subsequent expansion of the reaction products to their lower partial pressure outside the robot. this section describes the steady-state power available to the robots according to our model in various scenarios. we first discuss the average per-robot power in the aggregate, for both high and low capacity cases, which also indicates the total power available to the aggregate as a whole. we then show how the power is distributed among the robots, based on their location in the ringset. finally, we illustrate the qualitative features of these results in a simpler, analytically solvable model to identify key scaling relationships between robot design choices and power availability.
robot power generation capacity
inlet concentration
pressure gradient
tissue power demand | 4 | 60 | 4 | 60 | 4 | 60 | 4 | 60 | 4 | 60 | 4 | 60
10-micron ringset (with pumps) | 12 | 8 | 14 | 12 | *17* | 11 | 24 | *18* | *17* | 11 | 24 | *18*
10-micron ringset (free diffusion) | 11 | 7 | 12 | 10 | *15* | 10 | 22 | *16* | *6* | 3 | 8 | *6*
1-micron ring (with pumps) | 44 | 27 | 49 | 36 | *69* | 36 | 99 | *58* | *69* | 36 | 99 | *58*
1-micron ring (free diffusion) | 31 | 19 | 34 | 25 | *49* | 25 | 71 | *38* | *9* | 4 | 12 | *7*
table [table.behavior:power] gives the average power generated using the available oxygen, per robot within the aggregate.
as expected, robots receive more oxygen and hence can generate more power when the inlet concentration is high, the fluid speed is high, or the tissue power demand is low. in the first two cases, the flow brings oxygen through the vessel more quickly; in the last case, the surrounding tissue removes less oxygen. the less than 2-fold decrease in robot power generation in the face of a larger 2.5-fold decrease in inlet concentration from the arterial to the venous end of the capillary shows that robots extract more oxygen from red cells than these cells would normally release while passing the length of the vessel. thus robots get some of their oxygen as "new oxygen" rather than just taking it from what the tissues would normally get. this is possible because in this case robots create steeper concentration gradients than the tissue does. the 10-micron ringset with pumps produces about the same power in the low and high demand scenarios. comparing the different aggregate sizes shows lower power generation, per robot, in the large aggregate compared to the small one. this arises from the competition among nearby robots for the oxygen. nevertheless the larger aggregate, with ten times as many robots, generates several times as much power in aggregate as the smaller one. this difference identifies a design choice for aggregation: larger aggregates have more total power available but less on a per robot basis. robots using pumps generate only modestly more power than robots relying on diffusion alone in our high capacity design example (section [sect.robot power]). in this case, for robots without pumps, the power generation site density is sufficiently large that oxygen molecules diffusing into the robot are mostly consumed by the power generators near the surface of the robot before they have a chance to diffuse back out of the robot. for such robots, power generators far from the plasma-facing surface receive very little oxygen and hence do not add significantly to the robot power production. pumps give higher benefit for isolated rings of robots than for tightly clustered aggregates. although not evaluated in the axially symmetric model used here, pumps may be even more significant for a single isolated robot on the vessel wall. such a robot would not be competing with any other robots for the available oxygen, though it would still compete with nearby tissue. the low capacity robots have only a small fraction of the maximum power generating capability of the high capacity robots discussed above. nevertheless, each robot's maximum power is several times larger than the limit due to available oxygen. thus pumps allow the robots to produce the same power as given in table [table.behavior:power] for the high capacity robots. the pumps ensure the absorbed oxygen is completely used by the smaller number of reaction sites by increasing the concentration of oxygen within the robots, so eq. ([eq.reaction rate]) gives the same power generation in spite of the smaller number of sites. on the other hand, the smaller number of reaction sites is a significant limitation for robots without pumps. comparing with table [table.behavior:power] shows pumps improve the average power by factors of about 3 and 8 for the 10- and 1-micron ringsets, respectively.
comparing with high capacity robots without pumps shows the factor of 50 reduction in reaction sites only reduces average power by factors of about 3 and 5 for the 10- and 1-micron ringsets, respectively. thus the reaction sites in the low capacity scenario are used more effectively than in the high capacity robots: with a smaller number of sites, each site does not compete as much with nearby sites for the available oxygen. much of the power in robots without pumps is generated near the plasma-facing surface, where the oxygen concentration is largest. in our case, the power for the high capacity robots is generated primarily within a short distance of the plasma-facing surface. this observation suggests that a design with power generating sites placed near this surface, instead of uniformly throughout the robot volume as we have assumed, could significantly improve power generation for robots without pumps. for example, placing all the reaction sites uniformly within the 2% of the robot volume nearest the surface would increase the local reaction site density in that volume by a factor of 50. for low capacity robots, this placement would increase the site density to the same value as in the high capacity case, but only in a narrow volume near the surface with the robot geometry we use; elsewhere in the robot there would be no reaction sites with this design. while we might expect this concentration of sites to increase power significantly, in fact we find only a small increase (e.g., 12% for the 10-micron ringset in the low demand scenario). thus concentrating the reaction sites near the plasma-facing surface does not offer much of a performance advantage. [sect.power:distribution] figure [fig.power generation] plots the steady-state power generation, in picowatts, for robots as a function of their position along the vessel wall, starting from those at the upstream end of the ringset (position 1) and continuing to those at the downstream end (position 10). the charts, corresponding to the low and high demand scenarios of table [table.scenarios], compare robots with pumps, absorbing all oxygen reaching them, with robots relying on free diffusion (i.e., without pumps), and high and low capacity robots. for robots with pumps, the power for the high and low capacity cases is the same.
while all robots in a single ring have the same power due to the assumption of axial symmetry , fig . [ fig.power generation ] shows that power varies with ring position in the 10-micron ringset . the robots at the upstream edge of the aggregate receive more oxygen than the other robots and hence produce more power . power generation does not decrease monotonically along the vessel : robots at the downstream edge have somewhat more available oxygen than those in the middle of the aggregate since robots at the edge of the aggregate have less competition for oxygen . fig . [ fig.power generation ] also shows significantly larger benefits of pumps at the edges of multi - ring aggregates than in their middle sections , especially in the low demand scenario . in the scenarios described above , robots produce power from all the available oxygen . this is appropriate for applications requiring as much power as possible for the aggregate as a whole . at the other extreme , an application requiring the same behavior from all robots in the aggregate would be limited by the robots with the least available power . this would be the case for identical robots , all of which perform the same task and hence use the same power . in this case , the robots could increase performance by transferring power from those at the edges of the aggregate to those in the middle . such transfer could take place after generation , e.g. , via shared electric current , or prior to generation by transfer of oxygen among neighboring robots . however , such internal transfer would require additional hardware capabilities . for robots with pumps , an alternative transfer method is for robots near the edge of the aggregate to run their pumps at lower capacity and thus avoid collecting all the oxygen arriving at their surfaces . this uncollected oxygen would then be available for other robots , though some of this oxygen would be transported by the fluid past the robots or be captured by the tissue rather than other robots . this approach increases power to robots in the middle of the aggregate without requiring additional hardware for internal transfers between robots , but at the cost of somewhat lower total power for the aggregate . increasing as much as possible the power to the robots with the least power leads to a uniform distribution of power among the robots . to quantify the trade - off between total power and its uniformity among the robots , we consider all robots setting each of the pumps on their surfaces to operate at the same rate and the pumps uniformly distributed over the surface . this gives a uniform flux of oxygen over the entire surface of all the robots . the largest possible value for this uniform flux , and hence the largest power for the aggregate , occurs when the minimum oxygen concentration on the robot surfaces is zero ; at that point , the robot whose surface includes the location of zero concentration can not further increase its uniform flux .
for example , in the low demand scenario the maximum value for this uniform flux is approximately .compared to the situation in fig .[ fig.power generation ] , this uniform flux gives significantly lower power ( 39% leading ring , 56% trailing ring ) for the robots at the edges of the aggregate , somewhat lower power for robots in positions 2 and 3 ( 87% and 98% , respectively ) and somewhat more power ( ranging from 104% to 116% ) for the other robots .the combination of these changes gives a total of of the power for the aggregate when every robot collects all the oxygen reaching its surface .the minimum power per robot increases from to .robots could slightly increase power by accepting some nonuniformity of flux over each surface while maintaining the same total flux to each robot . this would occur when the minimum oxygen concentration on the entire length of the robot surfaces in a particular robot ring is zero . using this approach to uniform power in practice would require the robots to determine the maximum rate they can operate their pumps while achieving uniform power distribution .this rate would vary with tissue demand , and also over time as cells pass the robots .a simple control protocol is for each robot to adjust its pump rate up or down according to whether its power generation is below or above that of its neighbors , respectively .when concentration reaches zero on one robot , increasing pump rate at that location would not increase power generation .communicating information for this protocol is likely simpler than the hardware required to internally transfer power or oxygen among robots , but also requires each robot is able to measure its power generation rate .such measurements and communication would give an effective control provided they operate rapidly compared to the time over which oxygen flux changes , e.g. , as cells pass the robots on millisecond time scales .longer reaction times could lead to oscillations or chaotic behavior .a second approach to achieving a more uniform distribution of power is to space the robots at some distance from each other on the vessel wall .this approach would be suitable if the aggregated robots do not need physical contact to achieve their task .for example , somewhat separating the robots would allow a relatively small number to span a distance along the vessel wall larger than the size of a single cell passing through the vessel .this aggregate would always have at least some robots between successive cells .communicating sensor readings among the robots would then ensure the response , e.g. , releasing chemicals , is not affected by misleading sensor values due to the passage of a single cell , giving greater stability and reliability without the need for delaying response due to averaging over sensor readings as an alternative approach to accounting for passing cells .another example for spaced robots is for directional acoustic communication , at distances of about .achieving directional control requires acoustic sources extending over distances comparable to or larger than the sound wavelength .plausible acoustic communication between nanorobots involves wavelengths of tens of microns . as a quantitative example of the benefit of spacing robots in the context of our axially symmetric model, we consider a set of rings of robots spaced apart along the vessel wall . 
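returning to the neighbor - comparison pump rule described above , the update logic itself is only a few lines . the sketch below is purely schematic : the oxygen - transport model is a made - up stand - in ( each ring collects from what upstream rings leave behind plus a small fresh supply ) , and the gain , inflow and ring count are arbitrary illustrative numbers .

```python
import numpy as np

def ring_powers(pump_rates, inflow=10.0, carry=0.6):
    """toy transport model: whatever a ring's pump does not collect is partly
    carried downstream to the next ring, plus a small fresh supply per ring."""
    powers = np.zeros_like(pump_rates)
    available = inflow
    for i, rate in enumerate(pump_rates):
        collected = min(rate, available)
        powers[i] = collected          # power taken as proportional to collected oxygen
        available = carry * (available - collected) + 1.0
    return powers

pumps = np.full(10, 5.0)               # start with every pump wide open
gain = 0.1
for _ in range(300):
    power = ring_powers(pumps)
    neighbour = 0.5 * (np.roll(power, 1) + np.roll(power, -1))
    neighbour[0], neighbour[-1] = power[1], power[-2]   # one-sided at the two ends
    # each ring lowers its pump rate if it outperforms its neighbours, raises it otherwise
    pumps = np.clip(pumps - gain * (power - neighbour), 0.0, 5.0)

print(ring_powers(pumps).round(2))
```

whether such a rule settles down or oscillates depends on the gain and on how quickly the oxygen field responds , which is the stability concern raised above .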
when the distance between successive rings is sufficiently large , the power for each ring would be close to that of the isolated 1-micron ring given in table [ table.behavior : power ] .for example , the power for the low demand scenario in the 1-micron ring of high - capacity robots decreases from at high inlet concentration to at low inlet concentration , which spans the range of power for a modest number of widely spaced 1-micron rings within a single vessel .as described in section [ sect.tissue power and heating ] , oxygen absorption by robots can affect concentration over a few tens of microns upstream of those robots .thus separating robot rings by , say , will give power close to that of the isolated rings , with a gradual decrease in power for successive rings due to the decreasing cell saturation along the vessel . a third approach to reducing variation in robot power , on average ,is through changing pump rates in time .for example , adjacent nanorobot rings could operate with counterphased 50% duty cycles , with one ring and its second nearest neighbor ring using pumps while the intervening nearest neighbor has its pumps off and does not absorb oxygen .the alternating rings of robots would switch pumps on and off . in this case, robots would have larger power than seen in fig . [ fig.power generation ] for the half of the time they are active , and zero power for the other half .this temporal approach would not be suitable for tasks requiring all robots to have the same power _ simultaneously _ , but would be useful for tasks requiring higher burst power from robots throughout the aggregate where the robots are unable to store oxygen or power for later use .provided the duty cycle is sufficiently long , our steady - state model can quantify the resulting power distribution .for example , in the low demand scenario , total flux for the aggregate is 79% of that when every robot collects all the oxygen reaching its surface , and the minimum power per robot _ drops _ from to .thus , when averaged over the duty cycle , this temporal technique reduces total power without benefiting the robots receiving the minimum power . in this case, the temporal approach does not improve minimum robot power ( on average ) since the power gain to a robot while its neighbors are off is less than a factor of 2 , which does not compensate for the loss due to each robot being off for half the time . applying the steady - state model to this temporal variation in robot activity requires the duty cycle be long enough for the system to reach steady - state behavior after each switch between the active subset of robots , and that the switching time is short compared to the duty cycle so most of the robots power arises during the steady - state portions of the cycle between switching .diffusion provides one lower bound on this time : when neighboring robots switch pumps from on to off or vice versa , the characteristic diffusion time for oxygen over the distance between next nearest neighbors ( one micron ) is about .adjustments in cell saturation for the 1-micron shift in the location of the active robots between each half of the duty cycle is a further limitation on the duty cycle time for the validity of the steady - state model , though this is likely to be minimal since the cells are separated from the robots by the plasma gap in the fluid . 
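the validity conditions for applying the steady - state model to this temporal scheme come down to comparing a few time scales , which is easy to sketch . the diffusion coefficient below is a typical value for oxygen in water , the flow speeds are illustrative rather than the values of table [ table.parameters ] , and the simple x^2 / d estimate is used for the diffusion time .

```python
D = 2e-9           # m^2/s, approximate oxygen diffusion coefficient in water/plasma
spacing = 1e-6     # m, distance between next-nearest-neighbour rings
print(f"diffusion time over {spacing * 1e6:.0f} micron: {spacing**2 / D * 1e3:.1f} ms")

robot_length = 10e-6                 # m, axial extent of the 10-micron ringset
for v in (1e-4, 1e-3):               # m/s, illustrative slow and fast capillary flow
    print(f"cell transit past the ringset at {v * 1e3:.1f} mm/s: "
          f"{robot_length / v * 1e3:.0f} ms")
```

with these illustrative numbers the cell - transit time , not the diffusion time , is the binding lower bound on the duty cycle .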
since the steady - state model averages over the position of passing cells , another lower bound on the duty cycle arises from the time for a cell to pass the robots . from the speeds in table [ table.parameters ] , this time is at least . the dependence of robot power on design parameters described above may appear contrary to simple intuitions . first , one might expect that the 10-micron ringset , with ten times the surface area in contact with the plasma , would absorb about ten times as much oxygen as the 1-micron ring . instead we find only about a factor of 2 to 4 increase . second , the benefit of pumps , less than a factor of 2 for the high capacity robots , may seem surprisingly small . third , the low capacity robots , with a factor of 50 fewer reaction sites than the high capacity robots , nevertheless generate within a factor of a few as much power as high capacity robots in the case with no pumps . and finally , in spite of the higher concentration near the robot surface than deep inside the robot when there are no pumps , increasing the reaction site density by placing all the reaction sites near the robot surface gives little benefit . while the specific values of these designs depend on the geometry and environment used in our model , these general features of small robots obtaining power through diffusion apply in other situations as well . in this section we illustrate how these consequences of design choices arise in the context of a scenario for which the diffusion equation has a simple analytic solution , thereby identifying key physical effects leading to these behaviors . specifically , we consider an isolated spherical robot of radius in a stationary fluid with oxygen concentration far from the sphere . such a sphere with a fully absorbing surface collects oxygen at a rate . this expression illustrates a key property of diffusive capture : the rate depends not on the object's surface area but on its size . this behavior , which also applies to other shapes , arises because while larger objects have greater surface areas they also encounter smaller concentration gradients . as a quantitative example , taking the sphere to have the same volume as the robot , i.e. , given in table [ table.robot parameters ] , the oxygen absorbed by the sphere generates and for equal to the low and high inlet oxygen concentrations in the plasma from table [ table.parameters ] , respectively . these power values are larger than for robots on the vessel wall described above . unlike the sphere in a stationary fluid , the aggregated robots compete with each other for the oxygen , the fluid moves some of the oxygen past the robots before they have a chance to absorb it , and the surrounding tissue also consumes some of the oxygen . the replenishment of oxygen from the passing blood cells is not sufficient to counterbalance these effects . the spherical robot also indicates the benefit of pumps . the fully absorbing sphere , with a zero concentration boundary condition at the surface , corresponds to using pumps . for robots without pumps , an approximation to eq . ( [ eq.reaction rate ] ) allows a simple solution . specifically , since the michaelis - menten constant for the robot power generators is much larger than the oxygen concentrations ( e.g. , as seen in fig .
[ fig.oxygen concentration ] ) , robot power generation from eq .( [ eq.reaction rate ] ) is approximately .dividing by gives the oxygen consumption rate density as where .solving the diffusion equation , eq .( [ eq.diffusion ] ) , for a sphere in a stationary fluid with concentration far from the sphere , with free diffusion through the sphere s surface and reaction rate density inside the sphere , gives the rate oxygen is absorbed by the sphere ( and hence reacted to produce power ) as where .thus free diffusion produces a fraction of the power produced by the fully absorbing sphere .the distance is roughly the average distance an oxygen molecule diffuses in the time a power generation site consumes an oxygen molecule .when a freely diffusing molecule inside the robot has a high chance to diffuse out of the robot before it reacts ( large compared to ) , is small so pumps provide a significant increase in power .conversely , when is small compared to , pumps provide little benefit : the large number of reaction sites ensure the robot consumes almost all the diffusing oxygen reaching its surface .this argument illustrates a tradeoff between using pumps to keep oxygen within the robot and the number of power generators .in particular , if internal reaction sites are easy to implement , then robots with many reaction sites and no pumps would be a reasonable design choice .conversely , if reaction sites are difficult to implement while pumps are easy , then robots with pumps and few reaction sites would be a better choice .a caveat for robots with few power generating sites is that eq .( [ eq.f mu ] ) applies when oxygen consumption is linear in the concentration , as given by .this expression allows arbitrarily increasing the reaction rate by increasing the concentration , no matter how small the number of reaction sites .this linearity is a good approximation of eq .( [ eq.reaction rate ] ) only when . at larger concentrations the power density saturates at .when is sufficiently small , this limit is below the power that could be produced from all the oxygen that a fully absorbing sphere collects .thus , in practice , the benefit of using pumps estimated from the linear reaction rate , , is limited by this bound when is small . as an example , for a spherical robot with the high capacity reaction site density of table [ table.robot parameters ] , and , with the fairly modest benefit of pumps .the low capacity robots have and with . in this case , the limit due to the maximum reaction rate of eq . 
( [ eq.reaction rate ] ) applies , somewhat limiting the benefit of pumps to a factor of , but pumps still offer considerable benefit . these values for the benefits of pumps are somewhat larger than seen with our model for robots on the vessel wall . nevertheless , the spherical example identifies the key physical properties influencing power generation with and without pumps , and how they vary with robot design choices . ( [ eq.f mu ] ) also illustrates why power in the low capacity robots is not as small as one might expect based on the reduction in reaction sites by a factor of 50 . while the value of is proportional to , the typical diffusion distance varies as , so a decrease in reaction sites by a factor of 50 only increases by about a factor of 7 . the square root dependence arises from the fundamental property of diffusion : the typical distance a diffusing particle travels grows only with the square root of the time . the modest change in diffusion distance , combined with eq . ( [ eq.f mu ] ) , gives a smaller decrease in power than the factor of 50 decrease in capacity . the low capacity robot has higher concentrations throughout the sphere , so each reaction site operates more rapidly than in the high capacity case . this increase partially offsets the decrease in the number of reaction sites . without pumps , the higher oxygen concentration near the sphere's surface than near its center means much of the power generation takes place close to the surface . thus we can expect an increase in power by placing reaction sites close to the surface rather than uniformly distributed throughout the sphere . consistent with the results from the model described in section [ sect.model ] , evaluating eq . ( [ eq.diffusion ] ) with the reaction confined to a spherical shell shows only a modest benefit compared to a uniform distribution . the benefit is larger for a thinner shell and is determined by the same ratio appearing in eq . ( [ eq.f mu ] ) . in particular , the largest benefit of using a thin shell , only , occurs for . the parameters for the high and low capacity robots are somewhat below this optimal value , giving only and less than benefit from a thin shell , respectively , for the sphere . these modest improvements correspond to the small benefits of using a thin shell seen in the solution to our model for both high and low capacity robots . hence the solution of eq . ( [ eq.diffusion ] ) for the sphere illustrates how , with a fixed number of reaction sites , concentrating them near the robot surface provides only limited benefit . the benefit of the higher reaction site density in the shell is almost entirely offset by the shorter distance molecules need to diffuse to escape from the thin reactive region . that is , the benefit of placing all the reaction sites in a thin shell arises from two competing effects . when is large ( low capacity ) the concentration is only slightly higher near the surface than well inside the sphere . so there is little benefit from placing the reaction sites closer to the surface .
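before turning to the opposite limit , the two spherical - robot quantities used in this argument can be evaluated directly . the sketch below uses the standard diffusion - limited capture rate for a fully absorbing sphere , 4*pi*d*r*c_inf , and , for free diffusion with first - order consumption inside the sphere , the absorbed fraction 1 - tanh(r/l)/(r/l) with l = sqrt(d/alpha) ; this closed form is a standard reaction - diffusion result consistent with the limiting behaviour described here , and all numerical parameters are placeholders rather than the values of table [ table.robot parameters ] .

```python
import numpy as np

def absorbing_sphere_rate(D, R, c_inf):
    """diffusion-limited capture rate (molecules/s) for a fully absorbing sphere."""
    return 4.0 * np.pi * D * R * c_inf

def free_diffusion_fraction(R, L):
    """fraction of the fully absorbing rate achieved with free diffusion through
    the surface and first-order consumption inside; L = sqrt(D / alpha)."""
    x = R / L
    return 1.0 - np.tanh(x) / x

D = 2e-9          # m^2/s, approximate oxygen diffusion coefficient
R = 0.6e-6        # m, illustrative sphere radius (same order as the robots)
c_inf = 3e22      # molecules/m^3, illustrative dissolved oxygen concentration

print(f"fully absorbing sphere: {absorbing_sphere_rate(D, R, c_inf):.2e} molecules/s")
for alpha in (1e2, 1e4):                 # 1/s, illustrative consumption rate constants
    L = np.sqrt(D / alpha)
    print(f"alpha = {alpha:.0e}/s, L = {L * 1e6:.2f} um, "
          f"fraction of absorbing rate = {free_diffusion_fraction(R, L):.3f}")
```

when l is large compared to r ( few reaction sites ) the fraction is small and pumps matter ; when l is small compared to r ( many sites ) the fraction approaches one and pumps add little , matching the two cases discussed in the surrounding text .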
on the other hand ,when is small ( high capacity ) , even uniformly distributed reaction sites manage to consume most of the arriving oxygen , giving near - zero concentration at the surface of the sphere and little scope for further improvement by concentrating the reaction sites .thus the largest , though still modest , benefit for a shell design is for intermediate values of .deviation from equilibrium oxygen saturation in cells , , along the boundary between the cell and cell - free portions of the fluid illustrated in fig .[ fig.geometry ] , as a function of distance along that boundary .a deviation of zero indicates the oxygen held in the cells is in perfect equilibrium with the surrounding plasma .saturation ranges between 0 and 1 .the curves correspond to the low and high demand scenarios of table [ table.scenarios ] , when robots are present ( upper curves ) or absent ( lower curves ) . for the vessel without robots and low demand , is indistinguishable from zero on the scale of the plot .the gray band indicates the 10-micron length of the vessel wall in which the robots are stationed , within the 60-micron length of the capillary illustrated . ]the high power density of the robots creates a steep gradient of oxygen concentration in the plasma .thus , unlike the minor role for nonequilibrium oxygen release in tissue , the small size of the robots makes passing red cells vary significantly from equilibrium with the concentration in the plasma .[ fig.s - sequib ] illustrates this behavior , using one measure of the amount of disequilibration : the difference between saturation and the equilibrium value corresponding to the local concentration of oxygen in the plasma , as given by eq .( [ eq.equilibrium saturation ] ) .we compare with a vessel without robots , in which the blood cells remain close to equilibrium .[ fig.s - sequib ] shows that the kinetics of oxygen release from red cells plays an important role in limiting the oxygen available to the robots .however , the region of significant disequilibration is fairly small , extending only a few microns from the robots .[ sect.tissue power and heating ] power density in tissue next to the vessel wall relative to maximum demand , i.e. , the ratio from eq .( [ eq.tissue reaction rate ] ) , as a function of position along the vessel .the curves are for the low and high demand scenarios of table [ table.scenarios ] .the two lines at the top are for the vessel without robots and the lower curves are for the 10-micron ringsets . in each case, the curve with higher values corresponds to the low power demand scenario .the gray band indicates the 10-micron length of the vessel wall in which the robots are stationed , within the 60-micron length of the capillary illustrated . ]cell saturation as a function of distance along the vessel .saturation ranges between 0 and 1 .the curves correspond to the low and high demand scenarios of table [ table.scenarios ] , with the upper curves of each pair corresponding to a vessel without robots .the gray band indicates the 10-micron length of the vessel wall in which the robots are stationed , within the 60-micron length of the capillary illustrated . ]the robots affect tissue power in two ways .first , the robots compete for oxygen with nearby tissue .second , the robots consume oxygen from passing blood cells , thereby leaving less for tissue downstream of the robots . for the effect on nearby tissue , fig .[ fig.tissue power ] shows how tissue power density varies next to the vessel wall . 
in the vessel without robots ,power density declines slightly with distance along the vessel as the tissue consumes oxygen from the blood .the total reduction in tissue power density is fairly modest , less than 10% even for high power demand in the tissues . the relative reduction is less for tissue at larger distances from the vessel , though such tissue has lower power generation due to less oxygen reaching tissue far from the vessel .this reduction arises both from direct competition by the robots for available oxygen and the physical blockage of the capillary wall , forcing surrounding tissue to rely on oxygen diffusing a longer distance from unblocked sections of the wall . in the low demand case , direct competition is the major factor , as seen by the dips in the power density at each end of the aggregate , where the absorbing flux is highest . in the high demand case ,the tissue s consumption reduces the amount of oxygen diffusing through the tissue on either side of the aggregate , giving the larger drop in tissue power density in the middle of the aggregate . for longer range consequences ,[ fig.saturation ] shows how the oxygen saturation in the blood cells changes as they pass the robots . slowly moving cells ( in the low demand scenario )are substantially depleted while passing the robots , even though tissue power demand in this scenario is low .this depletion arises from the cells remaining near the robots a relatively long time as cells move slowly with the fluid .the resulting saturation shown in the figure , around , is below the equilibrium saturation ( ) for typical concentrations at the venous end of capillaries , given in table [ table.parameters ] .thus in the low demand scenario , the robots remove more oxygen from passing cells than occurs during their full transit of a vessel containing no robots . in this scenario ,the tissue has low power demand , so the depletion of oxygen from the cells may have limited effect on tissue along the vessel downstream of the robots .however , this reduction could significantly limit the number of robots that can be simultaneously present inside a given capillary .another observation from fig .[ fig.saturation ] is a significant decrease in cell saturation a short distance _ upstream _ of the robots in the low demand scenario. we can understand this behavior in terms of the peclet number , which characterizes the relative importance of convection and diffusion over various distances .in particular , is the distance at which diffusion and convection have about the same effect on mass transport in a moving fluid . at significantly longer distances , convection is the dominant effect and absorption of oxygen at a given location in the vessel has little effect on upstream concentrations over such distances . 
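the crossover distance referred to here is d / v , the length at which the peclet number equals one . a quick evaluation , using the same approximate diffusion coefficient as before and illustrative flow speeds rather than the values of table [ table.parameters ] :

```python
D = 2e-9                          # m^2/s, approximate oxygen diffusion coefficient
for label, v in (("fast flow", 1e-3), ("slow flow", 1e-4)):   # m/s, illustrative speeds
    crossover = D / v             # distance where the Peclet number v*x/D equals 1
    print(f"{label}: v = {v * 1e3:.1f} mm/s, D/v = {crossover * 1e6:.0f} microns")
```

for slower flow the crossover distance is longer , so the upstream region influenced by the robots extends further , consistent with the behaviour described for the low demand scenario .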
in our scenarios , this distance ranges from ( high demand ) to ( low demand ) . thus the oxygen concentration in the plasma is significantly affected by the robots over a few tens of microns upstream of their location . cell saturation remains close to equilibrium in this upstream region ( fig . [ fig.s - sequib ] ) , hence the reduced oxygen concentration in the plasma lowers cell saturation in this region upstream of the robots ( fig . [ fig.saturation ] ) . this distance is also relevant for spacing rings of robots far enough apart to achieve nearly uniform power , as described in section [ sect.power : distribution ] . the devices in this example have a volume of so the robot power generation corresponds to power densities around , several orders of magnitude larger than power densities in tissue , raising concerns of possible significant tissue heating by the robots . however , for the isolated aggregate used in this scenario , waste heat due to the robots' power generation is rapidly removed , resulting in negligible maximum temperature elevation of about . the robots change the fluid flow by constricting the vessel . with the same pressure difference as a vessel without robots , as used in our model , this constriction results in somewhat lower flow speed through the vessel . specifically , the one and ten - micron long aggregates reduce flow speed by 6% and 20% , respectively . the fluid moving past the robots exerts a force on them . to remain on the wall , the robots must resist this force through their attachment to the vessel wall . this force is a combination of pressure difference , between the upstream and downstream ends of the aggregate , and viscous drag . for the laminar flow the force is linear in the pressure gradient imposed on the vessel , with a proportionality coefficient for each ringset . for example , the flow imposes a force of on the ringset when the pressure gradient is . the 10-micron ringset experiences about three times the force of the 1-micron ring , but covers ten times the surface area . thus the larger aggregate requires about one - third the attachment force per robot . applied forces can affect cells . in particular , endothelial cells use forces as a trigger for new vessel growth , which is important for modeling changes in the vessels over longer time scales than we consider in this paper . [ sect.discussion ] the scenarios of this paper illustrate how various physical properties affect robot power generation .
robots about one micron in size positioned in rings on capillary walls could generate a few tens of picowatts in steady state from oxygen and glucose scavenged locally from the bloodstream . aggregates can combine their oxygen intake for tasks requiring higher sustained power generation . the resulting high power densities do not significantly heat the surrounding tissue , but do introduce steep gradients in oxygen concentration due to the relatively slow reaction kinetics of oxygen release from red cells . the robots reduce oxygen concentration in nearby tissues , but generally not enough to significantly affect tissue power generation . the fraction of the generated power available for useful activity within the robot depends on the efficiency of the glucose engine design , with a reasonable estimate for fuel cells . the robots will have of usable steady - state power while on the vessel wall . as one indication of the usefulness of this power for computation , current nanoscale electronics and sensors have an energy cost per logic operation or sensor switching event of a few hundred . while future technology should enable lower energy use , even with these costs the available power from circulating oxygen could support several million computational operations per second . at the size of these robots , significant movements of blood cells and chemical transport occur on millisecond time scales . thus the power could support thousands of computational operations , e.g. , for chemical pattern recognition , in this time frame . the aggregated robots could share sensor information and cpu cycles , thereby increasing this capability by a factor of tens to hundreds . the robots need not generate power as fast as they receive oxygen , but could instead store oxygen received over time to enable bursts of activity as they detect events of interest . robots with pumps have a significant advantage in burst - power applications because pumps enable long - term high - concentration onboard gas accumulation to support brief periods of near maximal power generation . as an example , in our scenarios individual robots with pumps receive about . if instead of using this oxygen for immediate power generation , the robot stored the oxygen received over one second , it would have enough to run the power generators at near maximal rate ( giving about ) for several milliseconds . by contrast , robots without pumps would only have a modest benefit from oxygen diffusing into the robot , achieving a concentration equal to the ambient concentration in the surrounding plasma as given in table [ table.parameters ] . this concentration could only support generating several hundred picowatts for about a tenth of a millisecond . thus while pumps may give only modest improvement for steady - state power generation , they can significantly increase power available in short bursts . our model could be extended to estimate the amount of onboard storage that would be required to avoid pathological conditions related to competition between tissues and nanorobots . in particular , larger aggregates would deplete oxygen over longer distances for which diffusion through the tissue from upstream of the robots would be insufficient .
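the burst - mode arithmetic described a few sentences back is simple enough to sketch . every number below is a placeholder chosen only to show the structure of the calculation ; the paper's own collection rates , maximal generator rates and per - operation energies are in its tables and text , not here .

```python
# burst duration: store oxygen for t_store at the steady collection rate,
# then discharge it at the generators' maximum rate (same efficiency assumed).
P_steady = 1e-11      # W, illustrative steady-state generation while collecting
P_max = 2e-9          # W, illustrative near-maximal generator output
t_store = 1.0         # s, accumulation time before the burst
print(f"burst at P_max lasts about {t_store * P_steady / P_max * 1e3:.0f} ms")

# computation budget: operations per second from usable power and energy per operation.
P_usable = 1e-11      # W, illustrative usable fraction of generated power
E_op = 1e-18          # J, illustrative energy cost per logic operation
print(f"roughly {P_usable / E_op:.0e} logic operations per second")
```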
furthermore, larger aggregates of tightly - spaced robots would block transport from the capillary into the surrounding tissue even if the robots did not use much oxygen .onboard oxygen storage would allow higher transient power densities for the robots , though this could lead to heating issues for larger aggregates .to estimate the potential for onboard storage , a ringset containing 200 robots with volume of which 10% is devoted to compressed storage at 1000 atmospheres at body temperature can store about molecules of in the aggregate .the incoming flow in the capillary provides about molecules per second , depending on the flow speed , of which about to is available to the tissue and robots .this means the oxygen stored in the aggregate is equivalent to only several seconds of oxygen delivery through the vessel .thus oxygen storage in the robots themselves can not significantly increase mission duration , though such storage might be useful for short - term ( i.e. , a few seconds ) load leveling functions ( e.g. , maintaining function during temporary capillary blockage due to white cell passage ) .alternatively , the aggregated robots could have oxygen supplemented with a modest circulating population of respirocytes , i.e. , spherical robots able to carry oxygen to tissues far more effectively than red blood cells . such robots would continuously and entirely eliminate any oxygen depletion regions in the tissue due to robot power generation , and allow higher robot power generation since oxygen would no longer be such a limiting factor .such machines could not only carry significantly more oxygen than red blood cells , but would also respond more quickly to abrupt decreases in partial pressure due to consumption by aggregated robots on vessel walls .for example , sensors should be able to detect the drop in concentration of the size we see near the robots e.g. , ( or about ) within a millisecond .this time is short enough that the machines will have moved only about a micron and so will still be near the robots .once they detect the pressure drop , the machines could release oxygen rapidly , up to , while passing near the aggregated robots .however , in practice the release rate is constrained by the effervescence limit in plasma to about .an interesting question for future work is evaluating how much of this released oxygen reaches the robots on the vessel wall , which will depend on how close to the vessel wall the fluid places the respirocytes .in this situation , aggregated robots could also communicate to passing respirocytes to activate or suppress their oxygen delivery , depending on the task at hand .thus both the oxygen handling capabilities of respirocytes , giving faster kinetics than red cells and larger storage capacity , and the possibility of communication provide examples of the flexibility of small devices with programmable control .moreover , this scenario illustrates the benefits of mixing robots with differing hardware capabilities .our model considers static aggregates on vessel walls , but could be extended to study power availability for aggregates that move along the walls .another significant scenario is robots moving passively with the fluid , where they could draw oxygen from the surrounding plasma .the oxygen unloading model used here could evaluate how rapidly nearby cells would replenish oxygen in the plasma as the cells and robots move through the capillary .we treat environmental parameters ( e.g. 
, fluid flow speed and tissue oxygen demand ) as fixed by the surroundings . beyond the local changes in the robots environment described by the model , sustained use of these robots could induce larger scale responses .for example , the increased use of oxygen by the robots could lead to increased blood flow , as occurs with , say , exercising muscles , by increased pressure to drive the fluid at higher speed or dilation of the vessels .the local oxygen deficits due to high robot power use are smaller in scale than higher tissue demand ( e.g. , from increased activity in a muscle ) .thus an important open question is whether localized robot oxygen consumption over a long period of time can initiate a less localized response to increase flow in the vessels .the possibility of large - scale responses to robot activity raises a broader issue for nanomedicine treatment design when technology allows altering the normal correlation among physical quantities used for signalling in the body .an example at larger scales is the response to low oxygen mediated by excess carbon dioxide in the blood , which can lead to edema and other difficulties for people at high altitudes . in terms of downstream consequences of the robots oxygen use, low saturation of cells leaving an isolated capillary should not be a problem because the bulk of oxygen exchange occurs in the capillary bed , not in the larger collecting vessels .however , cells reaching low saturation before exiting the capillary would produce localized anoxia in the tissue near the end of the capillary .this could be relieved in part by oxygen diffusion from neighboring tissue cells if the anoxic region is not too large or too severe .specific effects of such localized anoxia remain to be fully identified .whole capillaries subjected to ischemic conditions over a period of days remodel themselves , e.g. , by adding new vascular branches and by increasing the tortuosity of existing vessels .this observed behavior is likely to be a localized ( i.e. , cell - level ) response , hence we might expect such a response if a portion of a capillary downstream of the robots was driven into ischemic conditions .there could also be a localized inflammatory response to a large enough number of capillary - wall endothelial cells under stress , especially for cells stressed to the point of apoptosis , but moderate ischemia alone seems unlikely to generate this response .various chemicals ( e.g. , adrenalin ) make the heart pump faster and thus drive the blood at higher speed .other chemicals ( such as no , pgd2 and ltd4 ) dilate the vessels .these chemicals can produce significant activity in the endothelial cells that line ( and thus form the tube geometry of ) the capillary vessel , so their influence can be fairly direct and quick .similarly , a large robot population constantly drawing excess oxygen supply could induce elevated erythropoietin secretion ( if unregulated by the robots ) , increasing red cell production in the erythroid marrow .direct heating is not a problem with aggregates of the size considered here , in spite of their high power density compared to tissue . 
for the large aggregate we examined ( tightly covering along the vessel wall ) , oxygen diffusion through the tissue from regions upstream and downstream of the robots provided oxygen to the tissue outside the section of the vessel blocked by the robots .larger aggregates , especially if tightly packed , would significantly reduce oxygen in the tissues even if the robots used little power themselves , simply due to their covering the vessel wall over a long enough distance that diffusion through the tissue from unblocked regions is no longer effective .the inducement of nonlinear tissue thermal responses ( e.g. , inflammation or fever ) due to the heat generated by larger aggregates or multiple aggregates in nearby capillaries is an important question for future work .nanorobots parked or crawling along the luminal surface of the vessel may activate mechanosensory responses from the endothelial cells across whose surfaces the nanorobots touch .if the aggregates cover a long section of the vessel wall , they could produce local edemas since narrowing of the vessels by the presence of the nanorobots increases local pressure gradients and fluid velocities .while we focus on a single aggregate in one microscopic vessel , additional issues arise if a large population of circulating robots form many aggregates .in that case , the multiple aggregates will increase hydrodynamic resistance throughout the fluidic circuit .thus the robots could make the heart work slightly harder to pump fluid against the slightly higher load .moreover , if robot aggregates detach from the wall without complete disaggregation , these smaller aggregates moving in the blood may be large enough to block a small vessel .the scenarios examined in this paper can suggest suitable controls to distribute power when robots aggregate . moreover , power control decisions interact with the choices made for the aggregation process .for example , if the task requires a certain amount of total power for the aggregate ( e.g. , as a computation hub ) then the aggregation self - assembly protocol would depend on how much oxygen is available , e.g. , to make a larger aggregate in vessels with less available oxygen , or recruit more passing robots when the task needs more power .an example of this latter case could be if aggregates are used as computation hubs to validate responses to rare events : when local sensor readings indicate the possibility of such an event , the aggregate could temporarily recruit additional robots to increase power and computational capability for evaluating whether those readings warrant initiating treatment .another approach to designing controls for teams of robots is the formalism of partially observable markov processes .this formalism allows for arbitrarily complex computations among the robots to update their beliefs about their environment and other robots .unfortunately this generality leads to intractable computations for determining optimal control processes .for the situations we studied , the power constraints on capillary wall - resident microscopic robots operating with oxygen available _ in vivo _ means the local rules must be simple . 
including this constraint in the formalism could allow it to identify feasible control choices for large aggregates of microscopic robots in these situations . the power constraints from our model could provide useful parameters for less detailed models of the behavior of large numbers of robots in the circulation in the context of the scenarios examined in this paper . in particular , power limits the computation , communication and locomotion capabilities of the robots . these constraints could be incorporated in simplified models , such as cellular automata approaches to robot behavior . these automata are a set of simple machines , typically arranged on a regular lattice . each machine is capable of communicating with its neighbors on the lattice and updates its internal state based on a simple rule . for example , a two - dimensional scenario shows how robots could assemble structures using local rules . such models can help understand structures formed at various scales through simple local rules and some random motions . a related analysis technique considers swarms , i.e. , groups of many simple machines or biological organisms such as ants . in these systems , individuals use simple rules to determine their behavior from information about a limited portion of their environment and neighboring individuals . typically , individuals in swarms are not constrained to have a fixed set of neighbors but instead continually change their neighbors as they move . swarm models are well - suited to microscopic robots with their limited physical and computational capabilities and large numbers . most swarm studies focus on macroscopic robots or behaviors in abstract spaces . in spite of the simplified physics , these studies show how local interactions among robots lead to various collective behaviors and provide broad design guidelines . a step toward more realistic , though still tractable , models of large aggregates could incorporate the power constraints from the model presented in this paper . in addition to evaluating performance of hypothetical medical nanorobots , theoretical studies identifying tradeoffs between control complexity and hardware capabilities can aid future fabrication . one example is the design complexity of the robot's fuel acquisition and utilization systems . for steady - state operation on vessel walls , we found limited benefit of pumps over free diffusion when numerous onboard power generators can be employed . in such cases , our results indicate that a design without pumps does not sacrifice much performance . more generally , control can compensate for limited hardware ( e.g. , sensor errors or power limitations ) , providing design freedom to simplify the hardware through additional control programs . thus the studies could help determine minimum hardware performance capabilities needed to provide robust systems - level behavior . a key challenge for robot design studies based on approximate models is validating the results . in our case , the most significant approximations are the treatment of cells as an averaged component in the fluid and the lumped - model kinetics for oxygen unloading . with increased computational power , numerical solution of more accurate models could test the validity of these approximations .
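as a deliberately minimal illustration of the cellular - automaton style of simplified model mentioned above , the fragment below places simple machines on a one - dimensional lattice , lets each one see only its two neighbours , and imposes a per - site activity cap of the kind a power model would supply ; the update rule and all numbers are invented for illustration .

```python
import numpy as np

rng = np.random.default_rng(0)
state = rng.uniform(0.0, 1.0, size=32)   # activity level of each machine on the lattice
power_cap = 0.8                          # illustrative per-site cap from a power budget

for _ in range(50):
    neighbour_avg = 0.5 * (np.roll(state, 1) + np.roll(state, -1))  # local information only
    state = np.minimum(0.5 * state + 0.5 * neighbour_avg, power_cap)

print(state.round(2))
```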
as technology advances to constructing early versions of microscopic robots , experimental evaluations will supplement theoretical studies . one such experiment is operating the robots in manufactured microfluidic channels . this would test the robots' ability to aggregate at chemically defined locations and generate power reliably from known chemical concentrations in the fluid . after such _ in vitro _ experiments , early _ in vivo _ tests could involve robots acting as passive sensors in the circulatory system and aggregating at chemically distinctive locations . such nanorobots will be useful not only as diagnostic tools and sophisticated extensions to drug delivery capabilities , but also as an aid to develop robot designs and control methods for more active tasks . raf acknowledges private grant support for this work from the life extension foundation and the institute for molecular manufacturing . jie bao , keiji furumoto , kimitoshi fukunaga , and katsumi nakao . a kinetic study on air oxidation of glucose catalyzed by immobilized glucose oxidase for production of calcium gluconate . , 8:91 - 102 , 2001 . anthony r. cassandra , leslie pack kaelbling , and michael l. littman . acting optimally in partially observable stochastic domains . in _ proc . of the 12th natl . conf . on artificial intelligence ( aaai94 ) _ , pages 1023 - 1028 , menlo park , ca , 1994 . aaai press . heiko hamann , heinz worn , karl crailsheim , and thomas schmickl . spatial macroscopic models of a bio - inspired robotic swarm algorithm . in _ proc . conf . on intelligent robots and systems ( iros 2008 ) _ . inria , 2008 . tad hogg and bernardo a. huberman . dynamics of large autonomous computational systems . in kagan tumer and david wolpert , editors , _ collectives and the design of complex systems _ , pages 295 - 315 . springer , new york , 2004 . balazs l. keszler , istvan j. majoros , and james r. baker jr . molecular engineering in nanotechnology : structure and composition of multifunctional devices for medical application . in _ proc . of the ninth foresight conference on molecular nanotechnology _ , 2001 . scott p. leary , charles y. liu , and michael l. j. apuzzo . toward the emergence of nanoneurosurgery : part iii - nanomedicine : targeted nanotherapy , nanosurgery , and progress toward the realization of nanoneurosurgery . , 58:1009 - 1026 , 2006 . moustafa malki , antonio l. de lacey , nuria rodriguez , ricardo amils , and victor m. fernandez . preferential use of an anode as an electron acceptor by an acidophilic bacterium in the presence of oxygen . , 74:4472 - 4476 , 2008 . dominik szczerba , gabor szekely , and haymo kurz . a multiphysics model of capillary growth and remodeling . in v. n. alexandrov et al . , editors , _ proc . of iccs , part ii _ , pages 86 - 93 , berlin , 2006 . springer . | the power available to microscopic robots ( nanorobots ) that oxidize bloodstream glucose while aggregated in circumferential rings on capillary walls is evaluated with a numerical model using axial symmetry and time - averaged release of oxygen from passing red blood cells . robots about one micron in size can produce up to several tens of picowatts , in steady - state , if they fully use oxygen reaching their surface from the blood plasma . robots with pumps and tanks for onboard oxygen storage could collect oxygen to support burst power demands two to three orders of magnitude larger . we evaluate effects of oxygen depletion and local heating on surrounding tissue .
these results give the power constraints when robots rely entirely on ambient available oxygen and identify aspects of the robot design significantly affecting available power . more generally , our numerical model provides an approach to evaluating robot design choices for nanomedicine treatments in and near capillaries . * keywords : * nanomedicine , nanorobotics , capillary , power , numerical model , oxygen transport |
most methods for estimating translation models from parallel texts ( bitexts ) start with the following intuition : words that are translations of each other are more likely to appear in corresponding bitext regions than other pairs of words .the intuition is simple , but its correct exploitation turns out to be rather subtle .most of the literature on translation model estimation presumes that corresponding regions of the input bitexts are represented by neatly aligned segments . as discovered by ,most of the bitexts available today are not easy to align . moreover ,imposing an alignment relation on such bitexts is inefficient , because alignments can not capture crossing correspondences among text segments . proposed methods for producing general bitext maps for arbitrary bitexts .the present report shows how to use bitext maps and other information to construct a model of co - occurrence .a * model of co - occurrence * is a boolean predicate , which indicates whether a given pair of word _ tokens _ co - occur in corresponding regions of the bitext space .co - occurrence is a precondition for the possibility that two tokens might be mutual translations .models of co - occurrence are the glue that binds methods for mapping bitext correspondence with methods for estimating translation models into an integrated system for exploiting parallel texts . when the model of co - occurrence is modularized away from the translation model , it also becomes easier to study translation model estimation methods _ per se_. different models of co -occurrence are possible , depending on the kind of bitext map that is available , the language - specific information that is available , and the assumptions made about the nature of translational equivalence .the following three sections explore these three variables .by definition of `` mutual translations , '' corresponding regions of a text and its translation will contain word token pairs that are mutual translations .therefore , a general representation of bitext correspondence is the natural concept on which to build a model of where mutual translations co - occur .the most general representation of bitext correspondence is a bitext map .token pairs whose co - ordinates are part of the true bitext map ( tbm ) are mutual translations , by definition of the tbm .the likelihood that two tokens are mutual translations is inversely correlated with the distance between the tokens co - ordinate in the bitext space and the interpolated tbm .it may be possible to develop translation model estimation methods that take into account a probabilistic model of co - occurrence . however , all the models in the literature are based on a boolean co - occurrence model they want to know either that two tokens co - occur or that they do not. a boolean co - occurrence predicate can be defined by setting a threshold on the distance from the interpolated bitext map .any token pair whose co - ordinate is closer than to the bitext map would be considered to co - occur by this predicate .the optimal value of varies with the language pair , the bitext genre and the application .figure [ dcooc ] illustrates what i will call the * distance - based model of co - occurrence*. 
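a distance - based co - occurrence predicate is easy to state operationally . the sketch below assumes the bitext map is given as a monotonic sequence of anchor points in the bitext space and , purely for simplicity , measures the vertical distance from a candidate token pair to the piecewise - linear interpolation of those points ; the choice of distance measure and of the threshold is an implementation decision , not something fixed by this report .

```python
import numpy as np

def cooccur(x, y, map_x, map_y, delta):
    """True if the token pair at bitext-space position (x, y) lies within
    distance delta of the piecewise-linear interpolation of the bitext map.

    map_x, map_y: coordinates of the map's anchor points, with map_x sorted
    in increasing order.
    """
    y_hat = np.interp(x, map_x, map_y)   # interpolated map position at x
    return abs(y - y_hat) <= delta

# toy example: a short bitext map and one candidate token pair
map_x = np.array([0.0, 100.0, 250.0, 400.0])
map_y = np.array([0.0, 120.0, 260.0, 410.0])
print(cooccur(150.0, 170.0, map_x, map_y, delta=25.0))   # True: |170 - 166.7| < 25
```

in practice the threshold would be tuned per language pair , bitext genre and application , as noted above .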
were the first to use a distance - based model of co - occurrence , although they measured the distance in words rather than in characters . general bitext mapping algorithms are a recent invention . so far , most researchers interested in co - occurrence of mutual translations have relied on bitexts where sentence boundaries ( or other text unit boundaries ) were easy to find . aligned text segments suggest a * boundary - based model of co - occurrence * , illustrated in figure [ scooc ] . for bitexts involving languages with similar word order , a more accurate * combined model of co - occurrence * can be built using both segment boundary information and the map - distance threshold . as shown in figure [ bcooc ] , each of these constraints eliminates the noise from a characteristic region of the bitext space . both the boundary - based and distance - based constraints restrict the region of the bitext space where tokens may be considered to co - occur . yet , these constraints do not answer the question of how to count co - occurrences within the restricted regions . it is somewhat surprising that this is a question at all , and most authors ignore it . however , when authors specify their algorithms in sufficient detail to answer this question , the most common answer ( given by several authors ) turns out to be unsound . the problem is easiest to illustrate under the boundary - based model of co - occurrence . given two aligned text segments , the naive way to count co - occurrences is cooc(u , v) = e(u) f(v) , where e(u) and f(v) are the frequencies of occurrence of u and v in their respective segments . for many * u * and * v * , e(u) and f(v) are either 0 or 1 , and equation [ naivecooc ] returns 1 just in case both words occur . the problem arises when e(u) > 1 and f(v) > 1 . for example , if e(u) = f(v) = 3 , then according to equation [ naivecooc ] , cooc(u , v) = 9 ! if the two aligned segments are really translations of each other , then it is most likely that each of the occurrences of u is a translation of just one of the occurrences of v . although it may not be known which of the 3 occurrences of v each occurrence of u corresponds to , the number of times that u and v co - occur as possible translations of each other in that segment pair must be 3 . there are various ways to arrive at this count . two of the simplest ways are cooc(u , v) = min [ e(u) , f(v) ] and cooc(u , v) = max [ e(u) , f(v) ] . equation [ mincount ] is based on the simplifying assumption that each word is translated to at most one other word . equation [ maxcount ] is based on the simplifying assumption that each word is translated to at least one other word . either simplifying assumption results in more plausible co - occurrence counts than the naive method in equation [ naivecooc ] . counting co - occurrences is more difficult under a distance - based co - occurrence model , because there are no aligned segments and consequently no useful definition for e(u) and f(v) . furthermore , under a distance - based co - occurrence model , the co - occurrence relation is not transitive . _ e.g. _ , it is possible that u co - occurs with v , v co - occurs with u' , u' co - occurs with v' , but u does not co - occur with v' . the correct counting method becomes clearer if the problem is recast in graph - theoretic terms . let the words in each half of the bitext represent the vertices on one side of a bipartite graph . let there be edges between each pair of words whose co - ordinates are closer than the threshold to the bitext map . now , under the `` at most one '' assumption of equation [ mincount ] , each co - occurrence is represented by an edge in the graph's maximum matching .
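both counting rules are easy to implement . the sketch below counts boundary - based co - occurrences with the `` at most one '' ( min ) rule and , for the distance - based case , obtains the same `` at most one '' count as the size of a maximum matching in the bipartite co - occurrence graph , using a small augmenting - path routine ; this is an illustrative implementation , not the one used in the work discussed here .

```python
from collections import Counter

def cooc_min(segment_e, segment_f):
    """'At most one' co-occurrence counts for an aligned segment pair."""
    e_freq, f_freq = Counter(segment_e), Counter(segment_f)
    return {(u, v): min(e_freq[u], f_freq[v]) for u in e_freq for v in f_freq}

def max_matching_size(edges, left_tokens):
    """Size of a maximum matching in a bipartite graph given as (left, right)
    edges; equals the total 'at most one' co-occurrence count."""
    adj = {u: [v for (a, v) in edges if a == u] for u in left_tokens}
    match_of_right = {}

    def try_assign(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_of_right or try_assign(match_of_right[v], seen):
                match_of_right[v] = u
                return True
        return False

    return sum(try_assign(u, set()) for u in left_tokens)

print(cooc_min("a b b a".split(), "x y x".split()))
edges = {(0, 0), (0, 1), (1, 0)}          # token positions linked by the bitext map
print(max_matching_size(edges, left_tokens=[0, 1]))   # 2
```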
under the `` at least one '' assumption of equation [ maxcount ] , each co - occurrence is represented by an edge in the graph s smallest vertex cover .maximum matching can be computed in polynomial time for any graph .vertex cover can be solved in polynomial time for bipartite graphs .it is of no importance that maximum matchings and minimum vertex covers may be non - unique by definition , all solutions have the same number of edges , and this number is the correct co - occurrence count .co - occurrence is a universal precondition for translational equivalence among word tokens in bitexts .other preconditions may be imposed if certain language - specific resources are available .for example , parts of speech tend to be preserved in translation . if part - of - speech taggers are available for both languages in a bitext , and if cases where one part of speech is translated to another are not important for the intended application , then we can rule out the possibility of translational equivalence for all token pairs involving different parts of speech. a more obvious source of language - specific information is a machine - readable bilingual dictionary ( mrbd ) .if token in one half of the bitext is found to co - occur with token in the other half , and is an entry in the mrbd , then it is highly likely that the tokens and are indeed mutual translations . in this case, there is no point considering the co - occurrence of or with any other token .similarly exclusive candidacy can be granted to cognate token pairs .most published translation models treat co - occurrence counts as counts of potential link tokens .more accurate models may result if the co - occurrence counts are biased with language - specific knowledge . without loss of generality , whenever translation models refer to co - occurrence counts , they can refer to co - occurrence counts that have been filtered using whatever language - specific resources happen to be available .it does not matter if there are dependencies among the different knowledge sources , as long as each is used as a simple filter on the co - occurrence relation .in this short report , i have investigated methods for modeling word token co - occurrence in parallel texts ( bitexts ) .models of co - occurrence are a precursor to all the most accurate translation models in the literature .so far , most researchers have relied on only a restricted form of co - occurrence , based on a restricted kind of bitext map , applicable to only a limited class of bitexts .a more general co - occurrence model can be based on any bitext map , and thus on any bitext . the correct method for counting the number of times that two words co - occur turns out to be rather subtle , especially for more general co - occurrence models . as noted in section [ countmethod ] , many published translation models have been based on flawed models of co - occurrence .this report has exposed the flaw and has shown how to fix it .( 1995 ) `` a pattern matching method for finding noun and proper noun translations from noisy parallel corpora , '' _ proceedings of the 33rd annual meeting of the association for computational linguistics_. boston , ma .a. kumano & h. hirakawa .( 1994 ) `` building an mt dictionary from parallel texts based on linguistic and statistical information , '' _ proceedings of the 15th international conference on computational linguistics_. kyoto , japan .j. 
kupiec .( 1993 ) `` an algorithm for finding noun phrase correspondences in bilingual corpora , '' _ proceedings of the 31st annual meeting of the association for computational linguistics_. columbus , oh . h. papageorgiou , l. cranias & s. piperidis .( 1994 ) `` automatic alignment in parallel corpora , '' _ proceedings of the 32nd annual meeting of the association for computational linguistics ( student session)_. las cruces , nm . | a * model of co - occurrence * in bitext is a boolean predicate that indicates whether a given pair of word _ tokens _ co - occur in corresponding regions of the bitext space . co - occurrence is a precondition for the possibility that two tokens might be mutual translations . models of co - occurrence are the glue that binds methods for mapping bitext correspondence with methods for estimating translation models into an integrated system for exploiting parallel texts . different models of co - occurrence are possible , depending on the kind of bitext map that is available , the language - specific information that is available , and the assumptions made about the nature of translational equivalence . although most statistical translation models are based on models of co - occurrence , modeling co - occurrence correctly is more difficult than it may at first appear . |
analyzing whether a time series is stationary or is a non - stationary random walk ( unit root process ) in the sense that the first order differences form a stationary series is an important issue in time series analysis , particularly in econometrics .often the task is to test the unit root null hypothesis against the alternative of stationarity at a pre - specified level , which ensures that a decision in favor of stationarity is statistically significant .for instance , the equilibrium analysis of macroeconomic variables as established by granger ( 1981 ) and engle and granger ( 1987 ) defines an equilibrium of two random walks as the existence of stationary linear combination .when analyzing equilibrium errors of a cointegration relationship , rejection of the null hypothesis in favor of stationarity means that the decision to believe in a valid equilibrium is statistically justified at the pre - specified level . for an approach where cusum based residual tests are employed to test the null hypothesis of cointegration , we refer to xiao and phillips ( 2002 ) .their test uses residuals calculated from the full sample . in the present articlewe study sequential monitoring procedures which aim at monitoring a time series until a time horizon to detect stationarity as soon as possible .the question whether a time series is stationary or a random walk is also of considerable importance to choose a valid method when analyzing the series to detect trends .such procedures usually assume stationarity , see steland ( 2004 , 2005a ) , pawlak et al .( 2004 ) , huskov ( 1999 ) , huskov and slab ( 2001 ) , ferger ( 1993 , 1995 ) , among others . as shown in steland ( 2005b ) ,when using nadaraya - watson type smoothers to detect drifts the limiting distributions for the random walk case differ substantially from the case of a stationary time series . to detect changes in a process ora misspecified model , a common approach originating in statistical quality control is to formulate an in - control model ( null hypothesis ) and an out - of - control model ( alternative ) , and to apply appropriate control charts resp .stopping times . given a time series a monitoring procedure with time horizon ( maximum sample size ) given by a stopping time using the convention , where , called _ control statistic _ , is a -measurable -valued statistic sensitive for the alternatives of interest , and is a measurable set such that has small probability under the null model and high probability under the alternative of interest . in most cases is of the form or for some given _ control limit _ ( critical value ) . to design monitoring procedures ,the standard approach is to choose the control limit to ensure that the average run length ( arl ) , , is greater or equal to some pre - specified value .however , controlling the significance level is a also serious concern .the results presented in this article can be used to control any characteristic of interest , although we will focus on the type i error in the sequel . the ( weighted ) dickey - fuller control chart studied in this articleis essentially based on a sequential version of the well - known dickey - fuller ( df ) unit root test , which is motivated by least squares . 
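as a concrete illustration of this set - up , the sketch below implements a generic stopping time with a fixed time horizon together with a monte carlo calibration of the control limit ; it is a toy and not the procedure proposed in this article . the classical df statistic ( the scaled least - squares estimate , introduced formally further below ) is used only as a placeholder control statistic , and the calibration targets the probability of a false signal over the horizon ; it could equally be tuned to a prescribed arl .

```python
import numpy as np

# generic one-sided monitoring rule: a control statistic is recomputed as observations
# arrive and a signal is raised the first time it falls below the control limit c.
# calibrate_control_limit() chooses c so that the chance of a false signal over the
# whole horizon is (approximately) a prescribed level alpha, by simulating the null.

def stopping_time(y, statistic, c, start):
    for t in range(start, len(y) + 1):
        if statistic(y[:t]) < c:
            return t                       # signal before the time horizon
    return len(y)                          # convention: no signal => horizon

def calibrate_control_limit(simulate_null, statistic, start, alpha, n_rep=500):
    """c = alpha-quantile of the run minimum of the statistic under the null model."""
    minima = [min(statistic(y[:t]) for t in range(start, len(y) + 1))
              for y in (simulate_null() for _ in range(n_rep))]
    return float(np.quantile(minima, alpha))

rng = np.random.default_rng(0)
simulate_rw = lambda: np.cumsum(rng.standard_normal(200))          # unit-root null
df_stat = lambda y: len(y) * (np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2) - 1.0)
c = calibrate_control_limit(simulate_rw, df_stat, start=20, alpha=0.05)
print(c, stopping_time(simulate_rw(), df_stat, c, start=20))
```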
due to its power propertiesthis test is very popular , although it is known that its statistical properties strongly depend on a correct specification of the correlation structure of the innovation sequence .the df test and its asymptotic properties , particularly its non - standard limit distribution have been studied by white ( 1958 ) , fuller ( 1976 ) , rao ( 1978 , 1980 ) , dickey and fuller ( 1979 ) , and evans and savin ( 1981 ) , chan and wei ( 1987 , 1988 ) , phillips ( 1987 ) , among others .we will generalize some of these results . to ensure quicker detection in case of a change to stationarity, we modify the df statistic by introducing kernel weights to attach small weights to summands corresponding to past observations .we provide the asymptotic theory for the related dickey - fuller ( df type ) processes and stopping times , also covering local - to - unity alternatives. for correlated error terms the asymptotic distribution of the df test statistic , and hence the control limit of a monitoring procedure , depends on a nuisance parameter , which can be estimated by newey - west type estimators .we consider two approaches to deal with that problem .firstly , based on a consistent estimate of the nuisance parameter one may take the asymptotic control limit corresponding to the estimated value .secondly , following phillips ( 1987 ) one may consider appropriate transformations of the processes possessing limit distributions which no longer dependent on the nuisance parameter .a nonparametric approach called kpss test which avoids this problem , at least for i(1 ) processes , has been proposed by kwiatkowski et al .that unit root test has better type i error accuracy , but tends to be less powerful .monitoring procedures related to this approach and their merits have been studied in detail in steland ( 2006 ) .the organisation of the paper is as follows . in section [ secmodel ]we explain and motivate carefully our assumptions on the time series model , and present the class of dickey - fuller type processes and related stopping times .the asymptotic distribution theory under the null hypothesis of a random walk is provided in section [ ash0 ] .section [ ash1 ] studies local - to - unity asymptotics , where the asymptotic distribution is driven by an ornstein - uhlenbeck process instead of the brownian motion appearing in the unit root case . finally , in section [ secsim ] we compare the methods by simulations .our results work under quite general nonparametric assumptions allowing for dependencies and conditional heteroskedasticity ( garch effects ) , thus providing a nonparametric view on the parametrically motivated approach . to motivate our assumptions , let us consider the following common time series model , which is often used in applications .suppose at this end that is an ar( ) time series , i.e. , for starting values , where are i.i.d .error terms ( innovations ) with and , . assume the characteristic polynomial has a unit root , i.e. , , of multiplicity , and all other roots are outside the unit circle , i.e. , implies .then for some polynomial with has no roots in the unit circle implying that exists for all .we obtain , where denotes the lag operator .since can be inverted , we have the representation for coefficients this means , satisfies an ar( ) model with correlated errors . for the calculation of refer to brockwell and davis ( 1991 , sec . 3.3 . 
) in particular , to analyze an ar( ) series for a unit root , one can work with an ar( ) model with correlated errors .the representation ( [ arpasar1 ] ) motivates the following time series framework which will be assumed in the sequel .suppose we are given an univariate time series satisfying where ] of all cadlag functions equipped with the skorokhod metric .the assumption that satisfies an invariance principle can be regarded as a nonparametric definition of the property ensuring that the partial sums converge weakly to a ( scaled ) brownian motion . fora parametrically oriented definition see stock ( 1994 ) .particularly , the scale parameter is given by also introduce the notations if the are uncorrelated , we have , and . as a non - trivial example for processes satisfying ( e1 )let us consider arch processes .a time series satisfies arch( ) equations , if there exists a sequence of i.i.d .non - negative random variables , , such that where , , this model is often applied to model conditional heteroscedasticity of an uncorrelated sequence with for all , by putting .a common choice for is to assume that the are i.i.d . with common standard normal distribution . in giraitiset al . ( 2003 ) it has been shown that an unique and strictly stationary solution exists and satisfies , if in addition , under these conditions the functional central limit theorem ( [ fclt2 ] ) holds . the rate of decay of the coefficients controls the asymptotic behavior of . if for some and we have , , then there exists such that for .thus , depending on the rate of decay ( e2 ) may also holds .assumption ( e2 ) will be used to verify a tightness criterion . combined with appropriate moment conditions it implies the invariance principles ( [ fclt1 ] ) and ( [ fclt2 ] ) .we will now introduce the class of dickey - fuller processes and related detection procedures . recall that the least squares estimator of the parameter in model ( [ armodel ] ) is given by to test the null hypothesis , one forms the dickey - fuller ( df ) test statistic suppose at this point that the are uncorrelated . provided , as . however , has a different convergence rate and a non - normal limit distribution , if .it is known that as , see white ( 1958 ) , fuller ( 1976 ) , rao ( 1978 , 1980 ) , dickey and fuller ( 1979 ) , and evans and savin ( 1981 ) . recall that denotes standard brownian motion .based on that result one can construct a statistical level test , which rejects the null hypothesis of a unit root against the alternative if , where the critical value is the -quantile of the distribution of . more generally , we want to construct a detection rule which provides a signal if there is some change - point such that form a random walk ( unit root process ) , and form an with dependent innovations .this means , the alternative hypothesis is , where , , specifies that where .however , for the calculation of the detection rule to be introduced now knowledge of a specific alternative hypothesis is not required .a naive approach to monitor a time series to check for deviations from the unit root hypothesis is to apply the df statistic at each time point using the most recent observations .a more sophisticated version of this idea is to modify the df statistic to ensure that summands in the numerator have small weight if their time distance to the current time point is large . to define such a detection rule ,let us introduce the following sequential kernel - weighted dickey - fuller ( df ) process ,\ ] ] where . 
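before the regularity conditions on the kernel are stated , the recipe can be illustrated with a small sketch . the two functions below do not reproduce the displayed definition ( in particular its exact normalisation is not copied ) ; they only implement the idea described in the prose : re - evaluate a df - type statistic at each time point , either over the most recent observations or with kernel weights that shrink as the distance of a summand from the current time point grows . the exponential kernel , the window length and the scaling by the current sample size are illustrative choices , not the specification used in the paper .

```python
import numpy as np

# two monitoring statistics evaluated at the current time t = len(y):
#  - rolling_df: the naive approach, a df statistic over the last m observations only
#  - kernel_weighted_df: summands y_{i-1} * (y_i - y_{i-1}) in the numerator receive a
#    kernel weight K((t - i) / h); the denominator is left unweighted, matching the
#    description given further below in the text.

def rolling_df(y, m):
    w = np.asarray(y[-m:], dtype=float)
    rho_hat = np.sum(w[:-1] * w[1:]) / np.sum(w[:-1] ** 2)
    return m * (rho_hat - 1.0)

def kernel_weighted_df(y, h, kernel=lambda u: np.exp(-u)):
    y = np.asarray(y, dtype=float)
    t = len(y)
    i = np.arange(1, t)                          # index of each increment y_i - y_{i-1}
    weights = kernel((t - i) / h)                # recent increments get weight near K(0)
    num = np.sum(weights * y[:-1] * np.diff(y))
    den = np.sum(y[:-1] ** 2)
    return t * num / den

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(300))       # a unit-root path
print(rolling_df(walk, 100), kernel_weighted_df(walk, h=100.0))
```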
here and in the following we put for convenience .note that plays the role of the current time point .the non - negative smoothing kernel is used to attach smaller weights to summands from the distant past to avoid that such summands dominate the sum .thus , kernels ensuring that , , is decreasing are appropriate , but that property is not required .we do not use kernel weights in the denominator , since it is used to estimate a nuisance parameter .we will require the following regularity conditions for .* , and .* is with bounded derivative .* has bounded variation .note that it is not required to use a kernel with compact support .the parameter is used as a scaling constant in the kernel and defines the memory of the procedure .for instance , if if ] . hence ,if is chosen from the asymptotic distribution via ( [ aslimitalpha ] ) , is a function of .therefore , the basic idea is to estimate at each time point using only past and current data , and to use the corresponding limit .our estimator for will be based on a newey - west type estimator , thus circumventing the problem to specify the short memory dynamics of the process explicitly .let and denote by , , the autocorrelation function of the time series . since if , we can estimate and under the null hypothesis by the parameter can now be estimated by the newey - west estimator given by where are the bartlett weights and is a lag truncation parameter , see newey and west ( 1987 ) . andrews ( 1991 ) studies more general weighting functions and shows that the rate is sufficient for consistency .the dickey - fuller control chart for correlated time series works now as follows . at each time point we estimate by and calculate the corresponding estimated control limit .a signal is given if is less than the estimated control limit , i.e. , we use the rule alternatively , one may use a transformation of , namely .\ ] ] it seems that this transformation idea dates back to phillips ( 1987 ) .we will show that for arbitrary the process converges weakly to the limit of for .consequently , if denotes the control limit ensuring that has size when , then the detection rule has asymptotic size for any . in the next section we shall show that both procedures are asymptotically valid .inference on the ar parameter in the unit root case is often based on the -statistic associated with , which gives rise to dickey - fuller -processes .the dickey - fuller -statistic , , associated with , is the standard computer output quantity when running a regression of on . for a sample , the statistic is defined as where with .the formula for motivates to scale analogously .hence , let us define the weighted -type df process by ,\ ] ] and . 
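returning briefly to the newey - west construction above , the sketch below spells out a textbook version of the estimator with bartlett weights together with the resulting `` estimated control limit '' monitoring loop : at each time point the nuisance parameter is re - estimated from the current first differences and the statistic is compared with a control limit that depends on that estimate . the lag truncation rule , the map from the estimate to a control limit and the df - type statistic used in the demonstration are placeholders , and none of this is code from the paper .

```python
import numpy as np

def newey_west(x, m):
    """long-run variance estimate gamma_0 + 2 * sum_{j=1..m} (1 - j/(m+1)) * gamma_j."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acc = np.dot(x, x) / n                              # gamma_0
    for j in range(1, m + 1):
        gamma_j = np.dot(x[j:], x[:-j]) / n
        acc += 2.0 * (1.0 - j / (m + 1.0)) * gamma_j    # bartlett weight
    return acc

def monitor_with_estimated_limit(y, statistic, control_limit_of, start, lag_of):
    """signal at the first t >= start with statistic(y[:t]) < control_limit_of(estimate)."""
    for t in range(start, len(y) + 1):
        eps_hat = np.diff(y[:t])                        # residuals under the unit-root null
        nuisance_hat = newey_west(eps_hat, lag_of(t))
        if statistic(y[:t]) < control_limit_of(nuisance_hat):
            return t
    return len(y)

rng = np.random.default_rng(2)
walk = np.cumsum(rng.standard_normal(400))
df_stat = lambda y: len(y) * (np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2) - 1.0)
t_signal = monitor_with_estimated_limit(
    walk, df_stat,
    control_limit_of=lambda est: -20.0,                 # placeholder: in the paper the limit
    start=50,                                           # is read off the asymptotic law as a
    lag_of=lambda t: int(round(t ** (1.0 / 3.0))))      # function of the estimated parameter
print(t_signal)
```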
is a weighted version of calculated using the observations , and attaching kernel weights to the summand in the numerator .the associated detection rule for known is defined as with such that .again , it turns out that the asymptotic limit of depends on the nuisance parameter .the weighted -type df control chart with estimated control limits is defined as alternatively , one can transform the process to achieve that the asymptotic limit is invariant with respect to .we define .\ ] ] we will show that the detection rule has asymptotic type i error equal to for all .in this section we provide functional central limit theorems for the dickey - fuller processes defined in the previous section under a random walk model assumption corresponding to the null hypothesis in model ( [ armodel ] ) , and the related central limit theorem for the associated stopping rules .these results can be used to design tests and detection procedures having well - defined statistical properties under the null hypothesis .we start with the following functional central limit theorem providing the limit distribution of the weighted df process , },\ ] ] as , where the stochastic process ] -valued stochastic processes , , and are given by for ] yielding uniformly in ] , ] .next note that we are now in a position to verify joint weak convergence of numerator and denominator of .the lipschitz continuity of ensures that up to terms of order for all the linear combination is a functional of , and that functional is continuous .therefore , the continuous mapping theorem ( cmt ) entails weak convergence to the stochastic process \\ & & \quad + \lambda_2 \frac{\eta^2}{s^2 } \int_0^s b(r)^2 \ , dr.\end{aligned}\ ] ] this verifies joint weak convergence of .hence , the result follows by the cmt .( k2 ) also ensures that ] a.s .for brevity we omit the details .let us now show consistency of the detection procedure , which uses estimated control limits .[ thestcl ] assume ( e1 ) and ( e2 ) , ( k1)-(k3 ) , and in addition that the lag truncation parameter , , of the newey - west estimator satisfies then the weighted dickey - fuller type control chart with estimated control limit , , is consistent , i.e. , as . notethat the equivalence } d_t(s ) / c ( \widehat{\vartheta } _ { { { \lfloor ts \rfloor } } } ) \ge 1 ] , as .since } { \mathcal{d}}_{\vartheta^*}(s ) ] to } { \mathcal{d}}_{\vartheta^*}(s ) \le z ) ] . since for each ] , which is a functional of . clearly , by the cauchy - schwarz inequality and ( e1 ) for some .further , since and , the mixing coefficients of satisfy where are the mixing coefficients of . due to ( e1 ) we can apply yokohama ( 1980 , th.1 ) with to conclude that for now the decomposition and the triangle inequality yield since , firstly , we may assume , and , secondly , both and are bounded away from and for . consequently , and therefore vaart and wellner ( 1986 , ex .2.2.3 ) implies tightness of the process \ } ] is tight in the product space , which implies weak convergence of \ } ] , as . due to( [ wctheta ] ) we can conclude that in the product space ) ^2 ] given by } \frac{x(s)}{c(y(s ) ) } , \qquad x , y \in d[\kappa,1 ] , \ y\in { \mathbb{r } } , \ ] ] is continuous in all )^2 ] w.p . 
and , ( [ wc2 ] ) follows .it remains to provide the related weak convergence results for the transformed process and its natural detection rule .assume ( e1),(e2 ) , and ( k1)-(k3 ) .additionally assume that the lag truncation parameter , , of the newey - west estimator satisfies then , ,d)}\ ] ] as , where for ] .note that for hence , we obtain from the proof of theorem [ thbasic ] we know that } { { \lfloor ts \rfloor } } ^{-2 } \sum_{t=1}^ { { { \lfloor ts \rfloor } } } y_{t-1}^2 = \sup_{s \in ( 0,1 ] } \left ( \frac { t } { { { \lfloor ts \rfloor } } } \right)^2 \int_0^s ( t^{-1/2 } y _ { { { \lfloor tr \rfloor } } } ) ^2 \ , dr = o_p(1)\ ] ] and } \left| { { \lfloor ts \rfloor } } ^{-1 } \sum_{t=1}^ { { { \lfloor ts \rfloor } } } \epsilon_t y_{t-1 } \right| \le \sup_{s \in ( 0,1 ] } \left| { { \lfloor ts \rfloor } } ^{-1/2 } \sum_{t=1}^ { { { \lfloor ts \rfloor } } } \epsilon_t \right| \sup_{s \in ( 0,1 ] } | { { \lfloor ts \rfloor } } ^{-1/2 } y _ { { { \lfloor ts \rfloor } } } | = o_p(1).\ ] ] combining these facts with } { { \lfloor ts \rfloor } } | \widehat{\rho } _ { { { \lfloor ts \rfloor } } } - 1 | = o_p(1 ) ] . because ( e1 ) implies that we may apply the law of large numbers for time series ( brockwell and davis ( 1991 ) , th .7.1.1 ) and obtain , since stochastic convergence to a constant yields stochastic convergence in the skorokhod topology , as . we shall now show joint weak convergence of , ] .assume ( e1),(e2 ) , ( k1)-(k3 ) , and additionally that the lag truncation parameter of the newey - west estimator satisfies then the -type weighted dickey - fuller control chart with estimated control limits , , is consistent , i.e. , as . the result is shown along the lines of the proof of theorem [ thestcl ] , since the process is continuous w.p . , and is a continuous function of . finally , for the transformed process and the associated control chart we have the following result .assume ( e1),(e2 ) , ( k1)-(k3 ) , and then the transformed -type weighted df process , defined in ( [ deftranst ] ) , converges weakly , ,d) ] .hence , the construction of the correction term is as for .in econometric applications , the stationary alternatives of interest are often of the form with small .to mimic this situation asymptotically , we consider a local - to - unity model where the ar parameter depends on and tends to , as the time horizon increases . the functional central limit theorem given below shows that the asymptotic distribution under local - to - unity alternatives is also affected by the nuisance parameter .however , the term which depends on the parameter parameterising the local alternative does not depend on ( or ) . therefore ,if one takes the nuisance parameter into account when designing a detection procedure , we obtain local asymptotic power .let us assume that we are given an array of observations satisfying where the sequence of ar parameters is given by for some constant . is a mean - zero stationary i(0 ) process satisfying ( e1 ) . for brevity of notation in this section the process ( [ defdt ] ) with replaced by . the limit distribution will be driven by an ornstein - uhlenbeck process .recall that the ornstein - uhlenbeck process with parameter is defined by ,\ ] ] where denotes brownian motion .[ thlocalunity ] assume ( e1 ) , and ( k1)-(k3 ) . under the local - to - unity model ( [ localtounity ] ) we have for the weighted dickey - fuller process as , where the a.s . ] , and . 
here denotes the ornstein - uhlenbeck process defined in ( [ defou ] ) .further , : { \mathcal{d}}_\vartheta^a(s ) < c \ } , \qquad \mbox{as } .\ ] ] the crucial arguments to obtain _ joint _ weak convergence of numerator and denominator of have been given in detail in the proof of theorem [ thbasic ] .therefore , we give only a sketch of the proof stressing the essential differences .first , note that for the step function , ] the term can be treated as in the proof of theorem [ thbasic ] , namely , from the proof of theorem [ thbasic ] we know that due to ( e1 ) as .consider now . by definition of obtain where due to ( k2 ) the term is uniform in ] & ] & ] + & & & & & + & & & & & + & ] & ] & ] & ] & ] + + + & & & & & + & & & & & + & & & & & + & ] & ] & ] & ] & ] + & & & & & + & & & & & + & ] & ] & ] & ] & ] + & & & & & + & & & & & + & ] & ] & ] & ] & ] + + + & & & & & + & & & & & + & & & & & + & ] & ] & ] & ] & ] + & & & & & + & & & & & + & ] & ] & $ ] +the support of deutsche forschungsgemeinschaft ( sfb 475 , _ reduction of complexity in multivariate data structures _ ) is gratefully acknowledged .i thank dipl .- math .sabine teller for proof - reading .kwiatkowski , d. , phillips , p.c.b . ,schmidt , p. , and shin , y. ( 1992 ) .testing the null hypothesis of stationary against the alternative of a unit root : how sure are we that economic time series have a unit root ?_ journal of econometrics _ , * 54 * , 159 - 178 . | aiming at monitoring a time series to detect stationarity as soon as possible , we introduce monitoring procedures based on kernel - weighted sequential dickey - fuller ( df ) processes , and related stopping times , which may be called weighted dickey - fuller control charts . under rather weak assumptions , ( functional ) central limit theorems are established under the unit root null hypothesis and local - to - unity alternatives . for general dependent and heterogeneous innovation sequences the limit processes depend on a nuisance parameter . in this case of practical interest , one can use estimated control limits obtained from the estimated asymptotic law . another easy - to - use approach is to transform the df processes to obtain limit laws which are invariant with respect to the nuisance parameter . we provide asymptotic theory for both approaches and compare their statistical behavior in finite samples by simulation . + * keywords : * autoregressive unit root , change point , control chart , nonparametric smoothing , sequential analysis , robustness . weighted dickey - fuller processes for detecting stationarity ansgar steland + institute of statistics , + rwth aachen university , germany + steland.rwth-aachen.de |
application of particle induced x - ray emission ( pixe ) to non - destructive trace element analysis of materials has first been proposed by johansson and co - workers in 1970 .today , this experimental technique is widely exploited in diverse fields . the physical process of pixe may also give rise to unwanted instrumental background x - ray lines , as is the case for space missions and for some laboratory environments .it also affects the spatial distribution of the energy deposit associated with the passage of charged particles in matter : in this respect , its effects may become significant in the domain of microdosimetry .the wide application of this experimental technique has motivated the development of several dedicated software systems ; nevertheless , despite its large experimental interest , limited functionality for pixe simulation is available in general - purpose monte carlo codes .this paper discusses the problem of simulating pixe in the context of a general - purpose monte carlo system and describes a set of developments to improve its simulation with geant4 .finally , it illustrates an application of the developed pixe simulation prototype to the optimizationof the passive shield of the x - ray detectors of the erosita ( extended roentgen survey with an imaging telescope array ) telescope on the _ spectrum - x - gamma _ space mission .software tools are available in support of pixe experimental applications as specialized codes or included in general - purpose simulation systems .dedicated pixe codes are focussed on the application of this technique to elemental analysis .they are concerned with the calculation of experimentally relevant x - ray yields resulting from the irradiation of a material sample by an ion beam : primarily transitions concerning the k shell , and in second instance transitions originating from vacancies in the l shell . for this purpose various analysis programshave been developed , which are able to solve the inverse problem of determining the composition of the sample from an iterative fitting of a pixe spectrum ; some among them are geopixe , gupix , pixan , pixeklm , sapix , winaxil and wits - hex .a few codes concern pixe simulation specifically .these codes share basic physics modelling options , like the adoption of the ecpssr ( energy - loss coulomb - repulsion perturbed stationary state relativistic ) model for the calculation of ionization cross sections ; they handle simple experimental geometries , such as target materials consisting of layers , and impose limitations on the type of samples that can be analyzed .dedicated pixe software systems have a limited application scope , as they lack the capability of dealing with complex experimental configurations .comprehensive modelling capabilities are usually associated with general - purpose monte carlo systems .however , the simulation of pixe is not widely covered by large scale monte carlo codes treating hadron interactions , while the conceptually similar simulation of electron impact ionisation is implemented in the egs and penelope electron - photon monte carlo systems .the geant4 simulation toolkit addresses x - ray emission induced both by electrons and heavy particles like protons and particles .the physics that needs to be considered for the simulation of pixe includes the energy loss and scattering of the incident charged particle , atomic shell ionization cross sections , and atomic transition probabilities and energies . 
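before turning to the interplay with the particle transport scheme , the last ingredient in this list , the conversion of a shell vacancy into fluorescence or auger emission , can be sketched in a few lines . the snippet below is schematic and is not geant4 code : yields , relative transition rates and photon energies are passed in as data , and the numbers used in the demonstration are placeholders ( the two energies merely echo the cu k lines quoted later in the text ) .

```python
import numpy as np

# schematic deexcitation of a single shell vacancy: with probability equal to the
# fluorescence yield the vacancy is filled radiatively and a transition is drawn from
# the supplied table of (relative_rate, photon_energy_kev) pairs; otherwise an auger
# electron is emitted. all numerical inputs below are placeholders, not tabulated data.

def deexcite(fluorescence_yield, transitions, rng):
    if rng.random() >= fluorescence_yield:
        return ("auger_electron", None)                     # non-radiative fill
    rates = np.array([rate for rate, _ in transitions], dtype=float)
    idx = rng.choice(len(transitions), p=rates / rates.sum())
    return ("fluorescence_photon", transitions[idx][1])     # photon energy in kev

rng = np.random.default_rng(3)
placeholder_transitions = [(0.9, 8.0), (0.1, 8.9)]
print([deexcite(0.4, placeholder_transitions, rng) for _ in range(4)])
```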
on top of these physics features ,consistency should be ensured , when modelling pixe , with the particle transport schemes governing the monte carlo simulation .intrinsically , pixe is a discrete process : x - ray emission occurs as the result of producing a vacancy in atomic shell occupancy , in competition with auger electron emission and coster - kronig transitions .nevertheless , this discrete process is intertwined with the ionization process , which determines the production of the vacancy ; this process , for reasons which are elucidated below , is treated in general - purpose monte carlo codes with mixed condensed and discrete transport schemes .the transport scheme to which ionization is subject affects the simulation of pixe .the simulation of the energy loss of a charged particle due to ionization is affected by the infrared divergence of the cross section for producing secondary electrons . in the context of general purpose monte carlo systems ,a discrete treatment of each individual ionization process becomes inhibitive , since , due to their large number , the required computation time becomes excessive .infrared divergence is usually handled in general - purpose monte carlo codes by adopting a condensed - random - walk scheme for particle transport .in such a scheme , the particle s energy loss and deflection are treated as averaged net effect of many discrete interactions along the step , thereby substituting in the simulation a single continuous process for the many discrete processes that actually occur . in a mixed scheme , like the one adopted by geant4 ,two different rgimes of particle transport are introduced , which are distinguished through a secondary production threshold setting , i.e. a threshold for the kinetic energy of the electron that is kicked out of an atom as a result of ionization ( the so - called -ray ) : all ionizations that would generate -rays below the threshold are treated as a continuous process along the step , while the ionizations that produce -rays above the threshold are treated as a discrete process .while this combined condensed - random - walk and discrete particle transport scheme is conceptually appealing and appropriate to many simulation applications , it suffers from drawbacks with respect to the generation of pixe .one drawback is that atomic relaxation occurs only in connection with the discrete part of the transport scheme , where the event of producing a -ray can be associated with the creation of a vacancy in the shell occupancy .for the same reason , the fluorescence yield depends on the threshold for the kinetic energy of the secondary electron .another drawback of the current transport scheme is that the cross section for discrete hadron ionization , i.e. for production of a -ray , is calculated from a model for energy loss that is independent of the shell where the ionization occurs . 
while theoretical calculations are available to determine the spectrum of the emitted electron for each sub - shell in the case of electron impact ionization for any element , to the authors knowledge no such facilities are currently available for the ionization induced by protons and ions .experimental data are not adequate either to complement the lack of theoretical calculations .at the present time , the geant4 toolkit does not provide adequate capabilities for the simulation of pixe in realistic experimental use cases .the first development cycle had a limited scope : the implementations concerned only pixe induced by particles and involving k shell ionization - apart from the implementation based on gryzinski s theoretical model , which produces physically incorrect results .even with the models which calculate k shell ionization cross sections correctly , inconsistencies arise in the simulation of pixe related to the algorithm implemented for determining the production of a k shell vacancy .the deficiencies exhibited by the software released in geant4 9.2 did not contribute to improve pixe simulation with respect to the previous version .a set of developments for pixe simulation is described in the following sections .a preliminary overview of their progress was reported in .the physics aspects associated with pixe involve the creation of a vacancy in the shell occupancy due to ionization , and the emission of x - rays from the following atomic deexcitation .the former requires the knowledge of ionization cross sections detailed at the level of individual shells : for this purpose several models have been implemented and validated against experimental data .the latter exploits the existing functionality of geant4 atomic relaxation package .the domain decomposition at the basis of pixe simulation with geant4 identified three main entities with associated responsibilities : the hadron ionization process , the creation of a vacancy in the shell occupancy resulting from ionisation , the deexcitation of the ionised atom with the associated generation of x - rays .the simulation of pixe is the result of the collaboration of these entities .a class diagram in the unified modelling language ( uml ) illustrates the main features of the software design in fig .[ uml_fig ] . the simulation of pixe concerns a variety of experimental applications , that require the capability of calculating ionisation cross sections over an extended energy range : from a few mev typical of material analysis applications to hundreds mev or gev range of astrophysical applications .various theoretical and empirical models are available in literature to describe ionisation cross sections for different interacting particles , as well as compilations of experimental data . however , there is limited documentation in literature of systematic , quantitative assessments of the accuracy of the various models .the current software prototype has adopted the strategy of providing an extensive collection of ionisation cross section models as a function of element , atomic ( sub-)shell , and incident particle kinetic energy . according to the chosen strategy ,the provision of a cross section model is reduced to the construction of tabulations of its values at preset energies .the cross sections associated with the models described in this paper have been pre - calculated either using existing software documented in literature , or developing ad hoc code . 
the data are stored in files , which make up a data library for pixe simulation with geant4 - but could be used also by other codes .the cross section data sets selected by the user are loaded into memory at runtime ; cross section values at a given energy are calculated by interpolation over the tabulated values whenever required .the adopted data - driven approach for the provision of ionization cross sections presents various advantages .it optimizes performance speed , since the calculation of the interpolation is faster than the calculation from complex algorithms implementing theoretical models .this approach also offers flexibility : chosing a cross section model simply amounts to reading the corresponding set of data files ; adding a new set of cross sections simply amounts to providing the corresponding set of data files , which are handled transparently .finally , the cross section data are transparent to the user : the files are accessible to the user and human readable . a wide choice of cross section models for k , l and m shell ionization is provided in the prototype software for protons and particles .the availability of ionization cross section calculations and experimental data for outer shells is very limited in literature .theoretical cross section models include plane wave born approximation ( pwba ) and variants of the ecpssr model : the original ecpssr formulation , ecpssr with united atom correction ( ecpssr - ua ) , ecpssr with corrections for the dirac - hartree - slater nature of the k shell ( ecpssr - hs ) , as well as calculations based on recent improvements to k shell cross section specific to high energy ( ecpssr - he ) .the cross sections have been tabulated and assembled in a data library ; the values at a given energy are calculated by interpolation .the tabulations corresponding to theoretical calculations span the energy range between 10 kev and 10 gev ; empirical models are tabulated consistently with their energy range of validity .the adopted data - driven approach optimizes performance speed and offers flexibility for chosing a cross section model .ecpssr tabulations have been produced using the isics software , 2006 version and an extended version including recent high energy developments .tabulations of ecpssr calculations as reported in are also provided .empirical cross section models for k shell ionization include the tabulations for protons documented in and a more recent one . an empirical cross section model for k shell ionization by particlesis based on the tabulations in .empirical models for l shell ionization by protons have been developed by miyagawa et al . , sow et al . and orlic et al . .the isics software allows the calculation of cross sections for heavier ions as well ; therefore , the current pixe simulation capabilities can be easily extended in future development cycles .the determination of which atomic ( sub-)shell is ionised is related to its ionisation cross section with respect to the total cross section for ionising the target atom . 
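the data - driven strategy and the shell selection just mentioned can be summarised in a short sketch . the code below is illustrative only : the file - based library is reduced to in - memory ( energy , cross section ) tables , log - log interpolation is an assumption on our part rather than a statement about the actual implementation , and the two - point tables in the example are invented .

```python
import numpy as np

# interpolate tabulated shell ionisation cross sections and sample the ionised
# sub-shell in proportion to its share of the summed cross section.

def interpolate_sigma(energy, table):
    """table: array with columns (energy, cross_section), sorted by increasing energy."""
    log_e, log_s = np.log(table[:, 0]), np.log(table[:, 1])
    return float(np.exp(np.interp(np.log(energy), log_e, log_s)))

def sample_shell(energy, shell_tables, rng):
    sigmas = {shell: interpolate_sigma(energy, tab) for shell, tab in shell_tables.items()}
    total = sum(sigmas.values())
    shells = list(sigmas)
    probs = np.array([sigmas[s] / total for s in shells])
    return shells[rng.choice(len(shells), p=probs)], sigmas

# invented two-point tables; a real library is far denser and spans 10 kev to 10 gev
tables = {"K":  np.array([[1.0, 10.0], [10.0, 40.0]]),
          "L1": np.array([[1.0, 50.0], [10.0, 90.0]])}
rng = np.random.default_rng(4)
print(sample_shell(3.0, tables, rng))
```

as noted above , replacing the evaluation of the full theoretical formulas by table look - up and interpolation is what makes the approach both fast and easy to extend with new data sets .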
however, as previously discussed , the condensed - random - walk scheme raises an issue as to estimating the total ionisation cross section at a given energy of the incident particle .a different algorithm has been adopted with respect to the one implemented in the first development cycle : the vacancy in the shell occupancy is determined based on the total cross section calculated by summing all the individual shell ionisation cross sections .this algorithm provides a correct distribution of the produced vacancies as long as ionisation cross sections can be calculated for all the atomic shells involved in the atomic structure of the target element . sincecross section models are currently available for k , l and m shells only , at the present status of the software this algorithm overestimates pixe for elements whose atomic structure involves outer shells , because of the implicit underestimation of the total ionization cross section .this approach , however , provides better control on the simulation results than the algorithm implemented in the first development cycle .the production of secondary particles by the atomic relaxation of an ionized atom is delegated to the atomic relaxation component .the availability of a wide variety of cross section models for the first time in the same computational environment allowed a detailed comparative assessment of their features against experimental data .the comparison of cross sections as a function of energy was performed for each element by means of the test .contingency tables were built on the basis of the outcome of the test to determine the equivalent , or different behavior of model categories .the input to contingency tables derived from the results of the test : they were classified respectively as `` pass '' or `` fail '' according to whether the corresponding p - value was consistent with a 95% confidence level .the contingency tables were analyzed with fisher exact test .the reference experimental data were extracted from .an example of how the simulation models reproduce experimental measurements is shown in fig .[ fig_kexp ] . & kahoul + + 95% & 67 & 74 & 77 & 68 & 71 & 46 + 99% & 85 & 83 & 83 & 85 & 80 & 57 + + 95% & 69 & 75 & 86 & 69 & 70 & 48 + 99% & 83 & 81 & 91 & 83 & 80 & 56 + the fraction of test cases for which the test fails to reject the null hypothesis at the 95% and 99% confidence level are listed in table [ tab_kpass ] : all the cross section models implemented in the simulation exhibit equivalent behaviour regards the compatibility with the experimental data , with the exception of the kahoul et althe contingency table comparing the kahoul et al .and ecpssr - hs models confirms that the two models show a statistically significant difference regards their accuracy ( p - value of 0.001 ) .the contingency tables associated with the other models show that they are statistically equivalent regarding their accuracy .however , when only the lower energy range ( below 5 - 7 mev , depending on the atomic number ) is considered , a statistically significant difference at the 95% confidence level ( p - value of 0.034 ) is observed between the ecpssr model and the ecpssr - hs one ; the latter is more accurate with respect to experimental data . 
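the validation machinery used here , a chi - square comparison per test case followed by a fisher exact test on the resulting pass / fail contingency table , is standard and can be sketched as follows . the degrees - of - freedom convention is a simplification , and the counts in the example are constructed from the 95% figures quoted above under the invented assumption of 100 test cases per model , so the resulting p - value is only indicative .

```python
import numpy as np
from scipy import stats

def passes_chi2(model, data, sigma, alpha=0.05):
    """chi-square compatibility test of a model curve against measured points."""
    chi2 = float(np.sum(((model - data) / sigma) ** 2))
    p_value = stats.chi2.sf(chi2, df=len(data))          # simplified dof choice
    return p_value > alpha                               # "pass": compatible at level alpha

print(passes_chi2(np.array([1.0, 2.0, 3.0]),
                  np.array([1.1, 1.9, 3.2]),
                  np.array([0.2, 0.2, 0.3])))

# 2x2 pass/fail table for two models, analysed with fisher's exact test
table = np.array([[67, 33],      # a model passing 67% of the test cases
                  [46, 54]])     # a model passing 46% of the test cases
odds_ratio, p_value = stats.fisher_exact(table)
print(f"fisher exact p-value: {p_value:.3g}")            # small p => accuracies differ
```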
from this analysisone can conclude that the implemented k shell ionization cross section models exhibit a satisfactory accuracy with respect to experimental measurements .the cross sections for l sub - shell ionization cross sections were compared to the experimental data collected in two complementary compilations , . an example of how the simulation models reproduce experimental measurementsis shown in fig .[ fig_lexp ] .the same method was applied as described for the validation of k shell cross sections .the ecpssr model appears to provide a satisfactory representation of l shell ionisation cross sections with respect to experimental data , especially with its united atom variant .the ecpssr - ua exhibits the best overall accuracy among the various models ; the orlic et al .model exhibits the worst accuracy with respect to experimental data .this semi - empirical model is the only option implemented in geant4 9.2 for the calculation of l shell ionization cross sections .the accuracy of the various cross section models was studied by means of contingency tables to evaluate their differences quantitatively .the categorical analysis was performed between the ecpssr model with united atom correction , i.e. the model showing the best accuracy according to the results of the test , and the other cross section models .the contingency tables were built based on the results of the test at the 95% confidence level , summing the `` pass '' and `` fail '' outcome over the three sub - shells .the orlic et al .semi - empirical model is found to be significantly less accurate than the ecpssr - ua model : the hypothesis of equivalence of their accuracy with respect to experimental data is rejected at 99% confidence level .the p - values concerning the comparison of the miyagawa et al . empirical modelare close to the critical region for 95% confidence , and slightly different for the three tests performed on the related contingency table .the sow et al . empirical model and the ecpssr model in its originalformulation appears statistically equivalent in accuracy to the ecpssr model with united atom correction . as a result of this analysis , the ecpssr model with united atom approximationcan be recommended for usage in geant4-based simulation applications as the most accurate option for l shell ionization cross sections .the ecpssr model in its original formulation can be considered a satisfactory alternative ; the sow et al .empirical model has satisfactory accuracy , but limited applicability regards the target elements and proton energies it can handle .pixe as a technique for elemental analysis is usually performed with proton beams of a few mev . in the recent years, higher energy proton beams of a few tens mev have been effectively exploited too .hhigh energy protons are a source of pixe in the space radiation environment .the interest in high energy pixe has motivated recent theoretical investigations concerning cross section calculations at higher energies . despite the emerging interest of high energy pixe ,only a limited set of experimental data is available above the energy range of conventional pixe techniques .the accuracy of the implemented k shell cross section models was evaluated against two sets of measurements at higher energy , respectively at 66 and 68 mev . 
the experimental measurement with uranium was not included in the comparison , since it appears affected by some experimental systematics .the test was performed first separately on either experimental data set to evaluate the possible presence of any systematics in the two test cases , then on the combined data set .the p - values from the test against these experimental data are listed in table [ tab_hepixe ] . over the limited data sample considered in this test, the ecpssr model with the correction in model does not appear to provide better accuracy than the original ecpssr formulation ; nevertheless more high energy experimental data would be required to reach a firm conclusion .also , this analysis should be verified over tabulation deriving from a published version of the isics code , when it becomes available . the test over the experimental data at 160 mev collected in results in p - values less than 0.001 for all the ecpssr model variants .the rejection of the null hypothesis could be ascribed either to the deficiency of the theory or to systematic effects affecting the measurements ; further data would be required for a sound assessment . .p - values from the test concerning high energy experimental data [ cols="<,^,^,^,^ " , ]the prototype components for pixe simulation described in the previous sections were applied to a study of the passive , graded z shielding of the x - ray detectors of the erosita telescope on the upcoming russian _spectrum - x - gamma _ space mission .the purpose of the passive shielding is two - fold .firstly , the passive shielding prevents abundant low - energy cosmic - ray particles from reaching the detectors and thus from causing radiation damage .secondly , the passive graded z shielding serves to reduce instrumental background noise to a minimum .this background noise consists of both fluorescence lines and continuum background due to bremsstrahlung photons and rays from cosmic - ray interactions .the event timing accuracy of current imaging si detectors for x - ray astronomy ( photon energy range kev ) is limited by the signal integration time of these devices . 
for such detectors ,an active anti - coincidence system can not be used for background reduction because discarding complete readout frames correlated with an anti - coincidence signal would result in unacceptable dead time .detector triggers due to primary cosmic - ray particles can be discriminated in imaging detectors due to their high energy deposit and their pixel image pattern .however , interactions of primary cosmic - ray particles in the detector and surrounding passive material give rise to secondary x - rays and charged particles .these may in turn lead to detector triggers within the accepted energy range .such triggers contribute to the instrumental background noise because they can not be distinguished from valid events due to cosmic x - ray photons that were focused by the telescope mirror system onto the detector .the production of secondary photons and particles by the ubiquitous cosmic radiation is inevitable , but graded z shielding permits the shifting of the energy of secondaries from atomic deexcitation to low values .the probability that an atomic shell vacancy is filled by radiative transitions ( emission of fluorescence x - ray photons ) decreases with decreasing atomic charge number z ; by contrast , the probability for non - radiative transitions ( emission of auger and coster - kronig electrons ) increases .the average energy of secondaries from atomic deexcitation decreases with decreasing charge number z. therefore , in a graded z shield cosmic - ray induced fluorescence x rays produced in an outer , higher z layer of the shield are absorbed in an inner , lower z layer .subsequent atomic deexcitation in this inner layer gives rise to fluorescence photons and auger electrons with energies that are lower than the energies of the deexcitation particles from the outer layer ; in addition , there will be relatively more deexcitation electrons than x - rays .if the effective charge number z of the innermost shield layer is sufficiently low , ionization results in the generation of mainly auger electrons with energies below 1 kev , which can easily be stopped in a thin passivation layer on top of the detectors . ionization can also create rare fluorescence x rays of similarly low energy .a first set of graded z shield designs was studied by monte carlo simulation , using the pixe prototype software together with geant4 versions 9.1-patch 01 .the detector chip was modelled as a 450 m thick square slab of pure si with dimensions 5.6 cm 3.7 cm. the sensitive detector , which is integrated into the chip , covers an area of 2.88 cm 2.88 cm or 384 384 square pixels of size 75 m m .this detector model was placed inside a box - shaped shield ; figures of the actual design can be found in . in its simplest form ,the passive shield consisted only of a single 3 cm thick layer of pure cu ; the outer dimensions of the shield were 12.7 cm 10.8 cm 7.1 cm .a second version of the graded z shield had a 1 mm thick al layer inside the cu layer , and a third version in addition a 1 mm thick b layer inside the al layer .the physics configuration in the simulation application involved the library - based processes of the geant4 low energy electromagnetic package for electrons and photons , along with the improved version of the hadron ionisation process and the specific pixe software described in section [ sec_newpixe ] . 
among the ionization cross section models ,the ecpssr ones were selected for k , l and m shells ._ spectrum - x - gamma _ is expected be launched in 2012 into an l2 orbit .the background spectra due to cosmic - ray protons were simulated for the three different erosita graded z shield designs . a model for the detector response , taking into account fano statistics ( and hence the energy resolution ) and detector noise ,was then applied to obtain a simulated data sets .these simulated data were further processed with specialized data analysis algorithms .a comparison of the results is depicted in fig .[ comp_shield ] .the spectra represent the average background in a detector pixel .qualitatively , the pixe prototype implementation is working properly : protons produce the expected fluorescence lines with correct relative strengths . in case of a pure cu shield , shown in fig .[ comp_cu - shield ] , strong cu k and k fluorescence lines at about 8.0 and 8.9 kev are present in the background spectrum . in case of a combined cu - al shield , depicted in fig .[ comp_cu - al - shield ] , the cu fluorescence lines are absorbed in al , but ionization in al gives rise to a clear al k fluorescence line at about 1.5 kev . in case of the full graded z shield configuration , shown in fig .[ comp_cu - al - b4c - shield ] , the b layer absorbs the al line , but at the same is not a source of significant fluorescence lines , which is expected due to the low fluorescence yield of these light elements .except for an excess below 0.3 kev for the case of the cu - al - b graded z shield , which appears because the simulated detector model does not yet include a thin passivation layer , there is no significant difference in the continuum background for the three different designs .the inclusion of a thin layer for the treatment of the low energy background will be the object of a further detector design optimization .this application demonstrates that the developed software is capable of supporting concrete experimental studies .nevertheless , the concerns outlined in sections [ sec_transport ] and [ sec_vacancy ] should be kept in mind : while the present pixe simulation component can provide valuable information in terms of relative fluorescence yields from inner shells , the intrinsic limitations of the mixed transport scheme in which ionization is modelled and the lack of cross section calculations for outer shells prevent an analysis of the simulation outcome in absolute terms . .this paper presents a brief overview of the status , open issues and recent developments of pixe simulation with geant4 ; a more extensive report of the underlying concepts , developments and results is available in .these new developments represent a significant step forward regards pixe simulation with geant4 .they extend the capabilities of the toolkit by enabling the generation of pixe associated with k , l and m shells for protons and particles ; for this purpose a variety of cross section models are provided . 
the adopted data - driven strategy and the software designimprove the computational performance over previous geant4 models .the validity of the implemented models has been quantitatively estimated with respect to experimental data .an extensive ionisation cross section data library has been created as a by - product of the development process : it can be of interest to the experimental community for a variety of applications , not necessarily limited to pixe simulation with geant4 .some issues identified in the course of the development process are still open : they concern the consistency of pixe simulation in a mixed condensed - discrete particle transport scheme . in parallel ,a project is in progress to address design issues concerning co - working condensed and discrete transport methods in a general purpose simulation system . despite the known limitations related to mixed transport schemes , the software developments described in this paper provide sufficient functionality for realistic experimental investigationsthe authors express their gratitude to a. zucchiatti for valuable discussions on pixe experimental techniques and to s. cipolla for providing a prototype version of isics including updates for high energy k shell cross sections .the authors are grateful to the rsicc staff , in particular b. l. kirk and j. b. manneschmidt , for the support to assemble a ionization cross section data library resulting from the developments described in this paper .j. baro et al . , `` penelope : an algorithm for monte carlo simulation of the penetration and energy loss of electrons and positrons in matter '' _ nucl .b _ , vol .31 - 46 , 2005 .m. j. berger , `` monte carlo calculation of the penetration and diffusion of fast charged particles '' , in _ methods in computational physics _ , vol .1 , ed . b. alder , s. fernbach and m. rotenberg , academic press , new york , pp .135 - 215 , 1963 . s. t. perkins , d. e. cullen , and s. m. seltzer , `` tables and graphs of electron - interaction cross sections from 10 ev to 100 gev derived from the llnl evaluated electron data library ( eedl ) '' , ucrl-50400 vol . 31 , 1997 . s. j. cipolla , `` the united atom approximation option in the isics program to calculate k , l , and m - shell cross sections from pwba and ecpssr theory '' , _ nucl .b _ , vol . 261 , pp . 142 - 144 , 2007 . s. j. cipolla , `` an improved version of isics : a program for calculating k , l , and m - shell cross sections from pwba and ecpssr theory using a personal computer '' , _ comp .157 - 159 , 2007 .y. miyagawa et al ., `` analytical formulas for ionization cross sections and coster - kronig corrected fluorescence yields of the ll , l2 , and l3 subshells '' , _ nucl .b _ , vol .115 - 122 , 1988 .n. meidinger et al ., `` erosita camera design and first performance measurements with ccds '' , in `` space telescopes and instrumentation 2008 : ultraviolet to gamma ray '' , proc .spie , vol .7011 , 70110j , 2008 . | particle induced x - ray emission ( pixe ) is an important physical effect that is not yet adequately modelled in geant4 . this paper provides a critical analysis of the problem domain associated with pixe simulation and describes a set of software developments to improve pixe simulation with geant4 . the capabilities of the developed software prototype are illustrated and applied to a study of the passive shielding of the x - ray detectors of the german erosita telescope on the upcoming russian _ spectrum - x - gamma _ space mission . 
monte carlo , geant4 , pixe , ionization . |
the penetration of radiation into an optically thick distribution of gas is a feature of many astrophysical systems , ranging from scales as small as those of circumstellar disks to those as large as the damped lyman- absorbers observed along the sightlines to many quasars .numerical modelling of the propagation of radiation through the gas can greatly aid our efforts to understand the astrophysics of these systems , but frequently proves to be computationally challenging , owing to the high dimensionality of the problem . in the common case in which we have no useful spatial symmetries to exploit and wish to solve for the properties of the radiation field within different frequency bins ,the computational cost of determining the full spatial and angular distribution of the radiation field is of order , where is the number of resolution elements ( e.g. grid cells in an eulerian simulation , or particles in a smoothed particle hydrodynamics [ sph ] model ) , and where we have assumed that the desired angular resolution is comparable to the spatial resolution . for static problems , where the gas distribution is fixed and we need only to solve for the properties of the radiation field at a single point in time , it is currently possible to solve the full radiative transfer problem numerically even for relatively large values of ( see e.g. * ? ? ?* who post - process the results of an sph simulation with ) . however , if one is interested in dynamical problems , where the gas distribution is not fixed and the gas and radiation significantly influence one another , then the cost of solving for the radiation field after every single hydrodynamical timestep can easily become prohibitively large ( for a detailed discussion , see ) .for this reason , it is useful to look for simpler , more approximate techniques for treating the radiation that have a much lower computational cost , and that can therefore be used within hydrodynamical simulations without rendering these simulations overly expensive .one common simplification that nevertheless has a reasonably broad range of applicability is to ignore the re - emission of incident radiation within the gas . making this simplification means that rather than solving the full transfer equation , along multiple rays through the gas , where is the specific intensity at a frequency , and are the emissivity and opacity at the same frequency , and is the path length along the ray, one can instead solve the simpler equation , equation [ full_rt ] has the formal solution where is the specific intensity of the radiation field at the start of the ray ( e.g. at the edge of a gas cloud ) , is the source function , and is the optical depth along the ray .if and the optical depth is not too large , then it is reasonable to neglect the integral term , in which case we can write as which is the formal solution to equation [ simple_rt ] . by making this approximation, we therefore reduce the problem to one of determining optical depths along a large number of rays .often , this problem can then be further reduced to one of determining the column density of some absorber ( e.g. 
dust ) along each ray .unfortunately , although these simplifications make the problem easier to handle numerically , they do not go far enough , as the most obvious technique for calculating the column densities integrating along each ray still has a computational cost that scales as and hence is impractical in large simulations .this motivates one to look for computationally cheaper methods for determining the angular distribution of column densities seen by each resolution element within a large numerical simulation . in this paper, we introduce a computationally cheap and acceptably accurate method for computing these column densities , suitable for use within simulations of self - gravitating gas that utilize a tree - based solver for calculating gravitational forces .our method , which we dub , makes use of the large amount of information on the density distribution of the gas that is already stored within the tree structure to accelerate the calculation of the required column density distributions . in the next section ,we give a description of how our algorithm works , starting with a overview of how tree - based gravity solvers work in section [ overview ] , and then showing how it is possible to implement the method in section [ implement ] .we then present two stringent tests of the algorithm in section [ tests ] , both of which are typical of the conditions found in contemporary simulations of star formation .we discuss some of the potential applications of the method in section [ applications ] . in section [ performance ] , we give an overview of the computational efficiency of this scheme , and we summarise this paper in section [ summary ] .schematic diagram showing how the tree is constructed and used for the gravitational force calculation .a 3d oct - tree splits each parent node into eight daughter nodes , but in this 2d representation , we show only four of these nodes .the black lines show the boundaries of the tree nodes that would be constructed for the given ensemble of particles , shown as blue dots .the regions shaded in red denote the nodes that would be used to calculate the gravitational force as seen by the large blue and orange particle at the bottom of the diagram .note that in the case where the nodes being used contain only one particle ( a ` leaf ' node ) , the position of the particle itself is used to calculate the gravitational force arising from that node.,width=249 ] schematic diagram illustrating the concept . during the treewalk to obtain the gravitational forces , the projected column densities of the tree nodes ( the boxes shown on the right ) are mapped onto a spherical grid surrounding the particle for which the forces are being computed ( the `` target '' particle , shown on the left ) .the tree already stores all of the information necessary to compute the column density of each node , the position of the node in the plane of the sky of the target particle , and the angular extent of the node .this information is used to compute the column density map at the same time that the tree is being walked to calculate the gravitational forces . provided that the tree is already employed for the gravity calculation , the information required to create the steradian map of the column densities can be obtained for minimal computational cost . ]tree - based gravity solvers ( e.g. ) have long been a standard feature of -body and smoothed particle hydrodynamics codes ( e.g. 
) .more recently , their accuracy and speed has also seen them adopted in grid - based codes . in this paper , we describe a method whereby the information stored in the gravitational tree can be used to construct a steradian map of the column density . by constructing this map at the same timeas the tree is being `` walked '' to determine the gravitational forces , we can minimize the amount of additional communication necessary between cpus holding different portions of the tree . since the structure of the tree , and how it is walked , will be important for our discussion, we will first give a brief overview of how a tree - based gravity solver works .for the purpose of this discussion , we consider a solver based on an oct - tree , as used in e.g. the gadget sph code , although we note that solvers based on other tree structures , such as binary trees , do exist ( e.g. the binary tree employed by , which later found its way into other high profile studies , such as and ) .a tree - based solver starts by constructing a tree , splitting the computational volume up into a series of nested boxes , or ` nodes ' .the ` root ' node is the largest in the hierarchy and contains all of the computational points in the simulation. this large ` parent ' node is then split up into eight smaller ` daughter ' nodes as shown in figure [ fig : tree ] .the daughter nodes are further refined ( becoming parents themselves ) until each tree node contains only one particle ( illustrated in figure [ fig : tree ] by the blue dots ) .these smallest nodes at the very bottom of the hierarchy are typically termed ` leaves ' . at each point in the hierarchy , the tree stores the information about the contents of the parent node ( including its position , mass and size ) that will be needed during the gravitational force calculation .once the construction of the tree is complete , each particle is located in a leaf node situated at the bottom of a nested hierarchy of other nodes .once the tree is built , it can then be `` walked '' to get the gravitational forces .the idea behind the speed - up offered by the tree gravity solver over direct summation is very simple : any region of structured mass that is far away can be well approximated as a single , unstructured object , since the distances to each point in the structure are essentially the same . strictly , this is only true if the angular size of the structure is small , and so tree - codes tend to adopt an angle , rather than a distance , for testing whether or not structures can be approximated .this angle is often referred to as the `` opening angle '' of the tree , and we will denote it hereafter as . to walk the tree to obtain the gravitational force on a given particle ,the algorithm starts at the root node and opens it up , testing whether the daughter nodes subtend an angle of less than .if the angle is smaller than , the properties of the daughter nodes ( mass , position , centre of mass ) are used to calculate their contribution to the force . as such, any substructure within the daughter nodes is ignored , and the mass inside in the nodes is assumed to be uniformly distributed within their boundaries .if one or more of these nodes subtends an angle larger than , the nodes are opened and the process is repeated on their daughter nodes , and so on , until nodes are found that appear smaller than . 
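As a concrete illustration of the walk just described, the sketch below implements the Barnes-Hut acceptance test in Python with numpy. It is a minimal sketch rather than the actual GADGET-2 code: the `Node` container, the crude softening, and the code units are all illustrative, and multipole corrections are omitted.

```python
import numpy as np

G = 1.0  # gravitational constant in illustrative code units

class Node:
    """One node of an oct-tree: either a leaf holding a single particle
    or an internal node holding aggregate quantities of its daughters."""
    def __init__(self, mass, com, size, children=None):
        self.mass = mass                              # total mass in the node
        self.com = np.asarray(com, dtype=float)       # centre of mass
        self.size = size                              # side length of the cubic node
        self.children = children or []                # empty list => leaf

def accel_on(target_pos, node, theta=0.5, eps=1.0e-4):
    """Walk the tree and accumulate the acceleration at target_pos.

    A node is accepted as a single pseudo-particle if the angle it
    subtends, roughly size / distance, is smaller than the opening
    angle theta; otherwise its daughters are examined recursively."""
    dr = node.com - np.asarray(target_pos, dtype=float)
    dist = np.sqrt(np.dot(dr, dr)) + eps              # crudely softened distance
    if not node.children or node.size / dist < theta:
        # accept: treat the node (or leaf particle) as a point mass
        return G * node.mass * dr / dist**3
    # open: descend into the daughter nodes
    acc = np.zeros(3)
    for daughter in node.children:
        acc += accel_on(target_pos, daughter, theta, eps)
    return acc
```

The recursion visits exactly the nodes that the gravity calculation would use, which is what makes piggy-backing a column-density accumulation on the same walk so cheap.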
to increase the accuracy of the force calculation , the nodes often store multipole moments that account for the fact that the node is not a point mass , but rather a distributed object that subtends some finite angle ( e.g. see ) .these moments are calculated during the tree construction , for all levels of the node hierarchy except the leaves , since these are either well approximated as point masses as is the case for a stellar -body calculation or are sph particles , which have their own prescription for how they are distributed in space .the above method is sketched in figure [ fig : tree ] , which shows the tree structure in black , and the nodes , marked in red , that would be used to evaluate the gravitational force on the large blue particle with the orange highlight . in the cases where the nodes are leaves ( containing only a single particle ) , the position of the particle itself is used . as the total number of force calculationscan be substantially decreased in comparison to the number required when using direct summation , tree - based gravity solvers offer a considerable speed - up at the cost of a small diminution in accuracy . showed that for a distribution of self - gravitating particles , the computational cost of a tree - based solver scales as , compared to the scaling associated with direct summation .they also showed that the multipole moments allowed quite large opening angles , with values as large as 0.5 radians resulting in errors of less than a percent .our method makes use of the fact that each node in the tree stores the necessary properties for constructing a column density map .the mass and size of the node can be used to calculate the column density of the node , and its position and apparent angular size allow us to determine the region on the sky that is covered by the node .note also that column density , just like the total gravitational force , is a simple sum over the contributing material , meaning that it is independent of the order in which the contributions are gathered .just as the tree allows us to construct a force for each particle , we can also sum up the column density contributions of the nodes to create a steradian map of the column density during the tree - walk .a schematic diagram of how this works is shown in figure [ fig : treecoldia ] . the target particle the one currently walking the tree , and for which the map is being created is shown as the large dark blue particle on the left . around itwe show the spherical grid onto which the column densities are to be mapped .we see that the tree nodes , shown on the right , subtend some angle ( which is less than some adopted ) , and cover different pixels on the spherical grid . during the treewalk , the method simply maps the projection of the nodes onto the pixels for the particle being walked .schematic showing the overlap between a pixel on the sph particle s _ healpix _ sphere , and the tree node .the angular size of the pixels and nodes are denoted by and , respectively , and the distances between their centres are given by the orthogonal angles and .the diagram shows the case when the angle subtended by the tree node is greater than that of the pixels , and the two possible situations that can arise : a ) the pixel and tree node only partially overlap , and b ) the pixel is entirely covered by the tree node . in the former case , we work out the mass in the overlapping area , and convert it to a column density contribution by smearing it over the pixel s area . 
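To make the piggy-backing concrete, the following sketch (reusing the `Node` class from the previous sketch) deposits the projected column density of every accepted node onto a sky map during the same walk. It is only schematic: the pixel grid here is a crude latitude-longitude grid rather than the equal-area _healpix_ grid used in the actual implementation, and each node is dumped onto the single pixel containing its centre; the proper treatment of partial node-pixel overlaps is the subject of the following paragraphs.

```python
import numpy as np

def direction_to_pixel(dr, n_side=8):
    """Map a direction vector to a pixel index on a toy latitude-longitude
    grid with 2 * n_side**2 pixels.  The real implementation uses
    equal-area HEALPix pixels; this grid only keeps the sketch
    self-contained."""
    x, y, z = dr / np.linalg.norm(dr)
    theta = np.arccos(np.clip(z, -1.0, 1.0))             # polar angle
    phi = np.mod(np.arctan2(y, x), 2.0 * np.pi)          # azimuth
    it = min(int(theta / np.pi * n_side), n_side - 1)
    ip = min(int(phi / (2.0 * np.pi) * 2 * n_side), 2 * n_side - 1)
    return it * 2 * n_side + ip

def walk_with_treecol(target_pos, node, sky_map, theta_open=0.5):
    """Tree walk that deposits the column density of every accepted node
    onto the sky pixel containing its centre.  Partial overlaps between
    node projections and pixels are ignored in this sketch; leaf nodes
    would in practice use the particle's own projected extent rather
    than the leaf cell size."""
    dr = node.com - np.asarray(target_pos, dtype=float)
    dist = np.linalg.norm(dr)
    if dist == 0.0:
        return                                           # skip the target's own leaf
    if not node.children or node.size / dist < theta_open:
        sky_map[direction_to_pixel(dr)] += node.mass / node.size**2
        return
    for daughter in node.children:
        walk_with_treecol(target_pos, daughter, sky_map, theta_open)
```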
in the latter case, the pixel just obtains the full column density of the node . in the case where the angle subtended by the pixels is greater than the tree node ( not shown here ) , then obviously the tree node can also become totally covered by the pixel . in this case , the full mass of the node is smeared out over the pixel s area to define the column density contribution .full details of how the mapping is done in this implementation are given in section [ implement].,width=307 ] illustration showing how the nodes and pixels are assumed to interact in our implementation of the algorithm .for each tree node , a new co - ordinate system is created , in which the node s position vector is the -axis .the angular distance between the node centres , can then be described by two orthogonal angles , and , which allows us to define an overlap area .note that nodes and pixels have an area and respectively .full details are given in section [ implement].,width=307 ] the details of exactly how the nodes are mapped onto the grid depends on how accurate one needs the column density information to be .however , it should be noted that the tree structure is only an approximate representation of the underlying gas structure : it distributes the mass in a somewhat larger volume than is actually the case , and as a result , sharp edges tend to be displaced to the boundary of node . as such , column densities from the tree will always be approximate , and so a highly accurate mapping of the node column density projections is computationally wasteful . inwhat follows , we will describe a simple implementation of that is both reasonably accurate while at the same time requiring minimal computational cost .our mapping of the tree nodes to the pixels makes a number of assumptions regarding the shape and projection of the nodes and the pixels .these are : * the tree nodes are always seen as squares in the sky , regardless of their actual orientation . *the nodes are assumed to overlap the pixels as shown in figure [ fig : nodepro ] , such that we can define the overlapping region based on simple orthogonal co - ordinates in the plane of the sky .* we use the _ healpix _ algorithm to compute pixels that are equidistant on the sphere s surface and that have equal areas .we show a schematic diagram of the way the nodes are assumed to overlap in figure [ fig : pixdia ] .the tree nodes are taken to be squares with side length and likewise , the pixels onto which the column densities mapped are assumed to be squares with side length . as shown in the diagram , these dimensions are assumed to be equivalent to the angles subtended by the nodes and the pixels .overlap requires that and if this is the case , the lengths , and describing the overlapping area are then given by and when the pixels have a smaller angular extent than the nodes ( i.e. ) , or and when the nodes have a smaller angular extent than the pixels ( i.e. ) . 
by taking the minimum of the expression and either the node or pixel side length , we account for situations such as those shown in the right - hand panel in figure [ fig : pixdia ] , in which either the pixel is totally covered by the node , or the node is totally contained within the pixel .we can then calculate the contribution of the node to the pixel s column density from this expression is formed by considering the mass in the overlapping area given by .if the pixel is totally covered by the node , then it gets the full column density of the node .if the pixel is only partially covered by the node , then the mass in the overlapping region is smeared out over the area of the pixel , to create a new column density . if the node is totally contained within the pixel , then obviously all the mass from the node is smeared out over the pixel s area .clearly , the ability of and to describe the overlapping area breaks down near the poles , with the extreme case where a pixel directly over either of the poles can not be described by a . to account for this, we move to a co - ordinate system in which the tree node s position vector , describes a new -axis ( ) , such that the node is always located at ( 1 , 0 , 0 ) . to define the other two axis ( the new and axis ) ,we first define a control vector , , that is close to the node s position vector , displaced by a small amount in and . for the displacement we choose ( somewhat arbitrarily ) that decreases by for and increases by for , and that always increases by then define the new axis vectors from : the pixel unit vectors are then rotated into this co - ordinate system simply by taking the scalar product with each of the axes : a schematic diagram of how the pixels and nodes are defined in this new coordinate system is given in figure [ fig : nodepro ] . the angular distances and can then be defined simply from : and , to increase the speed of the algorithm , these inverse trigonometric functions can be made into look - up tables . it should be noted that after this rotation , the pixels and even the tree node itself are in general _ not _ aligned as they appear in figure [ fig : nodepro ] . in the above coordinate transform , the control vector determines how the new coordinate vectors and are orientated with respect to the new x axis , ( that is , how and are rotated _ around _ ) .in fact , it should also be stressed that the _ healpix _ pixels are not aligned as in figure [ fig : nodepro ] _ before _ the rotation , but in fact appear more diamond shaped ( as one can see in the maps in figures 5 - 9 ) .we found that the exact rotation is typically unimportant for the mapping , provided the angular resolution in the map is not significantly smaller than the opening angle used during the tree - walk .we discuss this further in section [ tests ] . in our discussionso far , we have referred only to ` nodes ' , and their properties , but it should be stressed that some of the nodes will be ` leaves ' . in our implementation , the leaves are sph particles , and as was mentioned above , it is customary to use the particle properties directly when evaluating the gravitational forces ( or in our case , the column density ) . as such, we adopt the exact particle position when considering the leaf nodes . 
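The overlap prescription just described can be written compactly as below. This is a hedged reconstruction, since the explicit formulas are garbled in this copy, but it reproduces the stated limiting cases: a fully covered pixel inherits the node's full column density, and a node fully contained within a pixel has its whole mass smeared over the pixel's area. Here `dx` and `dy` are the orthogonal angular offsets measured in the rotated, pole-safe coordinate system described above.

```python
def overlap_length(delta, l_node, l_pix):
    """1-D overlap of two aligned segments of angular lengths l_node and
    l_pix whose centres are separated by the angle delta."""
    half = 0.5 * (l_node + l_pix)
    if abs(delta) >= half:
        return 0.0
    # the minimum with the side lengths handles full containment
    return min(half - abs(delta), l_node, l_pix)

def node_to_pixel_contribution(sigma_node, l_node, l_pix, dx, dy):
    """Column density that a node of projected column density sigma_node
    and angular side l_node adds to a pixel of angular side l_pix, when
    their centres are offset by the orthogonal angles (dx, dy).

    The mass in the overlapping area, sigma_node * ox * oy, is spread
    over the pixel area l_pix**2.  A fully covered pixel then receives
    the full node column density, and a node fully inside the pixel has
    its whole mass smeared over the pixel."""
    ox = overlap_length(dx, l_node, l_pix)
    oy = overlap_length(dy, l_node, l_pix)
    return sigma_node * ox * oy / l_pix**2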
however, although sph particles have non - uniform radial column density profiles , we do not take this into account in our implementation , but rather treat the sph particles in the same manner as the other nodes , by assuming that they have a uniform column density , and project a square in the sky , rather than a circle . as our node to pixel mapping is based around mass conservation ( the concept behind equation [ sigmaadd ] ) , we can not simply use the smoothing length to define the square ( so in gadget 2 , or for the definition of in most other sph codes ) , but instead have to define to conserve area , giving , our motivation for treating the sph particles in this way , is that working out the true fraction of the sph particle s mass that falls within the pixels is computationally expensive , requiring a numerical integration over the overlapping areas , since the pixel and the sph particle have quite different shapes .our choice of the _ healpix _ method of pixelating the spheres around the sph particles was motivated by two main factors .first , the pixels in the _ healpix _ mapping are equal area , which simplifies any comparisons of the pixel properties between different maps .second , the equal - area property means that increasing the pixel number is equivalent to increasing the angular resolution of the map _ everywhere on the sphere_. this is not the case with the traditional latitude - longitude discretisation , for example , in which the pixels at the poles have a significantly smaller area than their counterparts at the equator .finally , for our sph code , we use the publicly available code gadget 2 , which uses an oct - tree .for this project , where we need to have control over the opening angle to show how it affects the results , we adopt the standard barnes and hut opening criterion rather than the ` relative error ' criterion suggested by .in this section we apply the algorithm to two very different types of test problem . in the first case , we consider a gas cloud that is an isolated sphere , with conditions similar to the dense cores found in the pipe nebula . for the second test problem , we consider a cloud that is a model of a turbulent molecular cloud , which is representative of the environment in which prestellar cores form .these two different set - ups are typical of those used in contemporary simulations of star and cluster formation . in what follows, we will use hammer projections to display the steradian maps of column densities seen from given locations within the cloud .there are four types of map that we will show .the first is the ` true ' column density map obtained by summing up the contribution from every single sph particle in the simulation . for this type of map, it is customary to take into account the radial density profile of the sph particles , as described by their smoothing kernel .however since this is not done in the implementation we choose not to do this here for the sph maps .instead we assume that each sph particle has a constant column density defined simply by its mass , and radial extent ( that is , the particle s smoothing length ) .the second type of map is the ` pixel - averaged ' map , whereby we pixelate the ` true ' map into a set number of _ healpix _ pixels , by averaging over the points from the hammer projection that lie inside each pixel .this type of map provides a more useful measure than the ` true ' maps , as the results of are also stored on _ healpix _ grids . 
as such , could be said to be working perfectly if it can recover the same column densities as those shown in the ` pixel - averaged ' maps .the third type of map is simply the column density map produced by .finally , our last type of map describes the error in the method .we define a fractional error for each pixel by where is the column density of pixel in the ` pixel - averaged ' map , and is the column density in pixel recovered by . in our tests , we will also explore the two intrinsic resolutions that are at play in our implementation of .the first is the number of pixels in the _ healpix _ sphere that surrounds the sph particles , which represents the ability of the sph particle to record the column density information that comes from the tree walk .the second resolution at play is the opening angle , , as this determines how accurately the tree is forced to look at the structure in the cloud .together these determine the accuracy and level of detail that is present in the map .the column density hammer projections for a particle sitting on the edge of a centrally condensed sphere .the upper left panel shows the column density projection from all sph particles in the simulation volume .the other panels then show the same steradian map pixelated into 48 , 192 , and 768 _ healpix _ pixels .the pixel values are simply averages of hammer projection points that lie inside each pixel s boundary.,width=326 ] the maps recovered by our implementation , for the column density distribution shown in figure [ spheretrue ] .the maps are shown for two different measures of the resolution : the opening angle , , of the tree ( a measure of how well can ` see ' the cloud ) , and the number of pixels in the healpix map ( a measure of how accurately s results are stored).,width=326 ] in the first test problem , we consider a particle located at the edge of a spherical , isothermal cloud with a mean mass density of g , a temperature of 10 k and a mass of 1.33 .the cloud is modelled with 10,000 sph particles , and hence the mass resolution is comparable to that used in contemporary models of cluster formation ( e.g. ) .the cloud is gravitationally unbound , but confined by an external pressure of and has been allowed to settle into hydrostatic equilibrium .it centrally condenses into a stable bonnor - ebert sphere with a central density of g and an outer density of g at a radius of 0.09 pc .the column density map of the sky , as seen by the particle at the edge , is shown in hammer projections in figure [ spheretrue ] . in this figure, we show the ` true ' column density map , as calculated from the individual sph particles that make up the cloud , and also the map pixelated into 48 , 192 , and 768 _ healpix _ pixels to create the ` pixel - averaged ' maps described above . on the left - hand side of the hammer projectionsone can see the high column density of the centrally condensed core of the sphere , and on the right - hand side of each map one can see the edge of the sphere , and the empty void beyond .although such a simple cloud geometry may seem trivial , it actually represents a stringent test of the algorithm .first , the tree itself is made up of a series of boxes , and so the intrinsic geometries of the cloud and tree are quite different .second , we would expect that the rapidly evolving gradient in the column densities associated with the sharp edge of the cloud will be difficult for the tree to capture , as the edges of the nodes will tend to be in a different place , as discussed above . 
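Before turning to the results, the 'pixel-averaged' benchmark map and the per-pixel fractional error of equation [frac_error] can be computed along the following lines. The exact error formula is lost in this extraction, so the sign convention used here (TreeCol minus reference, divided by the reference) is an assumption.

```python
import numpy as np

def pixel_average(true_map_points, pixel_index):
    """Average the finely sampled 'true' column densities that fall
    inside each coarse pixel, giving the 'pixel-averaged' reference map.

    true_map_points : column densities on a fine angular grid
    pixel_index     : integer array of the same shape giving, for every
                      fine sample, the coarse pixel it lies in"""
    n_pix = int(pixel_index.max()) + 1
    sums = np.bincount(pixel_index.ravel(),
                       weights=true_map_points.ravel(), minlength=n_pix)
    counts = np.bincount(pixel_index.ravel(), minlength=n_pix)
    return sums / np.maximum(counts, 1)

def fractional_error(sigma_treecol, sigma_averaged):
    """Per-pixel fractional error of the TreeCol map relative to the
    pixel-averaged reference map."""
    return (sigma_treecol - sigma_averaged) / sigma_averaged
```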
despite these difficulties , the algorithm is able to capture the main features of the cloud fairly well .figure [ spheretreecol ] shows the representation of the sky maps given in figure [ spheretrue ] for two different tree opening angles , and .we can see that the column density towards the centre of the cloud is well represented , and that the maps have the same overall features as those in figure [ spheretrue ] : high column density on one side , and a fairly sharp decline on the other side where the column density falls to zero .although the images in figure [ spheretreecol ] give an idea of the structure and boundaries that is able to reproduce , it is difficult to gauge the quantitative accuracy of the method . a better representation is shown in figure [ sphereerror ] , where we plot the relative error in the maps .here we see that the error is typically less than 10 percent when the column density is high , but can be as large as around 100 percent when the column density is low , or approaching zero . the high error ( around 50 percent ) in the middle of the map ( and the outer extremities ) comes from the fact that the boundaries of the tree nodes are not necessarily aligned with the edge of the particle distribution . as we increase s ability to see the structure in the cloud , by reducing the opening angle , we see that the error at the boundary decreases .overall , the best representation of the cloud s boundary ( and indeed the cloud itself ) is found in the 48 pixel map that was run with a tree opening angle of 0.3 .this is unsurprising , as the low resolution of the pixel - averaged map is also unable to capture the sharp fall in the column density at the cloud s boundary , while at the same time the smaller opening angle ensures that the pixels on the boundary are not assigned mass that belongs to further inside the cloud . in general , we see that the smaller opening angles tend to produce better maps for a given pixellation .this is expected , since as the opening angle is reduced , the properties of the tree nodes become closer to the actual distribution of the particles .this is most obviously apparent in the 768 pixel map , where we see that the map obtained for contains artefacts from the underlying boxy structure of the tree , while the map is much smoother . in the maps with a lower number of pixels ,these features are not so apparent as the structure of tree is more smeared out .perhaps a more useful measure of the ability of to sample its surroundings is the error in the average column density in the map , as given in table [ meancol ] . for the spherical cloud set - up, we find that the average column is between 4.2 and 7 percent higher than the average in the true map , with the lowest resolution run ( , n = 48 ) having the largest overall error , and the highest resolution run ( , n = 768 ) having the lowest error .the fact that these errors are so low reflects the fact that the mean is dominated by the high column density regions , which are recovered well by in all the resolutions we study .the relative error ( computed according to equation [ frac_error ] ) based on the difference between the maps shown in figure [ spheretreecol ] and the pixellated maps shown in figure [ spheretrue].,width=326 ] the column density sky - map as seen by a low - density particle in a turbulent molecular cloud simulation . 
as in figure [ spheretrue ] , the upper - left panel is obtained by adding up the contributions from all sph particles in the computational volume ( excluding the particle from which the sky is viewed ) .the other panels then show a ` pixel - averaged ' view of the cloud , as would be seen if we only had 48 , 192 and 768 pixels in our map.,width=326 ] the left - hand panels show the maps for the turbulent cloud set - up shown in figure [ turbtrue ] , for 48 , 192 and 768 pixels in the map .all maps were produced using a tree opening angle .the right - hand panels show the relative error in the maps.,width=326 ] .a summary of the mean column densities in the cloud models presented in sections [ sec : spheretest ] and [ sec : turbtest ] , for both the true map ( the first line ) and each of the maps . for the results we give the number of pixels used in the column density map ( n ) , the opening angle of the tree ( ) , and the percentage error compared to the true map from the sph particles .note that due to the way the pixel - averaged maps are obtained ( see section [ tests ] ) , their average column density is identical to that in the full sph map , and so we do not include it here . [ cols="^,^,^,^,^ " , ] [ scaling ]although the parallel efficiency of gadget 2 for this problem is not particularly high to begin with , it is clear that the use of does not have very much influence on the scaling , suggesting that the additional communications overhead is not a significant problem in comparison to the inherent difficulties involved in properly load - balancing a simulation of this type . however , as with any computational method , there are drawbacks to our approach .one of the main downsides of the method is that it can introduce a significant memory overhead. the exact memory requirements of can vary considerably , depending on the type of tree employed by the code , how the column density information is being used , and whether the code is parallelized using the message passing interface ( mpi ) protocol , or using the openmp protocol . in gadget 2 for example a code that is mpi parallelized copies of the sph particles on a given cpu are sent to all the other cpus to get the contributions to the gravitational force from the particles that reside there .an implementation in gadget 2 must then store two copies of the column density map for each particle : one that is broadcast to the other cpus to pick up their contributions , and one that resides on the home cpu that collects the local contributions and stores the final total .other tree codes parallelized using mpi work differently , sending the necessary information _ from _ the other cpus to the cpu with the target particle , requiring that only one map be stored per particle .in fact , if the column density map is only needed once for example , to compute a mean extinction then only the particle currently walking the tree needs a column density map . in this casethe information stored in the map can be used at the end of particle s walk , and the map can then be cleared in preparation to be re - used in the next particle s tree - walk . 
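A sketch of that memory-lean usage pattern, reusing `walk_with_treecol` from the earlier sketch: a single scratch map per task is filled, consumed (here reduced to a mean column density; converting it to a mean extinction would require a dust-model-dependent factor not shown), and cleared before the next particle's walk.

```python
import numpy as np

def loop_over_particles(particles, root, n_pix, theta_open=0.5):
    """Memory-lean variant: one scratch map per task, cleared and reused
    for every particle, instead of one or two stored maps per particle.
    n_pix is the number of sky pixels (2 * n_side**2 for the toy grid
    used earlier, or 12 * nside**2 for a HEALPix map)."""
    scratch = np.zeros(n_pix)
    mean_sigma = np.empty(len(particles))
    for i, pos in enumerate(particles):
        scratch[:] = 0.0                     # clear before the next walk
        walk_with_treecol(pos, root, scratch, theta_open)
        # with equal-area (HEALPix-like) pixels the plain mean is the
        # solid-angle average of the column density seen by particle i
        mean_sigma[i] = scratch.mean()
    return mean_sigma
```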
so depending on the application , and on the code , the memory requirements for can be anywhere from one map per parallel task , to two maps per particle .we have present a new tree - based technique for obtaining a full steradian map of the column densities at every location in numerical fluid simulations .the method piggy - backs on a tree - based gravitational force calculation , by making use of the information that is already stored in the tree namely the mass , position , and size of the tree nodes to construct a map of the column density distribution in the sky as seen by each fluid element .as the underlying algorithm is based on the tree , the method inherits the same scaling as the tree code .the fact that the method makes use of physical quantities that are already stored in the tree means that it is simple to implement , and requires only minimal modification to the underlying tree algorithm . in the case where the tree has been parallelised ,we find that the inclusion of does not significantly affect the parallel scaling of the code . in this paper, we describe a simple implementation of that we find to yield column density maps that are accurate to better than 10 percent on average . in this implementation which in our case was made within the publicly available sph code gadget 2 we adopt the _ healpix _ pixelisation scheme to define the pixellated map in which the column densities for each particle are stored . as an example application ofwe show how the method can be used to calculate the dust heating of prestellar cores by the interstellar radiation field .the results are compared with those from the monte carlo radiative transfer code radmc-3d . comparing our lowest resolution results 48 pixels in the steradian _ healpix _ map and a tree opening angle of 0.5 to a 20 million photon packet radmc-3d calculation , we find that the two methods yield radial dust temperature profiles that agree to within 0.5k .we also discuss some other applications in which we expect to be useful , such as the attenuation of uv radiation and its effect on the chemical and thermal balance of molecular clouds or the x - ray heating of the intergalactic medium .we would like to thank mordecai - mark mac low , tom abel , gabriel aorve , and lszl szcs , for many interesting discussions regarding the method , and help with assembling the final manuscript .the authors acknowledge financial support from the landesstiftung baden - wrrtemberg via their program internationale spitzenforschung ii ( grant p - ls - spii/18 ) , from the german bundesministerium fr bildung und forschung via the astronet project star format ( grant 05a09vha ) , from the dfg under grants no .kl1358/10 and kl1358/11 , and via the sfb 881 `` the milky way galaxy '' , as well as from a frontier grant of heidelberg university sponsored by the german excellence initiative .the simulations reported on in this paper were primarily performed using the _ kolob _ cluster at the university of heidelberg , which is funded in part by the dfg via emmy - noether grant ba 3706 . | we present _ treecol _ , a new and efficient tree - based scheme to calculate column densities in numerical simulations . knowing the column density in any direction at any location in space is a prerequisite for modeling the propagation of radiation through the computational domain . _ treecol _ therefore forms the basis for a fast , approximate method for modelling the attenuation of radiation within large numerical simulations . 
it constructs a _ healpix _ sphere at any desired location and accumulates the column density by walking the tree and by adding up the contributions from all tree nodes whose line of sight contributes to the pixel under consideration . in particular when combined with widely - used tree - based gravity solvers the new scheme requires little additional computational cost . in a simulation with resolution elements , the computational cost of _ treecol _ scales as , instead of the scaling of most other radiative transfer schemes . _ treecol _ is naturally adaptable to arbitrary density distributions and is easy to implement and to parallelize , particularly if a tree structure is already in place for calculating the gravitational forces . we describe our new method and its implementation into the sph code gadget 2 . we discuss its accuracy and performance characteristics for the examples of a spherical protostellar core and for the turbulent interstellar medium . we find that the column density estimates provided by _ _ are on average accurate to better than 10 percent . in another application , we compute the dust temperatures for solar neighborhood conditions and compare with the result of a full - fledged monte carlo radiation - transfer calculation . we find that both methods give very similar answers . we conclude that _ treecol _ provides a fast , easy to use , and sufficiently accurate method of calculating column densities that comes with little additional computational cost when combined with an existing tree - based gravity solver . methods : numerical radiative transfer |
the discovery of new models of quantum computation ( qc ) , such as the one - way quantum computer and the projective measurement - based model , have opened up new experimental avenues toward the realisation of a quantum computer in laboratories . at the same timethey have challenged the traditional view about the nature of quantum computation . since the introduction of the quantum turing machine by deutsch , unitary transformations plays a central rle in qc .however , it is known that the action of unitary gates can be simulated using the process of quantum teleportation based on projective measurements - only .characterizing the minimal resources that are sufficient for this model to be universal , is a key issue .resources of measurement - based quantum computations are composed of two ingredients : ( ) a universal family of observables , which describes the measurements that can be performed ( ) the number of ancillary qubits used to simulate any unitary transformation .successive improvements of the upper bounds on these minimal resources have been made by leung and others .these bounds have been significantly reduced when the state transfer ( which is a restricted form of teleportation ) has been introduced : one two - qubit observable ( ) and three one - qubit observables ( , and ) , associated with only one ancillary qubit , are sufficient for simulating any unitary - based qc .are these resources minimal ? in , a sub - family of observables ( , , and ) is proved to be universal , however two ancillary qubits are used to make this sub - family universal .these two results lead to an open question : is there a trade - off between observables and ancillary qubits in measurement - based qc ? in this paper , we reply in the negative to this open question by proving that the sub - family is universal using only one ancillary qubit , improving the upper bound on the minimal resources required for measurement - based qc .the simulation of a given unitary transformation by means of projective measurements can be decomposed into : * ( _ step of simulation _ ) first , is probabilistically simulated up to a pauli operator , leading to , where is either identity or a pauli operator , or . *( _ correction _ ) then , a corrective strategy consisting in combinig conditionally steps of simulation is used to obtain a non - probabilistic simulation of .teleportation can be realized by two successive bell measurements ( figure [ fig : telep ] ) , where a bell measurement is a projective measurement in the basis of the bell states , where .a step of simulation of is obtained by replacing the second measurement by a measurement in the basis ( figure [ fig : gtelep ] ) . up to a pauli operator ]if a step of simulation is represented as a probabilistic black box ( figure [ fig : blackbox ] , left ) , there exists a corrective strategy ( figure [ fig : blackbox ] , right ) which leads to a full simulation of .this strategy consists in conditionally composing steps of simulation of , but also of each pauli operator .a similar step of simulation and strategy are given for the two - qubit unitary transformation ( _ controlled_- ) in .notice that this simulation uses four ancillary qubits . 
as a consequence ,since any unitary transformation can be decomposed into and one - qubit unitary transformations , any unitary transformation can be simulated by means of projective measurements only .more precisely , for any circuit of size with basis and all one - qubit unitary transformations and for any , projective measurements are enough to simulate with probability greater than . approximative universality , based on a finite family of projective measurements , can also be considered .leung has shown that a family composed of five observables is approximatively universal , using four ancillary qubits .it means that for any unitary transformation , any and any , there exists a conditional composition of projective measurements from leading to the simulation of a unitary transformation with probability greater than and such that . in order to decrease the number of two - qubit measurements four in and the number of ancillary , an new scheme called _ state transfer _ has been introduced .the state transfer ( figure [ fig : statetrans ] ) replaces the teleportation scheme for realizing a step of simulation .composed of one two - qubit measurements , two one - qubit measurements , and using only one ancillary qubit , the state transfer can be used to simulate any one - qubit unitary transformation up to a pauli operator ( figure [ fig : statetransgene ] ) .for instance , simulations of and see section [ sec : unituniv ] for definitions of and are represented in figure [ fig : simulh ] . moreovera scheme composed of two two - qubit measurements , two one - qubit measurements , and using only one ancillary qubit can be used to simulated up to a pauli operator ( figure [ fig : simulcnot ] ) .since is a universal family of unitary transformations , the family of observables is approximatively universal , using one ancillary qubit .this result improves the result by leung , since only one two - qubit measurement and one ancillary qubit are used , instead of four two - qubit measurements and four ancillary qubits .moreover , one can prove that at least one two - qubit measurement and one ancillary qubit are required for approximative universality .thus , it turns out that the upper bound on the minimal resources for measurement - based qc differs form the lower bound , on the number of one - qubit measurements only . and up to a pauli operator.,title="fig : " ] and up to a pauli operator.,title="fig : " ] up to a pauli operator .] in , it has been shown that the number of these one - qubit measurements can be decreased , since the family , composed of one two - qubit and only two one - qubit measurements , is also approximatively universal , using _ two _ ancillary qubit . the proof is based on the simulation of -measurements by means of and measurements ( figure [ fig : xsimul ] ) .this result leads to a possible _ trade - off _ between the number of one - qubit measurements and the number of ancillary qubits required for approximative universality .-measurement simulation ] in this paper , we meanly prove that the family is approximatively universal , using only one ancillary qubit .thus , the upper bound on the minimal resources required for approximative universality is improved , and moreover we answer the open question of the trade - off between observables and ancillary qubits . 
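The specific observables of the state-transfer scheme are garbled in this copy, so the numpy sketch below uses the standard construction (an X measurement on the ancilla, a Z⊗Z measurement on the pair, and an X measurement on the source qubit), which matches the resource counts quoted in the text: one two-qubit and two one-qubit measurements, and a single ancillary qubit. It verifies numerically that the input state reappears on the ancilla up to an outcome-dependent Pauli correction.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def measure(state, observable, rng):
    """Projective measurement of a +/-1 observable on a pure state.
    Returns (outcome, post-measurement state)."""
    p_plus = (np.eye(len(state)) + observable) / 2.0
    prob_plus = np.real(state.conj() @ p_plus @ state)
    if rng.random() < prob_plus:
        outcome, new = +1, p_plus @ state
    else:
        outcome, new = -1, (np.eye(len(state)) - observable) / 2.0 @ state
    return outcome, new / np.linalg.norm(new)

rng = np.random.default_rng(0)
psi = np.array([0.6, 0.8j])                       # arbitrary (normalised) source state
state = np.kron(psi, np.array([1.0, 0.0]))        # qubit 1 = source, qubit 2 = ancilla |0>

# state transfer: X on the ancilla, Z(x)Z on the pair, X on the source
s1, state = measure(state, np.kron(I2, X), rng)
s2, state = measure(state, np.kron(Z, Z), rng)
s3, state = measure(state, np.kron(X, I2), rng)

# qubit 1 is now in an X eigenstate, so the state factorises; read off
# the qubit-2 factor from the first row of the reshaped state vector
rho = state.reshape(2, 2)                         # rows: qubit 1, cols: qubit 2
phi = rho[0] / np.linalg.norm(rho[0])

# the ancilla holds X^a Z^b |psi> with a = (1-s2)/2, b = (1-s1*s3)/2;
# undo the Pauli and compare with the input state
a, b = (1 - s2) // 2, (1 - s1 * s3) // 2
recovered = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ phi
print("fidelity with the input state:", abs(np.vdot(psi, recovered)))   # ~1
```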
notice that we prove that the trade - off conjectured in does not exist , but another trade - off between observables and ancillary qubits may exist since the bounds on the minimal resources for measurement - based quantum computation are not tight .there exist several universal families of unitary transformations , for instance is one of them : , , we prove that the family is also approximatively universal : [ thm : dd ] is approximatively universal . the proof is based on the following properties .let be the rotation of the bloch sphere about the axis through an angle .[ univer : rotdef ] if is a real unit vector , then for any , .[ univer : approxrot ] for a given vector of the bloch sphere , if is an irrational multiple of , then for any and any , there exists such that [ univer : decomprot ] if and are non parallel vectors of the bloch sphere , then for any one - qubit unitary transformation , there exists such that : [ univer : irra ] if is not an integer multiple of and , then either or is an irrational multiple of ._ proof of theorem [ thm : dd ] : _first we prove that any -qubit unitary transformation can be approximated by and .consider the unitary transformations , , .notice that is , up to an unimportant global phase , a rotation by radians around axis on the block sphere : [ cols="<,^,<,^ , < " , ] is a rotation of the bloch sphere about an axis along and through the angle .thus , for any and any , there exists such that since and are non - parallel , any one - qubit unitary transformation , according to proposition [ univer : approxrot ] , can be decomposed into rotations around and : there exist such that finally , for any and , there exist such that thus , any one - qubit unitary transformation can be approximated by means of , and . since and , the family approximates any one - qubit unitary transformation . with the additional gate ,the family is approximatively universal .in , the family of observables is proved to be approximatively universal using two ancillary qubits .we prove that this family requires only one ancillary qubit to be universal : [ thm : obsmin ] is approximatively universal , using one ancillary qubit only .the proof consists in simulating the unitary transformations of the universal family .first , one can notice that can be simulated up to a pauli operator , using measurements of , as it is depicted in figure [ fig : simulh ] .so , the universality is reduced to the ability to simulate and the pauli operators pauli operators are needed to simulated , but also to perform the corrections required by the corrective strategy ( figure [ fig : blackbox ] ) .[ lem : simulcz ] for a given 2-qubit register and one ancillary qubit , the sequence of measurements according to , , , and ( see figure [ fig : simulcz ] ) simulates on qubits , up to a pauli operator .the resulting state is located on qubits and . ] _ proof : _ one can show that , if the state of the register is before the sequence of measurements , the state of the register after the measurements is , where and s are the classical outcomes of the sequence of measurements . in order to simulate pauli operators , a new scheme , different from the state transfer , is introduced .[ lem : simulz ] for a given qubit and one ancillary qubit , the sequence of measurements , , and ( figure [ fig : simulz ] ) simulates , on qubit , the application of with probability and with probability . 
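Before the proof given below, it is worth noting how such a fifty-fifty primitive is used: the corrective strategy is plain repeat-until-success. The sketch models only the input/output behaviour stated in the lemma (Z applied with probability 1/2, identity otherwise, and a failure leaves the state unchanged so the attempt can simply be repeated), since the concrete measurement sequence is garbled in this copy.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)

def probabilistic_z(state, rng):
    """Stand-in for the three-measurement primitive: with probability
    1/2 it applies Z and reports success, otherwise it leaves the state
    untouched and reports failure."""
    if rng.random() < 0.5:
        return Z @ state, True
    return state, False

def apply_z_exactly(state, rng):
    """Corrective strategy: repeat the probabilistic primitive until it
    succeeds.  The number of attempts is geometric with mean 2."""
    attempts, done = 0, False
    while not done:
        state, done = probabilistic_z(state, rng)
        attempts += 1
    return state, attempts

rng = np.random.default_rng(1)
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
out, n = apply_z_exactly(psi, rng)
print(n, out)          # out equals Z @ psi after a geometric number of tries
```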
, scaledwidth=50.0% ] _ proof : _ let be the state of qubit .after the first measurement , the state of the register is where is the classical outcome of the measurement .since , the state of the register is : let be the outcome of the last measurement , on qubit .if then state of the qubit is , and otherwise .one can prove that these two possibilities occur with equal probabilities .[ lem : simulx ] for a given qubit and one ancillary qubit , the sequence of measurements , , and ( figure [ fig : simulx ] ) simulates , on qubit , the application of with probability and with probability ., scaledwidth=50.0% ] the proof of lemma [ lem : simulx ] is similar to the proof of lemma [ lem : simulz ] ._ proof of theorem [ thm : obsmin ] : _ first notice that the family of unitary transformations is approximatively universal since is universal . and can be simulated up to a pauli operator ( lemmas [ lem : simulcz ] ) .the universality of the family of observables is reduced to the ability to simulate any pauli operators .lemma [ lem : simulx ] ( resp .lemma [ lem : simulz ] ) , shows that ( ) can be simulated with probability , moreover if the simulation fails , the resulting state is same as the original one .thus , this simulation can be repeated until a full simulation of ( ) . finally , can be simulated , up to a global phase , by combining simulations of and .thus , is approximatively universal using only one ancillary qubit . have proved a new upper bound on the minimal resources required for measurement - based qc : one two - qubit , and two one - qubit observables are universal , using one ancillary qubit only .this new upper bound has experimental applications , but allows also to prove that the trade - off between observables and ancillary qubits , conjectured in , does not exist .this new upper bound is not tight since the lower bound on the minimal resources for this model is one two - qubit observable and one ancillary qubit . | we improve the upper bound on the minimal resources required for measurement - based quantum computation . minimizing the resources required for this model is a key issue for experimental realization of a quantum computer based on projective measurements . this new upper bound allows also to reply in the negative to the open question presented in about the existence of a trade - off between observable and ancillary qubits in measurement - based quantum computation . |
training neural networks requires setting several hyper - parameters to adequate values : number of hidden layers , number of neurons per hidden layer , learning rate , momentum , dropout rate , etc .although tuning such hyper - parameters via parameter search has been recently investigated by , doing so remains extremely costly , which makes it imperative to develop more efficient techniques . of the many hyper - parameters , those that determine the network s architecture are among the hardest to tune , especially because changing them during training is more difficult than adjusting more dynamic parameters such as the learning rate or momentum .typically , the architecture parameters are set once and for all before training begins .thus , assigning them correctly is paramount : if the network is too small , it will not learn well ; if it is too large , it may take significantly longer to train while running the risk of overfitting .networks are therefore usually trained with more parameters than necessary , and pruned once the training is complete .this paper introduces divnet , a new technique for reducing the size of a network . divnetdecreases the amount of redundancy in a neural network ( and hence its size ) in two steps : first, it samples a diverse subset of neurons ; then , it merges the remaining neurons with the ones previously selected .specifically , divnetmodels neuronal diversity by placing a determinantal point process ( dpp ) over neurons in a layer , which is then used to select a subset of diverse neurons .subsequently , divnet``fuses '' information from the dropped neurons into the selected ones through a reweighting procedure .together , these steps reduce network size ( and act as implicit regularization ) , without requiring any further training or significantly hurting performance .divnetis fast and runs in time negligible compared to the network s prior training time .moreover , it is agnostic to other network parameters such as activation functions , number of hidden layers , and learning rates . for simplicity ,we describe and analyze divnetfor feed - forward neural networks , however divnetis _ not _ limited to this setting . indeed , since divnetoperates on a layer fully connected to the following one in a network s hierarchy, it applies equally well to other architectures with fully connected layers .for example , it can be applied without any further modification to deep belief nets and to the fully - connected layers in convolutional neural networks . as these layers are typically responsible for the large majority of the cnns memory footprint , divnetis particularly well suited for such networks .[ [ contributions . 
] ] contributions .+ + + + + + + + + + + + + + the key contributions of this paper are the following : =1em introduction of dpps as a flexible , powerful tool for modeling layerwise neuronal diversity ( [ sec : pruning ] ) .specifically , we present a practical method for creating dpps over neurons , which enables diversity promoting sampling and thereby leads to smaller network sizes .a simple but crucial `` fusing '' step that minimizes the adverse effects of removing neurons .specifically , we introduce a reweighting procedure for a neuron s connections that transfers the contributions of the pruned neurons to the ones that are retained ( [ sec : reweighting ] ) .the combination of both ideas is called divnet .we perform several experiments to validate divnetand compare to previous neuron pruning approaches , which divnetconsistently outperforms . notably , divnet s reweighting strategy benefits other pruning approaches . [ [ related - work . ] ] related work .+ + + + + + + + + + + + + due to their large number of parameters , deep neural networks typically have a heavy memory footprint .moreover , in many deep neural network models parameters show a significant amount of redundancy .consequently , there has been significant interest in developing techniques for reducing a network s size without penalizing its performance . a common approach to reducingthe number of parameters is to remove connections between layers . in , connections are deleted using information drawn from the hessian of the network s error function . the number of parameters by analyzing the weight matrices , and applying low - rank factorization to the final weight layer . remove connections with weights smaller than a given threshold before retraining the network .these methods focus on deleting parameters whose removal influences the network the least , while divnetseeks diversity and merges similar neurons ; these methods can thus be used in conjunction with ours .although methods such as that remove connections between layers may also delete neurons from the network by removing all of their outgoing or incoming connections , it is likely that the overall impact on the size of the network will be lesser than approaches such as divnetthat remove entire neurons : indeed , removing a neuron decreases the number of rows or columns of the weight matrices connecting the neuron s layer to both the previous and following layers .convolutional neural networks replace fully - connected layers with convolution and subsampling layers , which significantly decreases the number of parameters .however , as cnns still maintain fully - connected layers , they also benefit from divnet .closer to our approach of reducing the number of hidden neurons is , which evaluates each hidden neuron s importance and deletes neurons with the smaller importance scores . in , a neuron is pruned when its weights are similar to those of another neuron in its layer , leading to a weight update procedure that is somewhat similar in idea ( albeit simpler ) to our reweighting step : where removes neurons with equal or similar weights , we consider the more complicated task of merging neurons that , as a group , perform redundant calculations based on their activations .other recent approaches consider network compression without pruning : in , a new , smaller network is trained on the outputs of the large network ; use hashing to reduce the size of the weight matrices by forcing all connections within the same hash bucket to have the same weight . 
and that networks can be trained and run using limited precision values to store the network parameters , thus reducing the overall memory footprint .we emphasize that divnet s focus on neuronal diversity is orthogonal and complementary to prior network compression techniques .consequently , divnetcan be combined , in most cases trivially , with previous approaches to memory footprint reduction .in this section we introduce our technique for modeling neuronal diversity more formally .let denote the training data , a layer of neurons , the activation of the -th neuron on input , and the activation vector of the -th neuron obtained by feeding the training data through the network . to enforce diversity in layer ,we must determine which neurons are computing redundant information and remove them . doing so requires finding a maximal subset of ( linearly ) independent activation vectors in a layer and retaining only the corresponding neurons . in practice , however , the number of items in the training set ( or the number of batches ) can be much larger than the number of neurons in a layer , so the activation vectors are likely linearly independent .merely selecting by the maximal subset may thus lead to a trivial solution that selects all neurons . reducing redundancytherefore requires a more careful approach to sampling .we propose to select a subset of neurons whose activation patterns are diverse while contributing to the network s overall computation ( i.e. , their activations are not saturated at 0 ) .we achieve this diverse selection by formulating the neuron selection task as sampling from a determinantal point process ( dpp ) .we describe the details below .dpps are probability measures over subsets of a ground set of items .originally introduced to model the repulsive behavior of fermions , they have since been used fruitfully in machine - learning .interestingly , they have also been recently applied to modeling inter - neuron inhibitions in neural spiking behavior in the rat hippocampus .dpps present an elegant mathematical technique to model diversity : the probability mass associated to each subset is proportional to the determinant ( hence the name ) of a dpp kernel matrix .the determinant encodes negative associations between variables , and thus dpps tend to assign higher probability mass to diverse subsets ( corresponding to diverse submatrices of the dpp kernel ) .formally , a ground set of items and a probability $ ] such that where is a -by- positive definite matrix , form a dpp . is called the _ dpp kernel _ ; here , indicates the principal submatrix of indexed by the elements of .the key ingredient that remains to be specified is the dpp kernel , which we now describe .there are numerous potential choices for the dpp kernel .we found that experimentally a well - tuned gaussian rbf kernel provided a good balance between simplicity and quality : for instance , it provides much better results that simple linear kernels ( obtained via the outer product of the activation vectors ) and is easier to use than more complex gaussian rbf kernels with additional parameters .a more thorough evaluation of kernel choice is future work .recall that layer has activations . 
using these , we first create an kernel with bandwidth parameter by setting to ensure strict positive definiteness of the kernel matrix , we add a small diagonal perturbation to ( ) .the choice of the bandwidth parameter could be done by cross - validation , but that would greatly increase the training cost .therefore , we use the fixed choice , which was experimentally seen to work well . finally , in order to limit rounding errors , we introduce a final scaling operation : suppose we wish to obtain a desired size , say , of sampled subsets ( in which case we are said to be using a -dpp ) . to that end , we can scale the kernel by a factor , so that its _ expected _ sample size becomes . for a dpp with kernel , the expected sample size is given by ( * ? ? ?34 ) : = \operatorname{tr}(l ( i+l)^{-1}).\ ] ] therefore , we scale the kernel to with such that where is the expected sample size for the kernel . finally , generating and then sampling from has cost . in our experiments , this sampling cost was negligible compared with the cost of training . for networks with very large hidden layers, one can avoiding the cost by using more scalable sampling techniques . simply excising the neurons that are not sampled by the dppdrastically alters the neuron inputs to the next layer .intuitively , since activations of neurons marked redundant are not arbitrary , throwing them away is wasteful .ideally we should preserve the total information of a given layer , which suggests that we should `` fuse '' the information from unselected neurons into the selected ones .we achieve this via a reweighting procedure as outlined below . without loss of generality , let neurons 1 through be the ones sampled by the dpp and their corresponding activation vectors .let be the weights connecting the -th neuron ( ) in the current layer to the -th neuron in the next layer ; let denote the updated weights after merging the contributions from the removed neurons .we seek to minimize the impact of removing neurons from layer . to that end, we minimize the difference in inputs to neurons in the subsequent layer before ( ) and after ( ) dpp pruning .that is , we wish to solve for all neurons in the next layer ( indexed by , ) : eq . [ eq : reweighting ] is minimized when is the projection of onto the linear space generated by .thus , to minimize eq .[ eq : reweighting ] , we obtain the coefficients that for minimize and then update the weights by setting using ordinary least squares to obtain , the reweighting procedure runs in .to quantify the performance of our algorithm , we present below the results of experiments on common datasets for neural network evaluation : ` mnist ` , ` mnist_rot ` and ` cifar-10 ` .all networks were trained up until a certain training error threshold , using softmax activation on the output layer and sigmoids on other layers ; see table [ tab : networks ] for more details . in all following plots ,error bars represent standard deviations ..overview of the sets of networks used in the experiments .we train each class of networks until the first iteration of backprop for which the training error reaches a predefined threshold . 
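before turning to the experiments , a minimal numpy sketch ( an illustration under stated assumptions , not the authors' implementation ) of the mechanical steps described above : building the gaussian rbf kernel from a layer s activation vectors with a small diagonal perturbation , rescaling it so that the expected sample size matches a target , and the ordinary - least - squares reweighting that fuses pruned neurons into the kept ones ; the exact rbf parameterisation and the matrix shapes are assumptions .

```python
import numpy as np

def rbf_dpp_kernel(acts, bandwidth, eps=1e-6):
    """acts: (n_neurons, n_train) activation matrix, one row per neuron.
    exp(-||a_i - a_j||^2 / bandwidth^2) is an assumed rbf parameterisation, and the
    eps * identity term is the small diagonal perturbation mentioned in the text."""
    sq = (acts ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * acts @ acts.T, 0.0)
    return np.exp(-d2 / bandwidth ** 2) + eps * np.eye(len(acts))

def scale_to_expected_size(L, k, lo=1e-8, hi=1e8, iters=200):
    """rescale L -> gamma * L so that the dpp's expected sample size
    E[|S|] = sum_i gamma*lam_i / (1 + gamma*lam_i) equals k (bisection on gamma)."""
    lam = np.linalg.eigvalsh(L)
    for _ in range(iters):
        gamma = np.sqrt(lo * hi)
        if np.sum(gamma * lam / (1.0 + gamma * lam)) < k:
            lo = gamma
        else:
            hi = gamma
    return np.sqrt(lo * hi) * L

def reweight_after_pruning(acts, W_next, kept):
    """least-squares 'fusing' step: choose new outgoing weights for the kept neurons so
    that the inputs to the next layer change as little as possible on the training data.
    W_next: (n_neurons, n_next) weights to the next layer; kept: indices of kept neurons."""
    target = acts.T @ W_next            # next-layer inputs before pruning
    design = acts[kept].T               # contributions the kept neurons can still provide
    W_new, *_ = np.linalg.lstsq(design, target, rcond=None)
    return W_new                        # shape (len(kept), n_next)
```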
[ cols="^,^,^,^",options="header " , ] to validate our claims on the benefits of using dpps and fusing neurons , we compare these steps separately and also simultaneously against random pruning , where a fixed number of neurons are chosen uniformly at random from a layer and then removed , with and without our fusing step .we present performance results on the test data ; of course , both dpp selection and reweighting are based solely on information drawn from the training data .figure [ fig : reconstruction ] visualizes neuron activations in the first hidden layer of a network trained on the ` mnist ` dataset .each column in the plotted heat maps represents the activation of a neuron on instances of digits 0 through 9 .figure [ fig : reconstruction - dpp ] shows the activations of the 50 neurons sampled using a -dpp ( = 50 ) defined over the first hidden layer , whereas figure [ fig : reconstruction - first ] shows the activations of the first 50 neurons of the same layer .figure [ fig : reconstruction - first ] contains multiple similar columns : for example , there are 3 entirely green columns , corresponding to three neurons that saturate to 1 on each of the 10 instances . in contrast , the dpp samples neurons with diverse activations , and figure [ fig : reconstruction - dpp ] shows no similar redundancy. figures [ fig : k - dpp ] through [ fig : reweighting - pruning - influence ] illustrate the impact of each step of divnetseparately .figure [ fig : k - dpp ] shows the impact of pruning on test error using dpp pruning and random pruning ( in which a fixed number of neurons are selected uniformly at random and removed from the network ) .dpp - pruned networks have consistently better training and test errors than networks pruned at random for the same final size .as expected , the more neurons are maintained , the less the error suffers from the pruning procedure ; however , the pruning is in both cases destructive , and is seen to significantly increase the error rate .this phenomenon can be mitigated by our reweighting procedure , as shown in figure [ fig : reweighting - influence ] . 
by fusing and reweighting neurons after pruning , the training and test errors are considerably reduced , even when 90% of the layer s neurons are removed . reweighting also reduces variability of the results : the standard deviation for the results of the reweighted networks is significantly smaller than for the non - reweighted networks , and reweighting may thus be seen as a way to regularize neural networks . finally , figure [ fig : reweighting - pruning - influence ] illustrates the performance of divnet ( dpp pruning and reweighting ) compared to random pruning with reweighting . although divnet s performance is ultimately better , the reweighting procedure also dramatically benefits the networks that were pruned randomly . [ figure panels omitted : test and training error for fig : k - dpp , fig : reweighting - influence and fig : reweighting - pruning - influence ] we also ran these experiments on networks for shrinking the second layer while maintaining the first layer intact . the results are similar , and may be found in appendix [ appendix : second - layer ] . notably , we found that the gap between divnet and random pruning s performances was much wider when pruning the last layer . we believe this is due to the connections to the output layer being learned much faster , thus letting a small , diverse subset of neurons ( hence well suited to dpp sampling ) in the last hidden layer take over the majority of the computational effort . much attention has been given to reducing the size of neural networks in order to reduce memory consumption . when using neural nets locally on devices with limited memory , it is crucial that their memory footprint be as small as possible . [ figure panels omitted : fig : activation - pruning for ` mnist ` and ` mnist_rot ` ] node importance - based pruning ( henceforth `` importance pruning '' ) is one of the most intuitive ways to cut down on network size . introduced to deep networks by , this method removes the neurons whose calculations impact the network the least .
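the comparison in the next paragraph is against an importance score built from each neuron s outgoing weights ( the ` onorm ' function ) ; as a concrete reference point , here is one plausible reading of that baseline , where the use of absolute values is an assumption :

```python
import numpy as np

def importance_prune(W_next, keep):
    """keep the neurons whose outgoing weights carry the most mass.
    W_next: (n_neurons, n_next) weights from the current layer to the next one."""
    scores = np.abs(W_next).sum(axis=1)        # one plausible 'onorm'-style importance score
    return np.sort(np.argsort(scores)[::-1][:keep])

# the kept indices can then be passed to the reweighting sketch above, which is how the
# reweighting step can be combined with pruning methods other than dpp sampling.
```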
among the three solutions to estimating a neuron s importance discussed in , the sum the output weights of each neuron ( the ` onorm ' function ) provided the best results : figure [ fig : activation - pruning ]compares the test data error of the networks after being pruned using importance pruning that uses onorm as a measure of relevance against divnet .since importance pruning deletes neurons that contribute the least to the next layer s computations , it performs well up to a certain point ; however , when pruning a significant amount of neurons , this pruning procedure even removes neurons performing essential calculations , hurting the network s performance significantly .however , since divnetfuses redundant neurons , instead of merely deleting them its resulting network delivers much better performance even with very large amounts of pruning .in order to illustrate numerically the impact of divneton network performance , table [ tab : compression - train ] shows network training and test errors ( between 0 and 1 ) under various compression rates obtained with divnet , without additional retraining ( that is , the pruned network is not retrained to re - optimize its weights ) .=1em in all experiments , sampling and reweighting ran several orders of magnitude faster than training ; reweighting required significantly more time than sampling .if divnetmust be further sped up , a fraction of the training set can be used instead of the entire set , at the possible cost of subsequent network performance .when using dpps with a gaussian rbf kernel , sampled neurons need not have linearly independent activation vectors : not only is the dpp sampling probabilistic , the kernel itself is not scale invariant . indeed , for two collinear but unequal activation vectors , the corresponding coefficient in the kernel will not be 1 ( or with the update ) .in our work , we selected a subset of neurons by sampling once from the dpp .alternatively , one could sample a fixed amount of times , using the subset with the highest likelihood ( i.e. , largest ) , or greedily approximate the mode of the dpp distribution . our reweighting procedure benefits competing pruning methods as well ( see figure [ fig : reweighting - pruning - influence ] ) .we also investigated dpp sampling for pruning concurrently with training iterations , hoping that this might allow us to detect superfluous neurons before convergence , and thus reduce training time .however , we observed that in this case dpp pruning , with or without reweighting , did not offer a significant advantage over random pruning .consistently over all datasets and networks , the expected sample size from the kernel was much smaller for the last hidden layer than for other layers .we hypothesize that this is caused by the connections to the output layer being learned faster than the others , allowing a small subset of neurons to take over the majority of the computational effort .divnetleverages similarities between the behaviors of neurons in a layer to detect redundant parameters and merge them , thereby enforcing neuronal diversity within each hidden layer . using divnet , large, redundant networks can be shrunk to much smaller structures without impacting their performance and without requiring further training .we believe that the performance profile of divnetwill remain similar even when scaling to larger scale datasets and networks , and hope to include results on bigger problems ( e.g. 
, imagenet ) in the future .many hyper - parameters can be tuned by a user as per need include : the number of remaining neurons per layer can be fixed manually ; the precision of the reweighting and the sampling procedure can be tuned by choosing how many training instances are used to generate the dpp kernel and the reweighting coefficients , creating a trade - off between accuracy , memory management , and computational time .although divnetrequires the user to select the size of the final network , we believe that a method where no parameter explicitly needs to be tuned is worth investigating .the fact that dpps can be augmented to also reflect different distributions over the sampled set sizes might be leveraged to remove the burden of choosing the layer s size from the user .importantly , divnetis agnostic to most parameters of the network , as it only requires knowledge of the activation vectors .consequently , divnetcan be easily used jointly with other pruning / memory management methods to reduce size .further , the reweighting procedure is agnostic to how the pruning is done , as shown in our experiments . furthermore , the principles behind divnetcan theoretically also be leveraged in non fully - connected settings .for example , the same diversifying approach may also be applicable to filters in cnns : if a layer of the cnn is connected to a simple , feed - forward layer such as the s4 layer in by viewing each filter s activation values as an vector and applying divneton the resulting activation matrix , one may be able to remove entire filters from the network , thus significantly reducing cnn s memory footprint .finally , we believe that investigating dpp pruning with different kernels , such as kernels invariant to the scaling of the activation vectors , or even kernels learned from data , may provide insight into which interactions between neurons of a layer contain the information necessary for obtaining good representations and accurate classification .this marks an interesting line of future investigation , both for training and representation learning .this work is partly supported by nsf grant : iis-1409802 ..49 .49 .49 .49 .49 .49on training error ( using the networks trained on ` mnist ` ) .the dotted lines show min and max errors . ] on the number of neurons that remain after pruning networks trained on ` mnist ` ( when pruning non - parametrically , using a dpp instead of a -dpp . ) ] | we introduce divnet , a flexible technique for learning networks with diverse neurons . divnetmodels neuronal diversity by placing a determinantal point process ( dpp ) over neurons in a given layer . it uses this dpp to select a subset of diverse neurons and subsequently fuses the redundant neurons into the selected ones . compared with previous approaches , divnetoffers a more principled , flexible technique for capturing neuronal diversity and thus implicitly enforcing regularization . this enables effective auto - tuning of network architecture and leads to smaller network sizes without hurting performance . moreover , through its focus on diversity and neuron fusing , divnetremains compatible with other procedures that seek to reduce memory footprints of networks . we present experimental results to corroborate our claims : for pruning neural networks , divnetis seen to be notably superior to competing approaches . |
we first address garcia _ et al . _s concerns about our online survey , which they suggest induced a positivity bias in respondents answers .garcia _ et al ._ claim that a set of function words in the liwc ( language inquiry and word count ) data set show a wide spectrum of average happiness with positive skew ( their fig 1a ) when , according to their interpretation , these words should exhibit a dirac delta function located at neutral ( =5 on a 1 to 9 scale ) .we expose and address two fundamental errors .first , function words in the liwc data set are simply not emotionally neutral .the liwc data set annotates 4487 words and stems on a wide range of dimensions .we find a total of 421 words and 48 stems are coded as function words with 450 matches in our data set when using stems . of these ,only 7 are indicated as emotional ( 5 positive , 2 negative ) which appears to support garcia _ et al ._ s interpretation .however , a straightforward reading of the liwc list of function words reveals that these words readily bear emotional weight as exemplified by `` greatest '' and `` worst '' .we present some of the most extremely and most neutrally rated liwc function words in tab .[ tab : mhl - reply.liwc - function ] .more generally , `` not looking at the words '' and `` not showing the words '' are pervasive issues with word- and phrase - based summary statistics for texts .we should be able to see how specific words contribute to summary statistics for texts to provide ( 1 ) an assurance the measure is performing as intended , and ( 2 ) insight into the text itself .for example , all sentiment scoring algorithms based on words and phrases must be able to plainly show why one text is more positive through changes in word frequency , such as through the word shifts we have developed for both print and as interactive , online visualizations .elsewhere , in studying the google books corpus , we have produced analogous word shifts for the jensen - shannon divergence .we exhort other researchers to produce similar word shifts ( and not just word clouds ) , and to question work with no such counterpart .second , as we discuss in detail in sec .[ sec : mhl - liwc.freqdepend ] below , no statement about biases can be made about sets of words chosen without frequency of usage incorporated .any given set of words may have a positive , neutral , or negative bias , but we must know how they are chosen before being able to generalize ( as we have done thoroughly in ) . because we have no guarantee that the expert - generated liwc function words are exhaustive and because we are merging words of highly variable usage frequency , a finding of an average positive bias for liwc function words is meaningless , regardless of their transparent capacity for being non - neutral . .three subsets of 450 liwc function words with high , neutral , and low average happiness scores from our labmt study ( stems provide more matches than those found by garcia et al . ) .each word s score is the average rating for 50 participants ( scale is 1 to 9 with 1 = most negative , 5 = neutral , and 9 = most positive ) .function words may carry emotional weight and can not be presumed to be neutral . [ cols=">,>",options="header " , ] emotional words in liwc provide another case in point . 
around 20% of the liwc data set ( 907 words and stems )are denoted as having positive affect ( 160 words and 247 stems ) or negative affect ( 151 words and 349 stems ) .while stems complicate word counts , the liwc data set clearly does not show evidence of a positivity bias .because the liwc data set is expert - curated and meant to be general , it does not fit any natural corpora with respect to usage frequency ( i.e. , liwc words constitute an unsystematic sampling ) .word lists meant to accurately reflect statistical properties of language must be built directly from the most frequently used words of well defined corpora a point we will return to several times in this reply .an earlier expert - curated word list , the smaller anew data set , similarly fails in these respects , showing a fairly flat distribution across average happiness .liwc , along with all word data sets , should not be considered an unimpeachable `` gold standard '' ; language is far too complex to make such an assured statement .all word data sets , including our own , will have limitations .we next contend with a comparison made by garcia _et al . _ between our work on english with a similar sized survey by warriner and kuperman ( wk ) .wk generated a merged list of 13,915 english words , the bulk of which ( 11,826 ) are a list of lemmas taken from movie subtitles .immediately , we have a mismatch : our word list incorporated the 5000 most frequently used words ( or tokens ) in each of four disparate corpora ( new york times , google books , music lyrics , and twitter ) whereas wk s list is mostly lemmas ( e.g. , `` sing '' but not `` sung '' or `` sang '' ) taken from one coherent corpus .further , each word was scored by 50 participants in our study , compared with 1420 for the wk study . in their fig . 1b , garcia _ et al ._ show histograms for the two word lists , which seem to indicate more negative words in the wk list and a higher median word happiness for our word list .but such a comparison is unsound : the words behind each histogram are not the same and word frequency is not being controlled for .the two histograms can not be sensibly compared , and we can discard garcia _s finding that the median level of average word happiness for our full data set is 0.28 above the median level for the wk data set . nevertheless , garcia _ et al . _do appropriately compare the shared subset of words found in both data sets , finding a much smaller difference between median values of of 0.07 .they then suggest that our use of cartoon faces to indicate the 1 to 9 scale of happiness responses induces a positive bias in respondents choices , referencing a study that found a non - smiling face to be slightly negative .their claim lacks foundation for several reasons .first , wk employed a reverse 9 point scale , with 1 = happy and 9 = unhappy , flipping the scores after completing the surveys ( also used in ) .we have no objection to wk s approach but evidently this further complicates any comparison between the two studies .indeed , one might reasonably hypothesize that flipping the direction of the ratings could be the sole cause of the minor discrepancy between the words scored by both studies .second , we gave all participants clear written instructions that 5 was neutral . 
in wanting to generate results that could be compared with existing work, we followed the design of bradley and lang in their anew study , who used both cartoon figures in their self - assessment manikins and written ( spoken for anew ) instructions ; we departed only in orienting positive to the right .( as have many others , garcia _ et al ._ have used the anew study in their research . ) wk take pains to compare their scores with the anew study ( which they use in part as a control ) and other studies , finding their results are `` roughly equivalent '' ( p. 6 ) . and as we noted in , basic function words which are expected to be neutral such as `` the '' and `` of '' were appropriately scored as such , indicating that the survey mechanism we used was not adding a simple positive shift .third , given the nature of language and surveys and changing demographics online , an exact match for the medians would be a remarkable achievement . the agreement between the labmt ( our english word set ) and wk is still a strong one , and we show scatter plots for the matching word happiness scores in fig .[ fig : mhl - reply.wk - comp ] for labmt , anew , and wk .visually , we see the three studies are sympathetic with each other , particularly when we acknowledge the typical standard deviation for the scores of individual words ( on the order of 1 to 2 ) .we used reduced ( or standard ) major axis regression to obtain the fits shown in fig .[ fig : mhl - reply.wk - comp ] , .we see that anew s scores grow slightly faster than that of both labmt and wk ( = 1.08 and 1.07 ) and wk similarly relates to labmt ( ) .fourth , and rather finally , according to the argument of garcia _ et al ._ regarding faces , the median for anew should be higher than that of wk ( noting again that they used the same happy to sad directionality ) , yet we see the _ opposite _ ( 5.29 versus 5.44 ) . moreover, comparing medians alone is insufficient our regression analysis shows that , for the words they have in common , wk appears more emotionally biased than labmt with = 1.04 .we note that a much richer comparison could be carried out at the level of individual ratings , but this is far too detailed for the present response .we turn now to garcia _ et al ._ s central claim : that we claimed to find that a positivity bias is _ independent _ of word frequency across 10 languages .in fact , we instead variously stated that a positivity bias is `` strongly '' and `` largely '' independent of frequency , and we explored the minor departures from pure independence in detail for all 24 corpora across 10 languages ( see and the paper s online appendices ) .garcia _ et al ._ write that our paper specifically conflicts with two previous works , their own and that of warriner and kuperman .we are able to dismiss due to it being founded on a misapplication of an information theoretic formula by piantatosi et al . , and which we demonstrate elsewhere . setting aside this misrepresentation , garcia _ et al ._ s issue with our work becomes to what degree frequency independence is followed , and they provide an alternative analysis of how positivity behaves with usage frequency . whereas we performed the regression where is rank, they claim is more appropriate . once again, we stand by our own principled analysis for the following reasons .* 1 . 
mismatch of scored word list and word list with frequency : * in attempting to say anything about a given quality of words as it relates to usage frequency within a specific corpora , a complete census of words by frequency must be on hand ._ have taken our merged word lists for each language and applied them to data sets for which they do not necessarily fit .problematically , their word lists do not contain ranks , and consequently there are words missing in uncontrolled ways from the data they perform regression on . for the example of english , our 10,222 words will ( likely ) match the most common words in any sufficiently large english corpus .but the matching becomes more peculiar the rarer the word , and the inclusion of twitter in our word list means garcia _ et al ._ have found `` lolz '' , `` bieber '' and `` tweeps '' in google books . in fig . [fig : mhl - reply.jellyfish]a , we plot average happiness as a function of frequency of usage for the word list they created from google books .the scatter plot is clearly unsuitable for linear regression .we show an estimate of cumulative coverage at the bottom ( see caption ) , which crashes soon after reaching 5000 words . *rank is an appropriate variable to regress happiness ( or any word quality ) against : * garcia _ et al . _state that regression against frequency is a better choice because information is lost in moving to rank .however , the general adherence of natural language to zipf s law , , provides an immediate counterargument , even acknowledging the possibility of a scaling break .because word usage frequency is so variable , great care must be taken with any analysis . as we show for the case of english google books in fig .[ fig : mhl - reply.jellyfish]a , regression on will be gravely compromised by the increasing preponderance of words at lower frequencies ( a common issue with measuring power - law slopes ) , and , based on even the words for which coverage is reasonable , it would evidently be in poor judgment to extrapolate from any linear fit across frequencies .by contrast , fig .[ fig : mhl - reply.jellyfish]b shows how usage rank is perfectly suited for regression , and is the basis for the `` jellyfish '' plots we provided in fig . 3 of and in the paper s online appendices .our jellyfish plots make the general conformity to a rough ( we do not claim `` physical - law '' strict ) scale independence abundantly clear . by using rank, we are able to perform a much finer analysis than garcia et al .propose , and we show in all corpora that the deciles for a sliding window of 375 ranks changes at most rather slowly . finally , in fig .[ fig : mhl - reply.jellyfish]c , we present how behaves as a function of , illustrating both the error of choosing and that our results will be essentially unchanged if we regress against . 
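to make the leverage argument concrete , the following self - contained sketch uses synthetic data only ( not the labmt measurements , and the regression specifications of the original studies are not reproduced here ) and contrasts a least - squares fit of average word happiness against usage rank with a fit against log10 frequency under a zipf - like frequency distribution :

```python
import numpy as np

rng = np.random.default_rng(0)
n_words = 10_000
rank = np.arange(1, n_words + 1).astype(float)
freq = 1.0 / rank                               # zipf-like usage frequencies
h_avg = 5.4 + rng.normal(0.0, 1.0, n_words)     # happiness independent of usage by construction

def ols_slope(x, y):
    xc = x - x.mean()
    return float(xc @ (y - y.mean()) / (xc @ xc))

print("slope vs rank           :", ols_slope(rank, h_avg))
print("slope vs log10 frequency:", ols_slope(np.log10(freq), h_avg))
# most of the 10,000 words pile up at the low-frequency end, so a frequency-based regressor
# concentrates its leverage on the rare-word tail, while the rank regressor spreads leverage
# evenly across the usage scale; the rare tail is also exactly where a merged word list
# stops matching a given corpus.
```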
in closing , we emphasize that minor deviations from frequency independence remain a secondary aspect of our observation that the pollyanna hypothesis holds for a diverse set of languages , and are wholly irrelevant for the instrumental aspect of our work in creating text - based hedonometric tools . | we demonstrate that the concerns expressed by garcia _ et al . _ are misplaced , due to ( 1 ) a misreading of our findings in ; ( 2 ) a widespread failure to examine and present words in support of asserted summary quantities based on word usage frequencies ; and ( 3 ) a range of misconceptions about word usage frequency , word rank , and expert - constructed word lists . in particular , we show that the english component of our study compares well statistically with two related surveys , that no survey design influence is apparent , and that estimates of measurement error do not explain the positivity biases reported in our work and that of others . we further demonstrate that , for the frequency dependence of positivity ( of which we explored the nuances in great detail in ) , garcia _ et al . _ did not perform a reanalysis of our data ; they instead carried out an analysis of a different , statistically improper data set and introduced a nonlinearity before performing linear regression . note : the present manuscript is an elaboration of our short reply letter .
many components to build a high quality natural language processing system rely on the synonym extraction .examples include query expansion , text summarization , question answering , and paraphrase detection . although the value of synonym extraction is undisputed , manual construction of such resources is always expensive , leading to a low knowledgebase ( kb ) coverage . in the medical domain ,this kb coverage issue is more serious , since the language use variability is exceptionally high .in addition , the natural language content in the medical domain is also growing at an extremely high speed , making people hard to understand it , and update it in the knowledgebase in a timely manner . to construct a large scale medical synonym extraction system ,the main challenge to address is how to build a system that can automatically combine the existing manually extracted medical knowledge with the huge amount of the knowledge buried in the unstructured text . in this paper , we construct a medical corpus containing 130 m sentences ( 20 gigabytes pure text ) .we also construct a semi - supervised framework to generate a vector representation for each medical term in this corpus .our framework extends the word2vec model by integrating the existing medical knowledge in the model training process . to model the concept of synonym ,we build a `` * * concept space * * '' that contains both the semi - supervised term embedding features and the expanded features that capture the similarity of two terms on both the word embedding space and the surface form .we then apply a linear classifier directly to this space for synonym extraction .since both the manually extracted medical knowledge and the knowledge buried under the unstructured text have been encoded in the concept space , a cheap classifier can produce satisfying extraction results , making it possible to efficiently process a huge amount of the term pairs .our system is designed in such a way that both the existing medical knowledge and the context in the unstructured text are used in the training process .the system can be directly applied to the input term pairs without considering the context .the overall contributions of this paper on medical synonym extraction are two - fold : * from the perspective of applications , we identify a number of unified medical language system ( umls ) relations that can be mapped to the synonym relation ( table [ tbl : synonymrelations ] ) , and present an automatic approach to collect a large amount of the training and test data for this application .we also apply our model to a set of 11b medical term pairs , resulting in a new medical synonym knowledgebase with more than 3 m synonym candidates unseen in the previous medical resources . 
* from the perspective of methodologies , we present a semi - supervised term embedding approach that can train the vector space model using both the existing medical domain knowledge and the text data in a large corpus .we also expand the term embedding features to form a concept space , and use it to facilitate synonym extraction .the experimental results show that our synonym extraction models are fast and outperform the state - of - the - art approaches on medical synonym extraction by a large margin .the resulting synonym kb can also be used as a complement to the existing knowledgebases in information extraction tasks .a wide range of techniques has been applied to synonym detection , including the use of lexicosyntactic patterns , clustering , graph - based models and distributional semantics .there are also efforts to improve the detection performance using multiple sources or ensemble methods .the vector space models are directly related to synonym extraction .some approaches use the low rank approximation idea to decompose large matrices that capture the statistical information of the corpus .the most representative method under this category is latent semantic analysis ( lsa ) .some new models also follow this approach like hellinger pca and glove .neural network based representation learning has attracted a lot of attentions recently .one of the earliest work was done in .this idea was then applied to language modeling , which motivated a number of research projects in machine learning to construct the vector representations for natural language processing tasks . following the same neural network language modeling idea ,word2vec significantly simplifies the previous models , and becomes one of the most efficient approach to learn word embeddings . in word2vec , there are two ways to generate the `` input - desired output '' pairs from the context : `` skipgram '' ( predicts the surrounding words given the current word ) and `` cbow '' ( predicts the current word based on the surrounding words ) , and two approaches to simplify the training : `` negative sampling '' ( the vocabulary is represented in one hot representation , and the algorithm only takes a number of randomly sampled `` negative '' examples into consideration at each training step ) and `` hierarchical softmax '' ( the vocabulary is represented as a huffman binary tree ) .so one can train a word2vec model from the input under 4 different settings , like `` skipgram''+negative sampling " .word2vec is the basis of our semi - supervised word embedding model , and we will discuss it with more details in section [ sec : conceptembedding ] .our medical corpus has incorporated a set of wikipedia articles and medline abstracts ( 2013 version ) .we also complemented these sources with around 20 medical journals and books like _ merck manual of diagnosis and therapy_. 
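the corpus statistics are given next ; as a point of reference for the embedding step discussed above , the following sketch trains the plain `` skipgram + negative sampling '' baseline on a tokenised corpus ( using gensim ; parameter names follow recent releases and the hyper - parameter values are illustrative assumptions ) . the semi - supervised extension that injects umls knowledge as extra label information is described in the paper but not shown here .

```python
from gensim.models import Word2Vec

def train_term_embeddings(tokenized_sentences, dim=200):
    """plain skip-gram / negative-sampling baseline over a tokenised medical corpus."""
    model = Word2Vec(
        sentences=tokenized_sentences,  # iterable of token lists
        vector_size=dim,
        window=5,
        sg=1,          # skip-gram
        negative=5,    # negative sampling
        min_count=5,
        workers=4,
    )
    return model.wv    # keyed vectors, e.g. model.wv["aspirin"]
```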
in total , the corpus contains about 130 m sentences ( about 20 g pure text ) , and about 15 m distinct terms in the vocabulary set .a significant amount of the medical knowledge has already been stored in the unified medical language system ( umls ) , which includes medical concepts , definitions , relations , etc .the 2012 version of the umls contains more than 2.7 million concepts from over 160 source vocabularies .each concept is associated with a unique keyword called cui ( concept unique identifier ) , and each cui is associated with a term called preferred name .the umls consists of a set of 133 subject categories , or semantic types , that provide a consistent categorization of all cuis .the semantic types can be further grouped into 15 semantic groups .these semantic groups provide a partition of the umls metathesaurus for 99.5% of the concepts .domain specific parsers are required to accurately process the medical text .the most well - known parsers in this area include metamap ( aronson , 2001 ) and medicalesg , an adaptation of the english slot grammar parser to the medical domain .these tools can detect medical entity mentions in a given sentence , and automatically associate each term with a number of cuis .not all the cuis are actively used in the medical text .for example , only 150k cuis have been identified by medicalesg in our corpus , even though there are in total 2.7 m cuis in umls .synonymy is a semantic relation between two terms with very similar meaning .however , it is extremely rare that two terms have the exact same meaning . in this paper ,our focus is to identify the near - synonyms , i.e. two terms are interchangeable in some contexts .the umls 2012 release contains more than 600 relations and 50 m relation instances under 15 categories .each category covers a number of relations , and each relation has a certain number of cui pairs that are known to bear that relation . from umls relations , we manually choose a subset of them that are directly related to synonyms , and summarize them in table [ tbl : synonymrelations ] . in table[ tbl : synonymexamples ] , we list several synonym examples provided by these relations ..umls relations corresponding to the synonym relation , where `` ro '' stands for `` has relationship other than synonymous , narrower , or broader '' , `` rq '' stands for `` related and possibly synonymous '' , and `` sy '' stands for `` source asserted synonymy '' .[ cols="^,^",options="header " , ] our method was very scalable .it took on average several hours to generate the word embedding file from our medical corpus with 20 g text using g cpus and roughly 30 minutes to finish the training process using one cpu . to measure the scalability at the apply time, we constructed a new medical synonym knowledgebase with our best synonym extractor .this was done by applying the concept space model trained under the neg+skip setting to a set of 11b pairs of terms .all these terms are associated with cuis , and occur at least twice in our medical corpus .this kb construction process finished in less than 10 hours using one cpu , resulting in more than 3 m medical synonym term pairs . to evaluate the recall of this knowledgebase, we checked each term pair in the held out synonym dataset against this kb , and found that more than of them were covered by this new kb .precision evaluation of this kb requires a lot of manual annotation effort , and will be included in our future work . 
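a minimal sketch of the apply - time pipeline : pair - level features built from the two term vectors , fed to a cheap linear classifier , which is what makes scoring billions of candidate pairs tractable . the exact feature set below ( the two embeddings , their element - wise sum and difference , cosine similarity and simple surface - form matching ) is an assumption based on the features named in the paper , and logistic regression is one choice of linear classifier , not necessarily the one used .

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def surface_match_features(term_a, term_b):
    ta, tb = set(term_a.lower().split()), set(term_b.lower().split())
    jaccard = len(ta & tb) / max(len(ta | tb), 1)
    return [jaccard, float(term_a.lower() == term_b.lower())]

def pair_features(vec_a, vec_b, term_a, term_b):
    cos = float(vec_a @ vec_b / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b) + 1e-12))
    return np.concatenate([vec_a, vec_b, vec_a + vec_b, vec_a - vec_b,
                           [cos], surface_match_features(term_a, term_b)])

def train_synonym_classifier(features, labels):
    # a cheap linear model keeps scoring billions of candidate term pairs tractable
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, labels)
    return clf
```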
in section [ sec : featureexpansion ] , we expand the raw features with the matching features and several other feature expansions to model the term relationships . in this section , we study the contribution of each individual feature to the final results .we added all those expanded features to the raw features one by one and re - ran the experiments for the concept space model .the results and the feature contributions are summarized in table [ tbl : featurecontributions ] .the results show that adding matching features , `` sum '' and `` difference '' features can significantly improve the scores .we can also see that adding the last two feature sets does not seem to contribute a lot to the average score .however , they do contribute significantly to our best score by about .in this paper , we present an approach to construct a medical concept space from manually extracted medical knowledge and a large corpus with 20 g unstructured text .our approach extends the word2vec model by making use of the medical knowledge as extra label information during the training process .this new approach fits well for the medical domain , where the language use variability is exceptionally high and the existing knowledge is also abundant .experiment results show that the proposed model outperforms the baseline approaches by a large margin on a dataset with more than one million term pairs .future work includes doing a precision analysis of the resulting synonym knowledgebase , and exploring how deep learning models can be combined with our concept space model for better synonym extraction .r. collobert and j. weston . a unified architecture for natural language processing : deep neural networks with multitask learning . in _ proceedings of the 25th international conference on machine learning _ , 2008 .x. glorot , a. bordes , and y. bengio .domain adaptation for large - scale sentiment classification : a deep learning approach . in _ proceedings of the 28th international conference on machine learning _ , 2011 .a. henriksson , m. conway , m. duneld , and w. w. chapman .identifying synonymy between snomed clinical terms of varying length using distributional analysis of electronic health records . in _ proceedings of amia annual symposium ._ , 2013 .a. henriksson , m. skeppstedt , m. kvist , m. conway , and m. duneld .corpus - driven terminology development : populating swedish snomed ct with synonyms extracted from electronic health records . in _ proceedings of bionlp _ , 2013 .p. huang , x. he , j. gao , l. deng , a. acero , and l. heck .learning deep structured semantic models for web search using clickthrough data . in _ proceedings of the acm international conference on information and knowledge management ( cikm ) _ , 2013 .t. mikolov , k. chen , g. corrado , and j. dean .efficient estimation of word representations in vector space . in _ proceedings of the workshop at international conference on learning representations _ , 2013 .y. peirsman and d. geeraerts .predicting strong associations on the basis of corpus data . in _ proceedings of the 12th conference of the european chapter of the association for computational linguistics _ , 2009 .r. socher , c. c. lin , a. ng , and c. d. manning .parsing natural scenes and natural language with recursive neural networks . in _ proceedings of the 28th international conference on machine learning ( icml ) _ , 2011 . | in this paper , we present a novel approach for medical synonym extraction . 
we aim to integrate the term embedding with the medical domain knowledge for healthcare applications . one advantage of our method is that it is very scalable . experiments on a dataset with more than 1 m term pairs show that the proposed approach outperforms the baseline approaches by a large margin . |
our goal is to identify functional properties of nodes based on the network structure .connection between network structure and its functionality is important , many attempts were made to find functional signatures in the network structure , such as , for a review see . as tagging network nodes and edges with functional attributesdepends on external information and is not a completely unique procedure , the original problem needs reformulation which is tractable with graph - theoretical tools .the function real - world networks perform constrains their structure . yet , one often has more detailed information about the network structure than about the functions it may perform .we focus on systems , either natural or artificial , which process signals and are comprised of many interconnected elements . from a signal processing point of view, global information about network structure is encoded in the shortest paths , i.e. if signal processing is assumed to be fast , most of network communication is propagated along the shortest paths . therefore global and local properties of shortest paths are relevant for understanding organisation of the signal processing in the system represented with a suitable network . during signal transmission ,signals are being spread and condensed in the nodes , as well as along network edges .we have previously shown that in case of cerebral cortex , using a simplified version of the convergence degree ( cd ) , it was possible to connect structural and functional features of the network .in complex networks , signal processing characteristics are also determined by the level of network circularity ( which in biology and especially neural science is known as reverberation , for obvious reasons ) .possibility to go around _ chordless _ circles necessitates simultaneous quantification of signal condensing , spreading along network edges and edge circularity .here we generalise edge convergence and divergence , and take into account the existence of circles in the network , treating their effects separately from the effect of branching .for that reason we refine the definition of edge convergence and introduce the overlapping set of an edge , both notions are to be defined in a precise manner later in the text .our approach may be viewed as generalisation of in- , out and strongly connected components of a graph to the level of network edges .notions introduced have an extra gain , they help clarifying the otherwise murky notion of network causality. the functional role of a node in a network is defined by the amount of information it injects to or absorbs from the system , or passes on to other nodes . in case of real - world networkswe test our findings using external validation , given the existing body of knowledge about each specific network .we illustrate the advantage of edge - based approach with the case of strongly connected graphs , where edge - based measures offer deeper understanding of signal processing and transmitting roles of nodes than an analysis which concentrates solely on nodes and their properties .measures we work with are applicable to networks of all sizes , there is no assumption about `` sufficient '' network size .more precisely , networks we work with can be small , and applicability to large networks is limited only by the computational capacity needed to find all shortest paths in the network .the semantics of our approach is tailored to explain signal flow , though our methodology is applicable to directed networks in general . 
in cases of information processing , regulatory , transportation or any other networkthe appropriate semantics of the approach has to be given . in section [sec : notion ] we introduce the notions of convergence degree and overlapping set , in section [ sec : nrr ] we define the flow representation , in section [ sec : res ] we analyse four real - world networks and discuss signal transmission , processing and control properties of the small - world networks .we compute cd - s and ( nontrivial ) overlap probability distributions for three model networks . in the last sectionwe discuss our results and draw conclusions .convergence degree was introduced in for the analysis of cortical networks and was applied to some random networks .we modify the measure introduced therein , in order to capture the structure of shortest paths in a more detailed way .we will discuss both global and local properties of the shortest paths , relevant notions will be distinguished with self explanatory indices and respectively .let be the set of all the shortest paths in the graph .for any edge we can choose a subset comprised of all the shortest paths which contain the chosen edge . uniquely determine two further sets : the set of all the nodes from which the shortest paths in originate , and the set of all the nodes in which the shortest paths in terminate . by definitionwe assume that node is in and node is in .we define a third set , , the intersection of - and sets and call it the overlapping set .we note that ( , respectively ) is the edge - level equivalent of the in - component ( out - component , respectively strongly connected component ) of the directed network , introduced in and later refined by .notions relevant for understanding the convergence degree and overlapping set are shown in figure [ fig : cd1 ] . .global sets are displayed as shaded regions , local sets are comprised of first in - neighbours of node and first out - neighbours of node inside the shaded regions , with the exception of node , which is contained in the local and global overlap of and .note the omition of points and from the global input and output sets . ] from the perspective of the chosen edge , the whole network splits to two , possibly overlapping sets , both of which have rich structure .shortest paths induce natural stratification on the set , nodes at distance 1 , 2 and so on from the node are uniquely determined .points at distance from the tail form the -th stratum of .each point in the -th stratum is a tail of an edge with a head in the -th stratum .edges connecting -th stratum with any stratum are prohibited .edges from the strata to the strata are prohibited , since those would alter the shortest paths between the sets .the set is stratified in a similar fashion .points in the intersection of with inherit both stratifications .stratification of and sets is illustrated in figure [ fig : strata ] ., output strata are labelled with indices and overlap strata have double indices .examples of prohibited edges are shown with dashed lines , necessary edges are shown with full line .strata and are connected with the edge itself and they do not overlap . ] local versions of these sets are defined as follows : is the set of all the first predecessors of the node , while is the set of first successors of the node . 
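a small computational sketch of the global sets defined above ( python / networkx , suitable only for modest directed graphs since it enumerates all shortest paths ) : for every edge it collects the origins and termini of the shortest paths passing through it , and their intersection gives the overlapping set .

```python
import networkx as nx
from collections import defaultdict

def edge_path_sets(G):
    """for every edge e = (u, v), collect In(e) (origins of the shortest paths through e),
    Out(e) (their termini) and the overlapping set In(e) & Out(e)."""
    in_set, out_set = defaultdict(set), defaultdict(set)
    for s in G:
        for t in G:
            if s == t or not nx.has_path(G, s, t):
                continue
            for path in nx.all_shortest_paths(G, s, t):
                for u, v in zip(path, path[1:]):
                    in_set[(u, v)].add(s)
                    out_set[(u, v)].add(t)
    overlap = {e: in_set[e] & out_set[e] for e in in_set}
    return in_set, out_set, overlap

# in the oriented circle 1 -> 2 -> 3 -> 1 every edge lies on a chordless circle, so each
# edge has a nonempty overlapping set (the remaining node is both upstream and downstream):
G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
ins, outs, ovl = edge_path_sets(G)
print(ins[(1, 2)], outs[(1, 2)], ovl[(1, 2)])   # {1, 3} {2, 3} {3}
```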
when indices or are omitted , either is used .if the graph has circles , and sets may overlap , thus it makes sense to introduce strict and sets , which are defined as follows : , , and are generalisations of the notion of first predecessors and successors of a node , and accordingly , cardinalities of these sets are generalisations of the in- and out - degrees of nodes .we note that global and local versions of the , and overlapping sets are two extremes of two set families defined as follows .let be the set of points from which paths at distance less or equal to from the point begin , analogously let be the set of points at which paths at distance less or equal to from the point terminate .the two sets are balls centred at and with radii and . instead of balls , one may consider the surfaces of the balls , in which case points at distances and are considered .the global -set is thus , whilst the local -set corresponds to points at surfaces with radii 1 , .the notion of strict in- , out- and overlapping sets is important for understanding causality relations in network systems .global signal flow through an edge induces separation of network nodes into four classes : 1 . , in which are the causes of the flow . , in which the effects of flow are manifested .3 . the overlap , whose elements represent neither cause nor effect . relation between elements in the overlapis often described as circular- or network causality .points which are not members of form the remaining , fourth category which has no causal relationship with the signal flowing through the given edge .we stress that for a generic graph no such partition is possible based on node properties .e.g. if we tried to define analogous notions based on node properties , all analogue node classes would coincide for the case of strongly connected graphs .the and sets would coincide , and all distinction between different node classes would have been lost . for each edgewe define three additional measures , namely the relative size of the strict in - set , the relative size of the strict out - set , and the relative size of the overlap between in - set and out - set , as follows : where denotes the cardinality of the set .note that equation [ eq : uc3 ] is the jaccard coefficient of the and sets .it is possible to generate networks which have edges with large global overlaps , one simply adds randomly a small number of edges to an initial oriented circle .this example helps understanding the meaning of ( possibly large ) global overlaps : they are characteristic of edges in chordless circles . more precisely , for and edge to have a nonempty overlapping set it is necessary , but not sufficient , to be on a chordless circle of length at least three .we illustrate this by an example . in the graph shown in figure[ fig : circ ] , the only edge with nonempty overlapping set is , with . 
is on the chordless circle ( 3,1,2,3 ) , whilst the edges and on the same chordless circle have zero overlapping sets .local overlaps are related to the clustering coefficient of the graph , since they define the probability that the vertices in the neighbourhood of a given vertex are connected to each other .overlap represents global mutual relationship and a measure of dependence ( in terms of chordless circles ) between - and sets .this dependence is inherent in the network structure .large jaccard coefficient of the and sets is not detectable with edge betweenness , as it may obtain large values for edges with non - overlapping sets .the edge convergence degree of the edge is defined as follows : note that the definition of cd uses the normalised sizes of the strict - and -sets to make the measure independent of the network size .furthermore , this formula is related to the complement of the jaccard coefficient ( denoted as ) of the - and -sets , or equivalently to their normalised set - theoretic difference , thus connecting the cd to information theoretical quantities .the following inequality is obvious : latexmath:[\[\begin{aligned } directionality of the edge gives meaning to cardinality substraction , as and sets can be distinguished .if the cd value is close to one , the signal flow through the edge is originating from many sources and terminating in very few sinks , while cd values close to -1 indicate flow formed of few sources and many sinks .this property justifies rough division of edges according to their cd properties to convergent ( condensing ) , balanced and divergent ( spreading ) .an oriented circle with at least three nodes has the maximum possible global overlap for each edge , while the absolute value of the global is the smallest possible , in accordance with the inequality ( [ eq : cd_le_ovl ] ) .we note that cd in an oriented chain monotonously decreases along the chain , whilst the overlap is zero along the chain .this simple example again illustrates how cd and overlap are sensitive to the network topology .applicability of the convergence degree is limited by the following facts .definition of convergence degree makes sense only if not all connections are reciprocal , stated otherwise if there is a definite directionality in the network. if every connection is reciprocal , the network may be considered unoriented .for fully reciprocal networks , the and sets would coincide .second , convergence degree makes sense for a network which is at least weakly connected .since the number of edges exceeds the number of nodes in a typical connected network , and in many cases we are interested in the role of individual nodes , it is desirable to condense the our primarily edge - based measures to a node - centric view .the condensed view should reveal several features of interest : local vs global signal processing properties of network nodes , directionality of the information , i.e. whether we are interested in the properties of the incoming or outgoing edges , the third aspect is the statistics , i.e. total or average property of the edges , and finally we may choose edges according to the sign of their cd . 
condensingthe information about overlapping sets follows the same lines , with the exception of the sign .we proceed by an example and introduce the following six quantities defined for each node .let denote the sum of all incoming negative local convergence degrees divided by the node s in - degree , and let denote the sum of all incoming positive convergence degrees divided by the node s in - degree , i.e. is the average negative inwards pointing local cd of the node . in a similar way we can also define and for outgoing convergence degrees . for claritywe give formulae for and . and denote in - degree and out - degree of the node , is the unit step function continuous from the left . denotes the first in - neighbours of the node , the analogous notation is selfexplanatory . we also define , the sum of all incoming local overlaps and , the sum of all outgoing local overlaps each being normalised with the corresponding node degree . factors before the sums serve normalisation purposes , each should have a value within the $ ] interval .these quantities are average local cd - s and relative overlaps corresponding to each node .one is also interested in the total of the in- and out pointing edges of a given cd sign , and define the corresponding version of the node - reduced convergence degree . for normalisation purposes the sums in s are divided by , the maximal possible number of the outgoing ( incoming ) connections a node can have , where denotes the number nodes in the network .thus , using the quantities and one can construct four different cd flow representations of a network , namely , , and .the incoming node - reduced cd values are understood as coordinates of the axis , while the outgoing cd values are interpreted as the coordinates of the axis . in order to display overlaps together with the convergence degrees in a single figure ,overlaps are treated as the coordinates of the axis , the incoming overlaps being positive and the outgoing understood negative .each point is represented in each octant of the flow representation .the points in the plane are not independent , given the values in the diagonal quadrants , the other two quadrants can be reconstructed with reflections . representation of graph nodes in the plane is related to the cd flow through the nodes in the following way .the cd flow through the node is defined as follows : the first sum is equal to , where is the appropriate weight , whilst the second sum equals .the flow can be rewritten as if the first difference on the right hand side of equation ( [ eq : flow_2 ] ) is large ( small ) , i.e. the representative point is close to the diagonal and is far from the origin in the top left ( bottom right ) quadrant , and the second difference is small ( large ) , i.e. the representative point is close to the diagonal and is far from the origin in the bottom right ( top left ) quadrant , the node is _ source _ ( _ sink _ ) of the cd flow .analogously , the cd flow can be written as : where the two differences determine the router characteristics of the node . in this sense flow representationis a means to independently study different components of the cd flow .different circles may have common nodes , thus the overlap flow defines whether different circles passing through the given node have more common parts after of before the given node , i.e. 
whether a node is a source or sink of circularity .the precise meaning of large and small depends on the criteria used to classify the representative points of the node - reduced representation .nodes can be classified based on the cd ( relative overlap ) flow ; besides the distinction based on the sign , the scale is continuous , and there is no a - priori grouping of nodes .further classification can be made based on the structure of the cd ( relative overlap ) flow , i.e. based on properties of the different terms defining the cd ( relative overlap ) flow .components of the flow representation for two toy graphs are shown in figure [ fig : lepke ] .we observe that some nodes may be global , but not local , cd flow sinks or sources . [ figure caption : the right column represents graph nodes with . every overlapping set is empty for the lower graph , because all chordless circles are of length two . some points have the same coordinates in the flow representation , e.g. point d is a global , but not local , cd flow sink . ] each octant represents a different aspect of convergence - divergence relations in the network .these quantities bring us to the actual interpretation of edge convergence and divergence as a characterisation of signal flow on the nodes of a network . to make statements about the signal flow derived from the cd flow , we have to make an inversion of properties , as nodes which behave as a sink of convergence actually inject information into the network , thus they are sources of signal . respectively , cd sources are sinks of signal . assuming this interpretation we can extract useful information from the flow representation regarding the signal processing roles of nodes in the network . nodes which have incoming edges with cardinalities of the ( ) being larger than cardinalities of the ( ) , and outgoing edges with cardinalities of the ( ) being larger than cardinalities of the ( ) are , from the signal processing perspective , identified as sources of signals .the combination of divergent input ( negative incoming cd sum ) and convergent output ( positive outgoing cd sum ) is , considering the signal flow , equivalent to absorption of signals in the network .this is represented in the top left quadrant of the plane . conversely , the combination of convergent input and divergent output corresponds to the source characteristics of the nodes ( bottom right quadrant of the plane ) .the top right and bottom left quadrants can be interpreted as a display of _ signed _ relay characteristics of the nodes .nodes which have incoming edges with cardinalities of the ( ) being larger than cardinalities of the ( ) , and outgoing edges with cardinalities of the ( ) being larger than cardinalities of the ( ) , are called negative ( positive ) router nodes .at the same time routing characteristics can be read from the top right and bottom left quadrants : router nodes _ redistribute _ incoming cd of a given sign to outgoing cd of the _ same _ sign .additional information is obtained from the coordinate , which gives the average overlap of incoming and , respectively , outgoing edges .this quantity identifies the degree of a node 's participation in signal circulation in the network , a property typically associated with control circuits .the graphical presentation of a network is not unique , e.g. isomorphic graphs may look totally different , the petersen graph being a typical example .community structure is not unique either , and the grouping of points , and thus the presentation of a network , can be achieved in a multitude of ways .
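the condensation of edge measures to nodes described above can be sketched as follows . the code is schematic : the exact normalising factors of the original ( elided in the text ) are replaced by plain degree averages , and the variable names are ours .

def node_flow_coordinates(edges, in_degree, out_degree):
    """condense edge-level cd / overlap values to node-level flow coordinates.

    `edges` is a list of (tail, head, cd, overlap) tuples.  for every node the
    negative and positive incoming / outgoing cd values and the incoming /
    outgoing overlaps are summed and divided by the corresponding node degree.
    """
    keys = ("cd_in_neg", "cd_in_pos", "cd_out_neg", "cd_out_pos", "ovl_in", "ovl_out")
    nodes = {v: dict.fromkeys(keys, 0.0) for v in set(in_degree) | set(out_degree)}
    for tail, head, cd, ovl in edges:
        side = "neg" if cd < 0 else "pos"
        nodes[head]["cd_in_" + side] += cd
        nodes[tail]["cd_out_" + side] += cd
        nodes[head]["ovl_in"] += ovl
        nodes[tail]["ovl_out"] += ovl
    for v, d in nodes.items():
        din = max(in_degree.get(v, 0), 1)
        dout = max(out_degree.get(v, 0), 1)
        for k in keys:
            # incoming quantities are averaged over the in-degree, outgoing over the out-degree
            d[k] /= din if ("_in_" in k or k.endswith("_in")) else dout
        # net cd flow through the node: a markedly positive value marks a sink of the
        # cd flow, which by the inversion discussed above is a source of signal
        d["cd_flow"] = (d["cd_in_pos"] + d["cd_in_neg"]) - (d["cd_out_pos"] + d["cd_out_neg"])
    return nodes

plotting the incoming against the outgoing node - reduced cd values then reproduces the quadrant picture discussed above : points in the bottom right quadrant act as sources of the signal flow , points in the top left quadrant as sinks , and the overlap coordinates indicate a node 's participation in circuits .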
yet , the flow representation of a network is _ unique _ , though due to possible symmetries it may have a significant amount of redundancy .this 3d plot of the network is unique in the sense that there is no arbitrariness in the position of the points in the three dimensional space .the flow representation can be considered as a network fingerprint since isomorphic graphs are mapped to the same plot , and differences between flow representations can be attributed to structural and functional properties of the network .if all edges are reciprocal or the graph is undirected , the flow representation of the network shrinks to a single point .the same argument applies to all graphs in which some nodes can not be distinguished due to symmetries .more precisely , nodes in the orbit of an element generated by the automorphism group of the graph are represented with the same point on the flow representation , as all the value of -s are constants on the orbits generated by the automorphism group of the graph .usefulness and application of the flow representation will be illustrated in the analysis of the real - world networks in section [ sec : real_net ] .we calculate cd - s for three model networks and analyse cd - s of four real - world networks . in this sectionwe analyse functional clusters in real - world networks and the statistical properties of their interconnection .we analysed two biological and two artificial networks : macaque visuo - tactile cortex , signal - transduction network of a ca1 neuron , the call graph of the linux kernel version 2.6.12-rc2 , and for comparison purposes the street network of rome .nodes and edges are defined as follows : in the macaque cortex nodes are cortical areas and edges are cortical fibres , in the signal - transduction network nodes are reactants and edges are chemical reactions , in the call graph nodes are functions and edges are function calls , in the street network the nodes are intersections between roads and edges correspond to roads or road segments .the first three networks perform computational tasks , linux kernel manages the possibly scarce computational resources , signal - transduction network can be considered as the operating system of a cell , while cortex is an ubiquitous example of a system which simultaneously performs many computationally complex tasks .the street network is an oriented transportation network , which has a rich structure , as its elements have traffic regulating roles .the call graph of the linux kernel was constructed in the following way .we created the call graph of the kernel source which included the smallest number of components necessary to ensure functionality .the call graph was constructed using the codeviz software , but it was not identical to the actual network of the functions calling each other , because the software detects only calls that are coded in the source and not the calls only realized during runtime .the resulting call graph had more than vertices .as we wanted to perform clustering and statistical tests , the original data was prohibitively large , therefore we applied a community clustering algorithm to create vertex groups .we generated a new graph in which the vertices represented the communities of the original call graph and have added edges between vertices representing communities whenever the original nodes in the communities were connected by any number of edges .definition of the call graph nodes and their connections is analogous to the nodes and connections of the cortical 
network , as millions of neurons form a cortical area , and two areas are considered to be connected if a relatively small number of neurons in one area is connected to a small number of neurons in another area .the call graph of the linux kernel will be discussed in section [ sec : aggreg_netw ] . [ figure caption : components of the flow are shown in the left column , components of the average are shown in the right column . displayed are : erdős - rényi graph ( row a ) , macaque visuo - tactile cortex ( row b ) and signal - transduction ( row c ) . relative overlap flow is indicated by colour intensity . ] the flow representations of two real - world networks are shown in figure [ fig:3lepke ] and , for comparison , in part a , the erdős - rényi network .we can identify the most important nodes and some general features of the networks as follows .part b refers to the macaque visuo - tactile cortex .it is characterised by the alignment of the nodes along a straight line along the main diagonal , a hyperbolic - like pattern in the first and third quadrants showing reverse ordering in the opposite quadrants , and the absence of routers , which indicates a hierarchical organisation . in part c one can see the signal - transduction network of a hippocampal neuron . in the signal - transduction network of the hippocampal neurons , the molecules with the most negative cd flow are involved , among other functions , in the regulation of key participants of the signal transduction cascade such as the camp second messengers .molecules with large positive cd flow play a role in cell survival and differentiation , as well as apoptosis .router - like proteins are involved in diverse functions , notably the regulation of synaptic transmission in addition to those mentioned above .however , it should be noted that partly because of the paucity of our knowledge about many of the components of this network , as well as because of redundancy , i.e. overlapping functionality , we could give here only a very superficial classification .all edges of the signal transduction network fall in one of three classes : excitatory , inhibitory and neutral .cd and overlap data were unrelated to the inhibitory , excitatory or neutral nature of network edges .empirical distributions of cd - s and overlaps were alike for each edge class , see figure [ fig : stn_edges ] in the appendix .we have analysed the flow representations in order to identify different features of signal processing .network nodes are points represented in the 6d space of the flow representation , and in order to identify different signal processing , transmitting and controlling groups of nodes we performed clustering using gaussian mixtures and the bayesian information criterion implemented in r .we wish to stress that the clustering we performed is not a form of community detection , but a grouping of nodes with respect to their functional signal processing properties .community detection can identify dense substructures , but it provides no information about the nature of signal processing , transmission or control . in each network we determined local and global , total and average signal processing clusters , determined their properties , and analysed the nature of cd - s and relative overlaps within and between clusters .clustering of nodes with respect to their functional properties resulted in contingency tables , with clusters being the labels of the contingency table , and entries in the contingency table being the numbers of edges within and between the respective clusters .
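a schematic version of this clustering step is sketched below . the original analysis used a gaussian mixture / bayesian information criterion implementation in r ; scikit-learn is used here purely as a stand - in , node ids are assumed to be 0 .. n-1 in the same order as the rows of the feature matrix , and the function names are ours .

import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_flow_features(features, max_components=8, seed=0):
    """cluster nodes in the 6d flow-representation space with a gaussian
    mixture, choosing the number of components by the bic."""
    X = np.asarray(features, dtype=float)        # shape (n_nodes, 6)
    fits = [GaussianMixture(n_components=k, random_state=seed).fit(X)
            for k in range(1, max_components + 1)]
    best = min(fits, key=lambda m: m.bic(X))
    return best.predict(X), best.n_components

def edge_contingency_table(edges, labels):
    """count edges within and between the functional clusters; `edges` is a
    list of (tail, head) pairs and labels[v] is the cluster of node v."""
    k = int(max(labels)) + 1
    table = np.zeros((k, k), dtype=int)
    for tail, head in edges:
        table[labels[tail], labels[head]] += 1
    return table

the monte carlo fisher test of the next step corresponds to r 's fisher.test with simulate.p.value = true ; a permutation test on the resulting table would be a straightforward substitute in python .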
to estimate the randomness of the contingency tables we performed a monte carlo implementation of the two - sided fisher 's exact test .the number of replicates used in the monte carlo test was in each case .fisher 's exact test characterises the result of the clustering procedure : it quantifies how much the distributions of edges within and between clusters differ .we summarise the results in table [ tab : eft ] . for comparison purposes benchmark graphs were generated using algorithms described in . [ table caption : number of functional clusters ( ) and the corresponding -values calculated using fisher 's exact test of the contingency tables . q denotes the modularity of the community structure . two numbers in a single cell denote the first two moments derived from a sample size of 100 graph instances . networks are denoted as follows : vtc - macaque visuo - tactile cortex , stn - signal - transduction network of the hippocampal ca3 neuron , kernel - call - graph of the linux kernel , rome - rome street network , er - erdős - rényi graphs and bench - benchmark graphs . numbers were rounded to minimise the table size . definitions of aggregated networks are given in section [ sec : aggreg_netw ] . table body omitted . ] [ tab : eft_2 ] empirical distributions of cd - s and relative overlaps over the excitatory , inhibitory and neutral edge classes in the signal transduction network are shown in figure [ fig : stn_edges ] .the authors are grateful to tamás nepusz , lászló ketskeméty , lászló zalányi , balázs ujfalussy , gergő orbán and zoltán somogyvári for useful discussions . | confining an answer to the question whether and how the coherent operation of network elements is determined by the network structure is the topic of our work .
we map the structure of signal flow in directed networks by analysing the degree of edge convergence and the overlap between the in- and output sets of an edge . definitions of convergence degree and overlap are based on shortest paths , thus they encapsulate global network properties . using the defining notions of convergence degree and overlapping set we clarify the meaning of network causality and demonstrate the crucial role of chordless circles . in real - world networks the flow representation distinguishes nodes according to their signal transmitting , processing and control properties . the analysis of real - world networks in terms of the flow representation was in accordance with the known functional properties of the network nodes . it is shown that nodes with different signal processing , transmitting and control properties are randomly connected at the global scale , while local connectivity patterns depart from randomness . grouping network nodes according to their signal flow properties was unrelated to the network 's community structure . we present evidence that signal flow properties of small - world - like , real - world networks cannot be reconstructed by algorithms used to generate small - world networks . convergence degree values were calculated for regular oriented trees , and the probability density function of the convergence degree for networks grown with the preferential attachment mechanism . for erdős - rényi graphs we calculated both the probability density function of convergence degrees and that of overlaps . |
when considering the dispersion of dielectric media , text book analysis typically concerns itself with sums of lorentzian functions . while it can be argued that sums of lorentzians are physically reasonable response functions for a number of systems , in particular those described by the lorentz - model , it is well known that the only restrictions imposed by causality are those implied by the kramers - kronig relations . in light of the variety in the electromagnetic responses offered by natural media and metamaterials ,it is of interest to consider the possible gap between the function space consisting of sums of lorentzians , and the space consisting of functions satisfying the kramers - kronig relations . to this end, it is here demonstrated that any complex - valued function satisfying the kramers - kronig relations can be approximated as a superposition of lorentzian functions , to any desired accuracy .it therefore follows that the typical analysis of causal behavior in terms of lorentzian functions for dielectric or magnetic media encompasses the whole space of functions obeying the kramers - kronig relations .these results therefore serve to strengthen the generality of the typical analysis of causality .two examples where the response functions do not resemble typical lorentzian resonance behavior shall here be expressed as lorentzian superpositions in order to demonstrate the above findings .section [ sec : nies ] considers a steep response function which results in a susceptibility for arbitrarily low loss or gain , and sec .[ sec : perfectlense ] considers an optimal perfect lens response over a bandwidth .the precisions in both superpositions are shown to become arbitrarily accurate as the parameters are chosen appropriately .a natural consequence of such superpositioning is that lorentzian functions can be viewed as general building blocks for engineering causal susceptibilities in metamaterials .considering that systems such as the pioneering split - ring resonator implementation , and others , have demonstrated several ways of realizing and tailoring lorentzian responses , this may prove to be a promising approach .while literature has so far tended to focus on specific metamaterial designs , a number of desired responses have emerged for which few physically viable systems are known .one such set of response functions are those with desired dispersion properties which are relevant for applications such as dispersion compensation , couplers , antenna design , filters , broadband absorption and broadband ultra - low refractive index media .this leads to the following question : starting with a target response , how can one realize an approximation of it ? towards this end , it has been proposed to use layered metamaterials .our article considers more generally the possibility of engineering artificial response functions through the realization of a finite number of lorentzians ; a method which may be applicable to a variety of metamaterials . 
on this note sec .[ sec : tailorsplitring ] addresses how lorentzian superposition responses can be realized through the arrangement of split ring cylinders of different radii and material parameters .finally sec .[ sec : errorestimate ] derives an estimate of the error that arises when a target response is approximated by a finite sum of lorentzian functions .the following section will set out the main results of this article while leaving detailed calculations to later sections and appendices .a lorentzian function can be written in the form where is the resonance frequency , is the frequency , and is the bandwidth . it may be demonstrated that the imaginary part of approaches a sum of two dirac delta - functions with odd symmetry as when .this is exemplified in fig .[ fig : lorconvergence ] and proven in appendix [ sec : deltaconv ] . a goal function , such as the imaginary part of a susceptibility , may therefore be expressed . this limit integral expression may then be approximated by the sum , where , is the resolution along the integration variable , and is a large integer . [ figure caption : the imaginary part of a lorentzian approaching a sum of two delta - functions for decreasing values of . ] this sum , which shall be designated , can be made to approximate to any desired degree of accuracy .more precisely , the involved error is shown to converge to zero in both and : where is chosen suitably , e.g. ( appendix [ sec : l2conv ] ) .considering , one may define a function where now both real and imaginary parts of the lorentzians are superposed . from the kramers - kronig relations one then has where represents the hilbert transform and the real part of . since the hilbert transform preserves the energy , or -norm , it therefore follows that when given . on the basis of this , it follows that , meaning that both real and imaginary parts of are approximated by the summation of lorentzians .the terms are weighted by at each resonance frequency according to . combined with , therefore becomes the central result of this article .noting that is itself a riemann sum , it follows that the limit integral expression may be written on the basis of the preceding arguments . therefore approximate the space of functions satisfying the kramers - kronig relations as superpositions of lorentzians to any degree of precision . in the following section this result shall be demonstrated with two examples .the validity of requires that is analytic on the real axis ( not only in the upper half - plane ) . in the event of non - analytic susceptibilities on the real axis , however , all problems are bypassed by instead evaluating along the line before approximating by lorentzians .here is an arbitrarily small parameter . since is analytic there , is valid .furthermore , since almost everywhere as , the representation can be made arbitrarily accurate , meaning that any can be approximated to any precision . in fact , this also includes media that do not strictly obey the kramers - kronig relationship due to singularities on the real axis , such as the ideal plasma .it is possible to achieve with arbitrarily low loss or gain .consider a susceptibility with . as a result of the infinite steepness at , the hilbert transform gives .it follows that it is possible to scale to make the magnitude arbitrarily small for all frequencies while maintaining .inserting as in gives . one can show that as , the imaginary part of yields .
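a minimal numerical sketch of the finite superposition described above is given below . the weighting of the terms is our reading of the elided formulas and should be checked against the original ; all names are ours .

import numpy as np

def lorentzian_superposition(omega, im_chi_target, omega_max, n_terms, gamma):
    """approximate a causal susceptibility from its imaginary part by a finite
    sum of lorentzians.

    resonances are placed on the grid omega_j = j * d_omega, j = 1..n_terms, and
    the j-th term omega_j / (omega_j**2 - omega**2 - 1j*gamma*omega) is weighted
    by (2/pi) * d_omega * im_chi_target(omega_j); in the limit of small gamma and
    small d_omega the imaginary part of the sum tends to the target, and the real
    part follows automatically since every term is causal.
    """
    d_omega = omega_max / n_terms
    omega_j = d_omega * np.arange(1, n_terms + 1)
    weights = (2.0 / np.pi) * d_omega * im_chi_target(omega_j)
    # rows: evaluation frequencies, columns: resonance terms
    terms = omega_j / (omega_j**2 - omega[:, None]**2 - 1j * gamma * omega[:, None])
    return terms @ weights

# steep target similar to the first example in the text: a small, constant
# imaginary part that is switched off above a cut-off frequency
im_target = lambda w: 0.05 * (w < 1.0)
omega = np.linspace(0.0, 2.0, 2001)
chi = lorentzian_superposition(omega, im_target, omega_max=2.0, n_terms=2000, gamma=1e-3)
print(chi[np.abs(omega - 1.05).argmin()])   # real part is negative just above the cut-off

decreasing gamma and the grid spacing together drives both real and imaginary parts of the sum towards the target , in line with the discussion above .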
using one may likewise approximate the response as a finite sum of lorentzians : fig .[ fig : nies ] plots the real and imaginary parts of as approximated by both sum and integral expressions , and respectively , where is chosen equal to , , and , and where .one observes that the sum falls in line with the integral result . for one has . as , meaning that the drop in the approximated curves of at becomes infinitely steep , one has that in both cases .the imaginary parts for one out of every five lorentzians in the sum are displayed in fig .[ fig : summedlor ] for . it has been shown in that the resolution , , for a metamaterial lens of thickness surrounded by vacuum , is given by for a one dimensional image object . here , is either the electric or magnetic susceptibility depending on the polarization of the incident field .it follows that any perfect lens should approximate over a bandwidth .a system approaching such an optimum is displayed in fig .[ fig : perflens ] .there has been placed a strong lorentzian resonance at ( out of view ) and a weak , slowly varying function around .taking the absolute value gives the black dashed curve in fig .[ fig : absperflenssum ] , which reveals that remains small and constant over the bandwidth . fig .[ fig : perflenssum ] represents a sum of lorentzians over the interval where and . the goal function for has been found by subtracting the strong resonance situated at from in fig .[ fig : perflens ] .the strong resonance has then later been added to the sum of lorentzians .the absolute values of the real and imaginary parts in fig .[ fig : perflenssum ] correspond with the green solid curve in fig .[ fig : absperflenssum ] , which neatly follows the dashed curve of . also observed in fig .[ fig : absperflenssum ] is the corresponding result of another sum approximation where the lorentzians are wider .hence one observes the trend that as decreases the lorentzian sum approximates increasingly well . fig . [ fig : perflenssumcomp ] displays the imaginary parts for one out of every five lorentzians in the sum corresponding with fig . [ fig : perflenssum ] .sections [ sec : nies ] and [ sec : perfectlense ] have demonstrated that useful responses which do not arise in any natural systems can nevertheless be approximated as a superposition of ordinary lorentzian resonances to any desired precision . considering that lorentzian resonances can be realized and tailored for both permittivities and permeabilities in numerous metamaterial realizations , this is of particular interest for the prospect of engineering desired metamaterial responses . for instance , in the case of an array of split ring cylinders one may find for the effective permeability : where is the capacitance per in the split ring cylinder , is its radius , is the resistance per circumference - length ratio and is the fractional volume of the unit cell occupied by the interior of the cylinder . for small bandwidths the presence of in the numerator is not significant and approximates a lorentzian response . by comparing it with one may determine the lorentzian parameters as . the resonator strength is expressed , where is the dimension of the unit cell .hence the resonance frequencies , widths , and strengths can be tailored by varying , , .it may be noted that split ring cylinders display large resistivity in the optical regime , making it difficult to achieve narrow lorentzian responses there .for this reason other systems , such as metamolecules of nanoparticles , have been proposed for optical purposes .
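as an illustration of the tailoring idea , the sketch below superposes the responses of several split ring cylinder species in one effective permeability . the specific split - ring expression of the text is elided , so a generic resonant form with a strength , a resonance frequency and a damping per species is used instead ; the plain superposition anticipates the property justified for long cylinders in the remainder of this section , and the parameter values are arbitrary .

import numpy as np

def mu_eff(omega, species):
    """effective permeability of a unit cell containing several split-ring
    cylinder species, treated as a plain superposition of their individual
    resonant responses.

    each species is a tuple (F, omega0, Gamma): filling fraction, resonance
    frequency and damping.  the textbook-style resonant form
    F * omega**2 / (omega**2 - omega0**2 + 1j*Gamma*omega) stands in for the
    elided split-ring expression of the text.
    """
    omega = np.asarray(omega, dtype=float)
    mu = np.ones_like(omega, dtype=complex)
    for F, omega0, Gamma in species:
        mu -= F * omega**2 / (omega**2 - omega0**2 + 1j * Gamma * omega)
    return mu

# two cylinder species with different radii would give different (F, omega0, Gamma)
omega = np.linspace(0.5, 2.0, 1501)
mu = mu_eff(omega, [(0.10, 1.00, 0.02), (0.05, 1.30, 0.02)])
print(mu[np.abs(omega - 1.0).argmin()])   # value near the first resonance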
in order to realize a sum of lorentz resonances such as those leading to fig .[ fig : summedlor ] and fig .[ fig : perflenssumcomp ] by means of split ring cylinders , one could propose to place different cylinders in each unit cell .the idea would be to realize and superpose a number of different lorentzian resonances of the form corresponding with different values of , , .the density of each type of split ring cylinder can then be thought to give the appropriate resonance strength .however , as it is known that two dipoles in close vicinity influence each other 's dipole moment , it is not intuitively clear that the total response will be as simple as a superposition of the individual responses . the remainder of this section will therefore be used to demonstrate this . towards this end , the procedure outlined in will be modified to calculate the effective permeability of a split ring cylinder metamaterial when every unit cell contains two split ring cylinders of different radii ( fig .[ fig : unitcell ] ) .the effective permeability is expressed by finding the effective macroscopic fields and over the array from the corresponding actual fields and in each unit cell : one may show that , where is the surface current per unit length along the circumference of the cylinders with radii and respectively . when finding an expression for the field inside each cylinder and , is used to express . since both split ring cylinders observe the same interaction field contained in , application of faraday 's law to each cylinder leads to two equations that may be solved individually for and : in order to evaluate , one may now rearrange to find an expression for in terms of and , for which one in turn can use to find . here and is the volume fraction occupied by each split ring cylinder . by comparison with one observes that the resulting here is simply the superposition of the individual split ring cylinder responses as found in .this comes as a consequence of the interaction field sensed by both split ring cylinders being uniform .this description of the interaction is valid as long as the cylinders are long , for which the returning field lines at the end of each cylinder are spread uniformly over the unit cell . finally , the analysis presented here is easily generalized to unit cells with many different split ring cylinders , for which remains valid and labels the different split ring cylinders in the unit cell . [ figure caption : unit cell containing two different cylinders . ] towards the goal of engineering a desired response by use of a finite number of lorentzians with non - zero widths ( ) , it is of interest to quantify the precision of such an approximation .an error estimate can be found by considering the combined error introduced in both the delta - function approximation of for finite , and the riemann sum approximation of the integral .the total error may therefore be bounded according to where represents without the limit and represents the sum . in what follows , bounds for the riemann sum approximation error and the delta - convergence error shall be derived in turn .defining , and naming the integrand in , one may find an upper bound on the riemann approximation error to be where in obtaining the last inequality in , it has been used that is dominated by the curvature of the lorentzian for sufficiently small .
after some algebra one finds when assuming that .note that can be rewritten . by taking the limit the expression becomes of the form , and hence the definition of delta convergence can be used to find the limit of : . it is observed that vanishes as . for the error involved in replacing by its limiting value becomes small .therefore , for most goal functions , where the sum covers the frequency interval of interest , the error described by is roughly as small as outside of this interval .the delta - convergence approximation error , which arises when using lorentzians of finite width ( ) rather than delta - functions in , shall now be evaluated .consider fig .[ fig : deltakonv ] : the imaginary part of an arbitrary response is to be approximated according to with finite , and the positive frequency peak of the imaginary part of a lorentzian is displayed .the lorentzian can be expanded in terms of partial fractions as : where corresponds to the positive and negative frequency peaks of the lorentzian respectively , according to : [ figure caption : the imaginary part of a response to be expressed as a superposition of finite - sized lorentzians . ] introducing into ( without the limit ) then allows the delta - convergence error to be expressed as . here , by having made an appropriate substitution , it is observed in the last line that the integral can be written in terms of the lorentz - cauchy function : . as a consequence of the substitution , a new function is defined in which essentially represents a shifted version of .the integration interval in may be divided into three intervals corresponding to the intervals designated in fig . [ fig : deltakonv ] , which shall then be evaluated separately : here the parameter is in principle arbitrary , but can be thought to represent the region around where is assumed to be taylor expandable in the lowest orders .evaluating integrals ( a ) and ( c ) in together , an upper bound on their sum may be found : . here it has been used that .one observes that as the -term can be expanded , allowing to be re - expressed as , which evidently converges to zero when as long as is bounded . in order to calculate ( b ) , is expanded around : . due to the parity , when inserting this into ( b ) only even order terms remain , giving . only -terms to the first order have been kept in arriving at this expression .the term corresponds to terms containing higher order derivatives of , which will be assumed to be negligible . in the event that there exist significant higher order derivatives ( e.g. as will be the case in fig .[ fig : example1 ] near the steep drop ) , one may keep more terms in going from to until the next even ordered derivative is negligible .the same steps that now follow may then be applied in order to find the relevant error estimate . in order to arrive at an expression for in terms of the goal function and its derivatives , the shifted function is expanded under the assumption that and then inserted for and its second derivative in . expanding under the assumption that permits further simplification .
combining the resulting expression with , yields an upper bound on the delta - convergence error , where arises after having expanded and replaced and its second derivative . minimizing the upper bound with respect to then gives . an inverse relationship between and is intuitive given that must be small when varies steeply .inserting into while neglecting gives . hence , the delta - convergence error is proportional to under the assumption that , the higher order derivatives are negligible , and . fig . [ fig : perflenserrorbound ] shows applied to the imaginary part of the perfect lens response displayed in fig .[ fig : perflenssum ] , where one observes that the actual error ( black curve ) is bounded by ( dashed curve ) .the displayed error is actually taken to be the total error of the superposition , rather than the difference between and . however , here in the sum has been reduced by a factor 2 as compared to in fig .[ fig : perflenssum ] , in order to reduce the significance of the riemann sum error in comparison to the delta - convergence approximation error . [ figure caption : the error made when approximating with a lorentzian superposition is shown to be bounded by . ] it has been demonstrated that superpositions of lorentzian functions are capable of approximating all complex - valued functions satisfying the kramers - kronig relations , to any desired precision .the typical text - book analysis of dielectric or magnetic dispersion response functions in terms of lorentzians is thereby extended to cover the whole class of causal functions .the discussion started with approximating the imaginary part of an arbitrary susceptibility by a superposition of imaginary parts of lorentzian functions .the error was then shown to become arbitrarily small as , and . since this error was shown to vanish in it was known that . from this the superposition was found , which approximates to any desired precision .the superposition has been demonstrated to reconstruct two response functions that obey the kramers - kronig relations while not resembling typical lorentzian resonance behavior .the first response is known to lead to significant negative real values of the susceptibility in a narrow bandwidth with arbitrarily low gain or loss , while the other response represents the optimum realization of a perfect lens on a bandwidth .these examples demonstrate the possibility of viewing lorentzians as useful building - blocks for manufacturing desired responses that are not found in conventional materials . to this end , the ability to realize lorentzian sums using split ring cylinders with varying parameters was shown . in order to quantify the precision of a superposition approximation , error estimates have been derived .this section will prove and , equivalently , . in doing so , observe that an implicit definition of the delta - function is . note that the last line holds only insofar as the goal function obeys odd symmetry . by inserting into and using the symmetry , one observes that the goal of this section becomes to demonstrate that inserting for in the integral and making the appropriate substitution leads to : one may then move the limit inside the integral under the criteria of lebesgue 's dominated convergence theorem .
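the error analysis above can be probed numerically with the crude check below : the superposition of the earlier sketch is compared with a smooth toy goal function for a few values of the lorentzian width and of the grid spacing . this is not an evaluation of the derived bound itself ( whose explicit form is elided here ) , only an illustration that the total error shrinks as both parameters are reduced ; all names are ours .

import numpy as np

def sup_error(im_target, omega_max, n_terms, gamma, n_eval=2000):
    """largest deviation of the imaginary part of the lorentzian superposition
    from the target on a grid away from the end points of the band."""
    omega = np.linspace(0.05 * omega_max, 0.95 * omega_max, n_eval)
    d_omega = omega_max / n_terms
    omega_j = d_omega * np.arange(1, n_terms + 1)
    weights = (2.0 / np.pi) * d_omega * im_target(omega_j)
    terms = omega_j / (omega_j**2 - omega[:, None]**2 - 1j * gamma * omega[:, None])
    return np.max(np.abs((terms @ weights).imag - im_target(omega)))

im_target = lambda w: 1.0 / (1.0 + w**2)          # smooth toy goal function
for gamma, n_terms in [(0.2, 500), (0.1, 1000), (0.05, 2000)]:
    print(gamma, n_terms, sup_error(im_target, omega_max=20.0, n_terms=n_terms, gamma=gamma))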
as the function converges pointwise to .furthermore , it is clear that an integrable function that bounds the integrand for all values of exists under the condition that is bounded .hence , gives which concludes the proof .this section considers the conditions upon , and needed in order to obtain the convergence expressed in . from the discussion in sec .[ sec : errorestimate ] , one may express : the riemann sum error is expressed through and , and the delta - convergence error from and is expressed in , and .the term in is included due to having expanded and replaced and its second derivative so as to express in terms of and its second derivative .the bound given by - has been derived under the assumption that . before proceeding to evaluate the norm by these, one must therefore demonstrate that is bounded in a region where ( representing the region not accounted for by - ) , so that as and then , it is known that the norm converges to zero also here .a detailed proof of this is will not be given here , but its result can be understood by noting that , when without its limit , is bounded by an integral of on the interval multiplied with . since the integral is finite and is analytic , it follows that is bounded . furthermore , since the sum can be made arbitrarily close to , it is intuitive that it also remains bounded for all . since and are bounded it follows that is finite for all .proceeding now to evaluate the norm from - , convergence for is achieved by setting and taking the limit , since is in ( the presence of the term will be discussed below ) .the same convergence occurs in as .the norm arising for converges provided that is in .since it however is conceivable for a function in to have derivatives not in , one may instead consider along the line , where is an arbitrarily small parameter .one then finds through the fourier transform \text{e}^{i\omega t } { \text{d}}t , \label{eq : fourierintegral}\end{aligned}\ ] ] given that is the time - domain response associated with .reveals that one may express since is in one observes that is in for . knowing that the fourier transform preserves the norm, it follows that is in . the terms and in and involve multiples of either or its derivatives . from itis clear that derivatives of all orders are assured to be in by the above procedure .hence these terms vanish as .note that in the derivation of , derivatives greater than the second order of have been neglected , as discussed with regards to .if these were to be included here , they would lead to terms of the same form as and would converge in the same manner .considering now , one observes that by using one may express this as : to find the limit of the norm , the task here is therefore to evaluate the limit inside the outermost integral is permitted through lebesgue s dominated convergence theorem , under the condition that is bounded . 
in the event that is singular on the real axis , one may instead use ( where is an arbitrarily small parameter ) for which any divergence is quenched , as discussed in the introduction .this gives when having used .the final result reveals that the norm is finite , and that one must however demand in order that converges to zero .the remaining task is to evaluate through the limit . under the same condition as before , one may move the limit inside the outermost integral through lebesgue 's dominated convergence theorem : here the delta - convergence property of the lorentz - cauchy function has been used . | we prove that all functions obeying the kramers - kronig relations can be approximated as superpositions of lorentzian functions , to any precision . as a result , the typical text - book analysis of dielectric dispersion response functions in terms of lorentzians may be viewed as encompassing the whole class of causal functions . a further consequence is that lorentzian resonances may be viewed as possible building blocks for engineering any desired metamaterial response , for example by use of split ring resonators of different parameters . two example functions , far from typical lorentzian resonance behavior , are expressed in terms of lorentzian superpositions : a steep dispersion medium that achieves large negative susceptibility with arbitrarily low loss / gain , and an optimal realization of a perfect lens over a bandwidth . error bounds are derived for the approximation . |
statistical models can be broadly classified into _parametric _ and _ nonparametric _ models .parametric models , indexed by a finite dimensional set of parameters , are focused , easy to analyse and have the big advantage that when correctly specified , they will be very efficient and powerful .however , they can be sensitive to misspecifications and even mild deviations of the data from the assumed parametric model can lead to unreliabilities of inference procedures .nonparametric models , on the other hand , do not rely on data belonging to any particular family of distributions . as they make fewer assumptions ,their applicability is much wider than that of corresponding parametric methods .however , this generally comes at the cost of reduced efficiency compared to parametric models .standard time series literature is dominated by parametric models like autoregressive integrated moving average models , the more recent autoregressive conditional heteroskedasticity models for time - varying volatility , state - space , and markov switching models . in particular , bayesian time series analysis is inherently parametric in that a completely specified likelihood function is needed .nonetheless , the use of nonparametric techniques has a long tradition in time series analysis . introduced the periodogram which may be regarded as the origin of spectral analysis and a classical nonparametric tool for time series analysis ._ frequentist _ time series analyses especially use nonparametric methods including a variety of bootstrap methods , computer - intensive resampling techniques initially introduced for independent data , that have been taylored to and specifically developed for time series . an important class of nonparametric methods is based on frequency domain techniques , most prominently smoothing the periodogram .these include a variety of frequency domain bootstrap methods strongly related to the whittle likelihood and found important applications in a variety of disciplines .despite the fact that _ nonparametric bayesian _ inference has been rapidly expanding over the last decade , as reviewed by , , and , only very few nonparametric bayesian approaches to time series analysis have been developed .most notably , , , , , , and used the whittle likelihood for bayesian modelling of the spectral density as the main nonparametric characteristic of stationary time series .the whittle likelihood is an approximation of the true likelihood .it is exact only for gaussian white noise .however , even for non - gaussian stationary time series which are not completely specified by their first and second - order structure , the whittle likelihood results in asymptotically correct statistical inference in many situations .as shown in , the loss of efficiency of the nonparametric approach using the whittle likelihood can be substantial even in the gaussian case for small samples if the autocorrelation of the gaussian process is high .on the other hand , parametric methods are more powerful than nonparametric methods if the observed time series is close to the considered model class but fail if the model is misspecified . 
to exploit the advantages of both parametric and nonparametric approaches ,the autoregressive - aided periodogram bootstrap has been developed by within the frequentist bootstrap world of time series analysis .it fits a parametric working model to generate periodogram ordinates that mimic the essential features of the data and the weak dependence structure of the periodogram while a nonparametric correction is used to capture features not represented by the parametric fit .this has been extended in various ways ( see ) .its main underlying idea is a nonparametric correction of a parametric likelihood approximation .the parametric model is used as a proxy for rough shape of the autocorrelation structure as well as the dependency structure between periodogram ordinates .sensitivities with respect to the spectral density are mitigated through a nonparametric amendment .we propose to use a similar nonparametrically corrected likelihood approximation as a pseudo - likelihood in the bayesian framework to compute the pseudo - posterior distribution of the power spectral density ( psd ) and other parameters in time series models .this will yield a pseudo - likelihood that generalises the widely used whittle likelihood which , as we will show , can be regarded as a special case of a nonparametrically corrected likelihood under the gaussian i.i.d.working model .software implementing the methodology is available in the ` r ` package ` beyondwhittle ` , which is available on the comprehensive r archive network ( cran ) , see .the paper is structured as follows : in chapter [ sec : likelihood ] , we briefly revisit the whittle likelihood and demonstrate that it is a nonparametrically corrected likelihood , namely that of a gaussian i.i.d . working model .then , we extend this nonparametric correction to a general parametric working model .the corresponding pseudo - likelihood turns out to be equal to the true likelihood if the parametric working model is correctly specified but also still yields asymptotically unbiased periodogram ordinates if it is not correctly specified . in chapter[ sec : bayesian ] , we propose a bayesian nonparametric approach to estimating the spectral density using the pseudo - posterior distribution induced by the corrected likelihood of a fixed parametric model .we describe the gibbs sampling implementation for sampling from the pseudo - posterior .this nonparametric approach is based on the bernstein polynomial prior of and used to estimate the spectral density via the whittle likelihood in .we show posterior consistency of this approach and discuss how to incorporate the parametric working model in the bayesian inference procedure .chapter [ sec : simulations ] gives results from a simulation study , including case studies of sunspot data , and gravitational wave data from the sixth science run of the laser interferometric gravitational wave observatory ( ligo ) .this is followed by discussion in chapter [ sec : summary ] , which summarises the findings and points to directions for future work .the proofs , the details about the bayesian autoregressive sampler as well as some additional simulation results are deferred to the appendices [ sec : proofs ] [ sec_appendixsims ] .while the likelihood of a mean zero gaussian time series is completely characterised by its autocovariance function , its use for nonparametric frequentist inference is limited as it requires estimation in the space of positive definite covariance functions . 
similarly for nonparametric bayesian inference , it necessitates the specification of a prior on positive definite autocovariance functions which is a formidable task . a quick fix is to use parametric models such as arma models with data - dependent order selection , but these methods tend to produce biased results when the arma approximation to the underlying time series is poor .a preferable nonparametric route is to exploit the correspondence of the autocovariance function and the spectral density via the wiener - khinchin theorem and nonparametrically estimate the spectral density . to this end , defined a pseudo - likelihood , known as the whittle likelihood , that directly depends on the spectral density rather than the autocovariance function and that gives a good approximation to the true gaussian and certain non - gaussian likelihoods . in the following subsectionwe will revisit this approximate likelihood proposed by , before introducing a semiparametric approach which extends the whittle likelihood .assume that is a real zero mean stationary time series with absolutely summable autocovariance function .under these assumptions the spectral density of the time series exists and is given by the fourier transform ( ft ) of the autocovariance function consequently , there is a one - to - one - correspondence between the autocovariance function and the spectral density , and estimation of the spectral density is amenable to smoothing techniques .the idea behind these smoothing techniques is the following observation , which also gives rise to the so - called whittle approximation of the likelihood of a time series : consider the periodogram of , the periodogram is given by the squared modulus of the discrete fourier coefficients , the fourier transformed time series evaluated at fourier frequencies , for .it can be obtained by the following transformation : define for where and for even , is defined analogously . then , is an orthonormal matrix ( cf .e.g. , paragraph 10.1 ) .real- and imaginary parts of the discrete fourier coefficients are collected in the vector and the periodogram can be written as it is well known that the periodograms evaluated at two different fourier frequencies are asymptotically independent and have an asymptotic exponential distribution with mean equal to the spectral density , a statement that remains true for non - gaussian and even non - linear time series .similarly , the fourier coefficients are asymptotically independent and normally distributed with variances equal to times the spectral density at the corresponding frequency .this result gives rise to the following whittle approximation in the frequency domain by the likelihood of a gaussian vector with diagonal covariance matrix as explicitly shown in appendix [ sec : proofs ] , this yields the famous whittle likelihood in the time domain via the transformation theorem which provides an approximation of the true likelihood .it is exact only for gaussian white noise in which case .it has the advantage that it depends directly on the spectral density in contrast to the true likelihood that depends on indirectly via wiener - khinchin s theorem .sometimes , the summands corresponding to as well as ( the latter for even ) are omitted in the likelihood approximation .in fact , the term corresponding to contains the sample mean ( squared ) while the term corresponding to gives the alternating sample mean ( squared ) . 
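a small numerical sketch of the quantities just introduced is given below . normalisation conventions differ between references ; here the periodogram is taken as the squared modulus of the discrete fourier transform divided by 2 pi n , so that its mean is approximately the spectral density . function names are ours .

import numpy as np

def periodogram(x):
    """periodogram i(lambda_j) = |sum_t x_t exp(-i t lambda_j)|^2 / (2 pi n)
    at the fourier frequencies lambda_j = 2 pi j / n, j = 1, ..., floor((n-1)/2);
    with this normalisation e[i(lambda_j)] is approximately f(lambda_j)."""
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2.0 * np.pi * j / n
    return lam, np.abs(np.fft.fft(x)[j]) ** 2 / (2.0 * np.pi * n)

def whittle_loglik(x, spec):
    """whittle approximation to the log-likelihood: the periodogram ordinates
    are treated as independent exponentials with mean spec(lambda_j);
    additive constants are dropped."""
    lam, I = periodogram(x)
    f = spec(lam)
    return -np.sum(np.log(f) + I / f)

# example: whittle log-likelihood of a white noise series under an ar(1) spectral density
ar1_spec = lambda lam, phi=0.3, s2=1.0: s2 / (2 * np.pi) / np.abs(1 - phi * np.exp(-1j * lam)) ** 2
x = np.random.default_rng(1).standard_normal(256)
print(whittle_loglik(x, ar1_spec))

in the sketch the summands at frequency zero and at the nyquist frequency are excluded , since , as noted above , they involve the sample mean and the alternating sample mean .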
both have somewhat different statistical properties and usually need to be considered separately .furthermore , the first term is exactly zero if the methods are applied to time series that have been centered first , while the last one is approximately zero and asymptotically negligible ( refer also remark [ rem_firstlast ] ) . the density of under the i.i.d .standard gaussian working model is the whittle likelihood .it has two potential sources of approximation errors : the first one is the assumption of independence between fourier coefficients which holds only asymptotically but not exactly for a finite time series , the second one is the gaussianity assumption . in this paper , we restrict our attention to the first problem , extending the proposed methods to non - gaussian situations will be a focus of future work .in fact , the independence assumption leads to asymptotically consistent results for gaussian data .but even for gaussian data with relatively small sample sizes and relatively strong correlation the loss of efficiency of the nonparametric approach using the whittle likelihood can be substantial as shown in or by the simulation results of .the central idea in this work is to extend the whittle likelihood by proceeding from a certain parametric working model ( with mean 0 ) for rather than an i.i.d .standard gaussian working model before making a correction analogous to the whittle correction in the frequency domain .to this end , we start with some parametric likelihood in the time domain , such as e.g. obtained from an arma - model , that is believed to be a reasonable approximation to the true time series .we denote the spectral density that corresponds to this parametric working model by . if the model is misspecified , then this spectral density is also wrong and needs to be corrected to obtain the correct second - order dependence structure . to this end, we define a correction matrix this is analogous to the whittle correction in the previous section as , in particular , with as in .however , the corresponding periodogram ordinates are no longer independent under this likelihood but instead inherit the dependence structure from the original parametric model ( see proposition [ cor_31 ] c ) .such an approach in a bootstrap context has been proposed and successfully applied by using an ar( ) approximation .this concept of a nonparametric correction of a parametric time domain likelihood is illustrated in the schematic diagram : as a result we obtain the following nonparametrically corrected likelihood function under the parametric working model where denotes the parametric likelihood . [ rem_identifiability ] parametric models with a multiplicative scale parameter yield the same corrected likelihood as the one with , i.e. if is used as working model this leads to the same corrected likelihood for all .for instance , if the parametric model is given by i.i.d . random variables with arbitrary , then the correction also results in the whittle likelihood ( for a proof we refer to appendix [ sec : proofs ] ) .analogously , for linear models , , which includes the class of arma - models , the corrected likelihood is independent of . we can now prove the following proposition which shows two important things : first , the corrected likelihood is the exact likelihood in case the parametric model is correct .second , the periodograms associated with this likelihood are asymptotically unbiased for the true spectral density regardless of whether the parametric model is true . 
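the following sketch gives one concrete reading of this construction : the fourier coefficients of the data are rescaled by the square root of the ratio of the parametric to the nonparametric spectral density , the gaussian ar working - model likelihood is evaluated at the back - transformed series , and the log - jacobian of the rescaling is added . the matrices of the text are partly elided , so this is an assumption to be checked against the original ; the exact ar likelihood is also replaced by the likelihood conditional on the first p observations to keep the code short , and all function names are ours .

import numpy as np

def real_dft_matrix(n):
    """orthonormal real fourier basis: constant row, cos/sin pairs at the
    fourier frequencies, and an alternating row if n is even."""
    t = np.arange(n)
    rows = [np.ones(n) / np.sqrt(n)]
    for j in range(1, (n - 1) // 2 + 1):
        rows.append(np.sqrt(2.0 / n) * np.cos(2 * np.pi * j * t / n))
        rows.append(np.sqrt(2.0 / n) * np.sin(2 * np.pi * j * t / n))
    if n % 2 == 0:
        rows.append((-1.0) ** t / np.sqrt(n))
    return np.array(rows)

def row_frequencies(n):
    """frequency associated with each row of the real fourier basis."""
    freqs = [0.0]
    for j in range(1, (n - 1) // 2 + 1):
        freqs += [2 * np.pi * j / n] * 2
    if n % 2 == 0:
        freqs.append(np.pi)
    return np.array(freqs)

def ar_spec(lam, ar_coefs, sigma2):
    """spectral density of a causal ar model with coefficients ar_coefs."""
    k = np.arange(1, len(ar_coefs) + 1)
    transfer = 1.0 - np.exp(-1j * np.outer(lam, k)) @ np.asarray(ar_coefs)
    return sigma2 / (2 * np.pi) / np.abs(transfer) ** 2

def ar_conditional_loglik(y, ar_coefs, sigma2):
    """gaussian ar log-likelihood conditional on the first p observations."""
    p = len(ar_coefs)
    resid = y[p:] - sum(a * y[p - k:len(y) - k] for k, a in enumerate(ar_coefs, 1))
    return -0.5 * len(resid) * np.log(2 * np.pi * sigma2) - 0.5 * np.sum(resid ** 2) / sigma2

def corrected_loglik(x, f_nonpar, ar_coefs, sigma2=1.0):
    """nonparametrically corrected ar log-likelihood (schematic)."""
    n = len(x)
    F = real_dft_matrix(n)
    lam = row_frequencies(n)
    d = np.sqrt(ar_spec(lam, ar_coefs, sigma2) / f_nonpar(lam))   # rescaling factors
    y = F.T @ (d * (F @ x))            # series carrying the parametric spectral shape
    return ar_conditional_loglik(y, ar_coefs, sigma2) + np.sum(np.log(d))

# toy usage: ar(1) working model and a flat "nonparametric" spectral density
x = np.random.default_rng(0).standard_normal(128)
flat = lambda lam: np.full_like(lam, 1.0 / (2 * np.pi))
print(corrected_loglik(x, flat, ar_coefs=[0.5]))

with an i.i.d . standard gaussian working model the same recipe collapses to the whittle likelihood up to constants , which is the special case discussed above .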
[ cor_31 ] let be a real zero mean stationary time series with absolutely summable autocovariance function and let for be the spectral density associated with the ( mean zero ) parametric model used for the correction . 1 . if , then .the periodogram associated with the corrected likelihood is asymptotically unbiased for the true spectral density , i.e. where the convergence is uniform in .furthermore , the proof shows that the vector of periodograms under the corrected likelihood has exactly the same distributional properties as the vector of the periodograms under the parametric likelihood multiplied with .hence , asymptotic properties as the ones derived in theorem 10.3.2 in carry over with the appropriate multiplicative correction . in the remainder of the paperwe describe how to make use of this nonparametric correction in a bayesian set - up .to illustrate the bayesian semiparametric approach and how to sample from the pseudo - posterior distribution , in the following we restrict our attention to an ar( ) model as our parametric working model for the time series , i.e. , where are i.i.d .n( random variables with density denoted by .note that without loss of generality , , cf .remark [ rem_identifiability ] .this yields the parametric likelihood of our working model , depending on the order and on the coefficients : with spectral density we assume the time series to be stationary and causal a priori .thus , is restricted such that has no zeros inside the closed unit disc , c.f .theorem 3.1.1 . in . fornow , we assume that the parameters of the parametric working model are fixed ( and in practice set to bayesian point estimates obtained from a preceding parametric estimation step ) .an extension to combine the estimation of the parametric model with the nonparametric correction will be detailed later in section [ sec_priorparametric ] . for a bayesian analysis usingeither the whittle or nonparametrically corrected likelihood , we need to specify a nonparametric prior distribution for the spectral density . herewe employ the approach by which is essentially based on the bernstein polynomial prior of as a nonparametric prior for a probability density on the unit interval .we briefly describe the prior specification and refer to for further details .in contrast to the approach in , we do not specify a nonparametric prior distribution for the spectral density , but for a pre - whitened version thereof , incorporating the spectral density of the parametric working model into the estimation .to elaborate , for , consider the _ eta - damped correction function _ this corresponds to a reparametrization of the likelihood by replacing with .[ rem_etacorrection ] the parameter models the confidence in the parametric model : if is close to and the model is well - specified , then will be much smoother than the original spectral density , since already captures the prominent spectral peaks of the data very well . as a consequence ,nonparametric estimation of should involve less effort than nonparametric estimation of itself .this remains true in the misspecified case , as long as the parametric model does describe the essential features of the data sufficiently well in the sense that it captures at least the more prominent peaks .however , it is possible that the parametric model introduces erroneous spectral peaks if the model is misspecified . 
in that case, close to zero ensures a damping of the model misspecification , such that nonparametric estimation of should involve less effort than nonparametric estimation of .the choice of will be detailed in section [ sec_priorparametric ] , but for now , is assumed fixed .we reparametrise to a density function on ] where =g(v)-g(u) ] and exists for . by an application of the markov inequality , we get for all =e^{-n [ -zx+\frac 1 2 \log(1 + 2z)]}.\end{aligned}\ ] ] the function attains its maximum at and as for . thus ,setting , we obtain similarly , under , we get since ^{-1/2})=\mbox{det}(d_n^{1/2}(f_1/f_0)-\lambda i_n)\end{aligned}\ ] ] the matrix has the eigenvalues , .since is a normal matrix ( recall that is symmetric positive definite ) , we find that has the eigenvalues , .consequently , consequently , analogously to the proof under the null hypothesis , we get for any now , , attains its maximum at with as for , i.e. . setting yields , completing the proof .we follow , proof of theorem 1 , and show ( c1 ) and ( c2 ) of theorem [ theorem : consistency - general ] above .let denote the corrected likelihood and define }|c_{0,\eta}(\lambda)| ] .an analogous argument as in appendix b.1 .of shows that for all the set has positive prior probability under the bernstein polynomial prior on .we need to show ( c1)(a ) and ( b ) . to this end , let where for we have . to prove ( c1 ) ,first note that where .for it holds as well as .furthermore , by analogous argument as in the proof of lemma [ lemma_b3 ] we get that the eigenvalues of are given by , , hence as a consequence , we get similar arguments yield \\ & = \frac{1}{2n^2}\sum_{i=1}^n\left ( \frac{c_{0,\eta}(\lambda_i)-c(\lambda_i)}{c(\lambda_i ) } \right)^2 = o(1/n)\;\|c - c_{0,\eta}\|_{\infty}^2 \to 0.\end{aligned}\ ] ] the proof can now be concluded as in by replacing lemma b.3 by lemma [ lemma_b3 ] above .for fixed order , the autoregressive model with i.i.d .n( ) random variables is parametrized by the the innovation variance and the partial autocorrelation structure , where is the conditional correlation between and given .note that and that there is a one - to - one relation between and .to elaborate , we follow and introduce the auxiliary variables as solutions of the following yule - walker - type equation with the autocorrelations /{\operatorname{e}}z_1 ^ 2 $ ] : as shown in , the well - known relationships readily imply and ( recall that for ) the recursive formula we specify the following prior assumptions for the model parameters : all a priori independent .we use .furthermore , we employ the gaussian likelihood with the n( ) density and the autocovariance matrix of the autoregressive model . to draw samples from the corresponding posterior distribution, we use a gibbs sampler , where the conjugate full conditional for is readily sampled . for the full conditional for is sampled with the metropolis algorithm using a normal random walk proposal density with proposal variance . as for the parameters of the working model within the corrected parametric approach ( see section [ sec_priorparametric ] ) , the proposal variances are adjusted adaptively during the burn - in period , aiming for a respective acceptance rate of 0.44 .table 2 depicts further results of the ar , np and npc procedures for normal arma data . for white noise ,all procedures yield good results , whereas np is superior due to the implicitly well - specified white noise working model ( recall that the order within ar and npc are estimated with the dic ) . 
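Stepping back briefly to the sampler described above before continuing with the simulation results: the adaptive tuning (random-walk Metropolis proposals whose variances are adjusted during burn-in towards a 0.44 acceptance rate) can be sketched generically as follows. This is not the paper's sampler; the target below is a toy stand-in for the full conditionals of the partial autocorrelations, and all names, batch sizes, and step sizes are my own choices.

```python
import numpy as np

def adapt_scale(log_scale, accept_rate, target=0.44, step=0.1):
    """One adaptation step: raise the log proposal scale when the empirical
    acceptance rate exceeds the target, lower it otherwise."""
    return log_scale + step * (accept_rate - target)

def rw_metropolis(logpost, x0, n_iter=5000, burn_in=2000, batch=50, rng=None):
    """One-at-a-time random-walk Metropolis with per-coordinate proposal scales
    adapted in batches during burn-in, aiming at a 0.44 acceptance rate."""
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    log_scales = np.zeros_like(x)          # proposal std = exp(log_scale)
    accepts = np.zeros_like(x)
    lp = logpost(x)
    samples = []
    for it in range(n_iter):
        for j in range(len(x)):            # Metropolis-within-Gibbs sweep
            prop = x.copy()
            prop[j] += np.exp(log_scales[j]) * rng.standard_normal()
            lp_prop = logpost(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                x, lp = prop, lp_prop
                accepts[j] += 1
        if it < burn_in and (it + 1) % batch == 0:
            for j in range(len(x)):
                log_scales[j] = adapt_scale(log_scales[j], accepts[j] / batch)
            accepts[:] = 0
        if it >= burn_in:
            samples.append(x.copy())
    return np.array(samples)

# toy target standing in for a full conditional of the partial autocorrelations,
# restricted to (-1, 1) in each coordinate to respect causality
def toy_logpost(psi):
    if np.any(np.abs(psi) >= 1):
        return -np.inf
    return -0.5 * np.sum((psi - 0.3) ** 2) / 0.05

draws = rw_metropolis(toy_logpost, x0=[0.0, 0.0])
print(draws.mean(axis=0))
```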
the results for ma(2 )data are qualitatively similar to the results for ma(1 ) data in table 1 .the spectral peaks of the ar(2 ) model are not as strong as the spectral peak of the ar(1 ) model considered in table 1 .accordingly , it can be seen that the np results are better in this case .the npc benefits again from the well - specified parametric model , yielding results that are comparable to the ar procedure . | * abstract . * the whittle likelihood is widely used for bayesian nonparametric estimation of the spectral density of stationary time series . however , the loss of efficiency for non - gaussian time series can be substantial . on the other hand , parametric methods are more powerful if the model is well - specified , but may fail entirely otherwise . therefore , we suggest a nonparametric correction of a parametric likelihood taking advantage of the efficiency of parametric models while mitigating sensitivities through a nonparametric amendment . using a bernstein - dirichlet prior for the nonparametric spectral correction , we show posterior consistency and illustrate the performance of our procedure in a simulation study and with ligo gravitational wave data . |
recent years have witnessed much interest in cognitive radio systems due to their promise as a technology that enables systems to utilize the available spectrum much more effectively .this interest has resulted in a spur of research activity in the area . in , asghari andaissa , under constraints on the average interference caused at the licensed user over rayleigh fading channels , studied two adaptation policies at the secondary user s transmitter in a cognitive radio system one of which is variable power and the other is variable rate and power .they maximized the achievable rates under the above constraints and the bit error rate ( ber ) requirement in mqam modulation .the authors in derived the fading channel capacity of a secondary user subject to both average and peak received - power constraints at the primary receiver .in addition , they obtained optimum power allocation schemes for three different capacity notions , namely , ergodic , outage , and minimum - rate ._ in studied the performance of spectrum - sensing radios under channel fading .they showed that due to uncertainty resulting from fading , local signal processing alone may not be adequate to meet the performance requirements .therefore , to remedy this uncertainty they also focused on the cooperation among secondary users and the tradeoff between local processing and cooperation in order to maximize the spectrum utilization .furthermore , the authors in focused on the problem of designing the sensing duration to maximize the achievable throughput for the secondary network under the constraint that the primary users are sufficiently protected .they formulated the sensing - throughput tradeoff problem mathematically , and use energy detection sensing scheme to prove that the formulated problem indeed has one optimal sensing time which yields the highest throughput for the secondary network .moreover , quan __ in introduced a novel wideband spectrum sensing technique , called multiband joint detection , which jointly detects the signal energy levels over multiple frequency bands rather than considering one band at a time . in many wireless systems, it is very important to provide reliable communications while sustaining a certain level of quality - of - service ( qos ) under time - varying channel conditions .for instance , in wireless multimedia transmissions , stringent delay qos requirements need to be satisfied in order to provide acceptable performance levels . in cognitive radio systems ,challenges in providing qos assurances increase due to the fact that secondary users should operate under constraints on the interference levels that they produce to primary users . for the secondary users , these interference constraints lead to variations in transmit power levels and channel accesses .for instance , intermittent access to the channels due to the activity of primary users make it difficult for the secondary users to satisfy their own qos limitations .these considerations have led to studies that investigate the cognitive radio performance under qos constraints .musavian and aissa in considered variable - rate , variable - power mqam modulation employed under delay qos constraints over spectrum - sharing channels . 
as a performance metric , they used the effective capacity to characterize the maximum throughput under qos constraints .they assumed two users sharing the spectrum with one of them having a primary access to the band .the other , known as secondary user , is constrained by interference limitations imposed by the primary user .considering two modulation schemes , continuous mqam and discrete mqam with restricted constellations , they obtained the effective capacity of the secondary user s link , and derived the optimum power allocation scheme that maximizes the effective capacity in each case .additionally , in , they proposed a qos constrained power and rate allocation scheme for spectrum sharing systems in which the secondary users are allowed to use the spectrum under an interference constraint by which a minimum - rate of transmission is guaranteed to the primary user for a certain percentage of time .moreover , applying an average interference power constraint which is required to be fulfilled by the secondary user , they obtained the maximum arrival - rate supported by a rayleigh block - fading channel subject to satisfying a given statistical delay qos constraint .we note that in these studies on the performance under qos limitations , channel sensing is not incorporated into the system model . as a result ,adaptation of the cognitive transmission according to the presence or absence of the primary users is not considered . in , where we also concentrated on cognitive transmission under qos constraint, we assumed that the secondary transmitter sends the data at two different fixed rates and power levels , depending on the activity of the primary users , which is determined by channel sensing performed by the secondary users .we constructed a state transition model with eight states to model this cognitive transmission channel , and determined the effective capacity . on the other hand, we assumed in that channel sensing is done only in one channel , and did not impose explicit interference constraints . in this paper , we study the effective capacity of cognitive radio channels where the cognitive radio detects the activity of primary users in a multiband environment and then performs the data transmission in one of the transmission channels .both the secondary receiver and the secondary transmitter know the fading coefficients of their own channel , and of the channel between the secondary transmitter and the primary receiver .the cognitive radio has two power allocation policies depending on the activities of the primary users and the sensing decisions .more specifically , the contributions of this paper are the following : 1 .we consider a scenario in which the cognitive system employs multi - channel sensing and uses one channel for data transmission thereby decreasing the probability of interference to the primary users .we identify a state - transition model for cognitive radio transmission in which we compare the transmission rates with instantaneous channel capacities , and also incorporate the results of channel sensing .3 . 
we determine the effective capacity of the cognitive channel under limitations on the average interference power experienced by the primary receiver .we identify the optimal criterion to select the transmission channel out of the available channels and obtain the optimal power adaptation policies that maximize the effective capacity .we analyze the interactions between the effective capacity , qos constraints , channel sensing duration , channel detection threshold , detection and false alarm probabilities through numerical techniques .the organization of the rest of the paper is as follows : in section [ system ] , we discuss the channel model and analyze multi - channel sensing .we describe the channel state transition model in section [ subsec : state ] under the assumption that the secondary users have perfect csi and send the data at rates equal to the instantaneous channel capacity values . in section [ sec : consraint ] , we analyze the received interference power at the primary receiver and apply this as a power constraint on the secondary users . in section[ effective capacity ] , we define the effective capacity and find the optimal power distribution and show the criterion to choose the best channel .numerical results are shown in section [ numericalresults ] , and conclusions are provided in section [ conlusion ] .in this paper , we consider a cognitive radio system in which secondary users sense channels and choose one channel for data transmission .we assume that channel sensing and data transmission are conducted in frames of duration seconds . in each frame, seconds is allocated for channel sensing while data transmission occurs in the remaining seconds .transmission power and rate levels depend on the primary users activities .if all of the channels are detected as busy , transmitter selects one channel with a certain criterion , and sets the transmission power and rate to and , respectively , where is the index of the selected channel and denotes the time index . note that if , transmitter stops sending information when it detects primary users in all channels .if at least one channel is sensed to be idle , data transmission is performed with power and at rate .if multiple channels are detected as idle , then one idle channel is selected again considering a certain criterion .the discrete - time channel input - output relation between the secondary transmitter and receiver in the symbol duration in the channel is given by if the primary users are absent .on the other hand , if primary users are present in the channel , we have where and denote the complex - valued channel input and output , respectively . in ( [ input - out1 ] ) and ( [ input - out2 ] ) , is the channel fading coefficient between the cognitive transmitter and the receiver .we assume that has a finite variance , i.e. , , but otherwise has an arbitrary distribution .we define .we consider a block - fading channel model and assume that the fading coefficients stay constant for a block of duration seconds and change from one block to another independently in each channel . in ( [ input - out2 ] ) , represents the active primary user s faded signal arriving at the secondary receiver in the channel , and has a variance . models the additive thermal noise at the receiver , and is a zero - mean , circularly symmetric , complex gaussian random variable with variance for all .we assume that the bandwidth of the channel is . 
in the absence of detailed information on primary users transmission policies ,energy - based detection methods are favorable for channel sensing . knowing that wideband channels exhibit frequency selective features, we can divide the band into channels and estimate each received signal through its discrete fourier transform ( dft ) .the channel sensing can be formulated as a hypothesis testing problem between the noise and the signal in noise .noting that there are complex symbols in a duration of seconds in each channel with bandwidth , the hypothesis test in channel can mathematically be expressed as follows : for the above detection problem , the optimal neyman - pearson detector is given by we assume that has a circularly symmetric complex gaussian distribution with zero - mean and variance . assuming further that are i.i.d . , we can immediately conclude that the test statistic is chi - square distributed with degrees of freedom . in this case , the probabilities of false alarm and detection can be established as follows : where denotes the regularized lower gamma function and is defined as where is the lower incomplete gamma function and is the gamma function . in figure[ fig : fig1 ] , the probability of detection , , and the probability of false alarm , , are plotted as a function of the energy detection threshold , , for different values of channel detection duration .note that the bandwidth is and the block duration is .we can see that when the detection threshold is low , and tend to be 1 , which means that the secondary user , always assuming the existence of an active primary user , transmits with power and rate . on the other hand ,when the detection threshold is high , and are close to zero , which means that the secondary user , being unable to detect the activity of the primary users , always transmits with power and rate , possibly causing significant interference .the main purpose is to keep as close to 1 as possible and as close to 0 as possible .therefore , we have to keep the detection threshold in a reasonable interval .note that the duration of detection is also important since increasing the number of channel samples used for sensing improves the quality of channel detection . in the hypothesis testing problem in ( [ hypothesis ] ) , another approach is to consider as gaussian distributed , which is accurate if is large . in this case , the detection and false alarm probabilities can be expressed in terms of gaussian -functions .we would like to note the rest of the analysis in the paper does not depend on the specific expressions of the false alarm and detection probabilities .however , numerical results are obtained using ( [ false alarm ] ) and ( [ eq : probdetect ] ) .in this paper , we assume that both the secondary receiver and transmitter have perfect channel side information ( csi ) , and hence perfectly know the realizations of the fading coefficients .we further assume that the wideband channel is divided into channels , each with bandwidth that is equal to the coherence bandwidth .therefore , we henceforth have . with this assumption, we can suppose that independent flat fading is experienced in each channel . 
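Returning briefly to the energy detector before the model is simplified further: under one common convention, the sum of N squared magnitudes of complex Gaussian samples is a scaled chi-square variable with 2N degrees of freedom, so the false-alarm and detection probabilities reduce to regularized lower incomplete gamma functions. The sketch below (using scipy) follows that convention; the paper's exact threshold normalisation is not reproduced here, and the noise and primary-signal variances are hypothetical.

```python
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)

def pf_pd_energy_detector(lam, N, sigma_n2=1.0, sigma_s2=1.0):
    """False-alarm and detection probabilities for an energy detector that sums
    |y_i|^2 over N complex Gaussian samples and compares the sum to a threshold
    lam. Under H0 the sum is (sigma_n2/2)*chi^2_{2N}; under H1 the per-sample
    variance is sigma_n2 + sigma_s2. Threshold conventions vary between papers."""
    pf = 1.0 - gammainc(N, lam / sigma_n2)
    pd = 1.0 - gammainc(N, lam / (sigma_n2 + sigma_s2))
    return pf, pd

# Monte Carlo sanity check with unit noise variance and unit primary-signal variance
rng = np.random.default_rng(1)
N, lam, trials = 50, 60.0, 20000
rx_h0 = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2)
rx_h1 = rx_h0 + (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2)
pf_mc = np.mean(np.sum(np.abs(rx_h0) ** 2, axis=1) > lam)
pd_mc = np.mean(np.sum(np.abs(rx_h1) ** 2, axis=1) > lam)
print(pf_pd_energy_detector(lam, N), (pf_mc, pd_mc))
```

Sweeping lam in such a sketch reproduces the qualitative behaviour described above: a low threshold pushes both probabilities towards 1, a high threshold pushes both towards 0.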
in order to further simplify the setting, we consider a symmetric model in which fading coefficients are identically distributed in different channels .moreover , we assume that the background noise and primary users signals are also identically distributed in different channels and hence their variances and do not depend on , and the prior probabilities of each channel being occupied by the primary users are the same and equal to . in channelsensing , the same energy threshold , , is applied in each channel .finally , in this symmetric model , the transmission power and rate policies when the channels are idle or busy are the same for each channel . due to the consideration of a symmetric model , we in the subsequent analysis drop the subscript in the expressions for the sake of brevity .first , note that we have the following four possible scenarios considering the correct detections and errors in channel sensing : _ scenario 1 : _ all channels are detected as busy , and channel used for transmission is actually busy ._ scenario 2 : _ all channels are detected as busy , and channel used for transmission is actually idle . _scenario 3 : _ at least one channel is detected as idle , and channel used for transmission is actually busy . _scenario 4 : _ at least one channel is detected as idle , and channel used for transmission is actually idle . in each scenario, we have one state , namely either on or off , depending on whether or not the instantaneous transmission rate exceeds the instantaneous channel capacity . considering the interference caused by the primary users as additional gaussian noise , we can express the instantaneous channel capacities in the above four scenarios as follows : [ channel capacity ] _ scenario 1 : _ . _scenario 2 : _ ._ scenario 3 : _ ._ scenario 4 : _ .above , we have defined note that denotes the fading power . in scenarios 1 and 2 ,the secondary transmitter detects all channels as busy and transmits the information at rate on the other hand , in scenarios 3 and 4 , at least one channel is sensed as idle and the transmission rate is since the transmitter , assuming the channel as idle , sets the power level to and expects that no interference from the primary transmissions will be experienced at the secondary receiver ( as seen by the absence of in the denominator of ) . in scenarios 1 and 2 ,transmission rate is less than or equal to the instantaneous channel capacity .hence , reliable transmission at rate is attained and channel is in the on state .similarly , the channel is in the on state in scenario 4 in which the transmission rate is . on the other hand , in scenario 3, transmission rate exceeds the instantaneous channel capacity ( i.e. , ) due to miss - detection . in this case, reliable communication can not be established , and the channel is assumed to be in the off state .note that the effective transmission rate in this state is zero , and therefore information needs to be retransmitted .we assume that this is accomplished through a simple arq mechanism . for this cognitive transmission model , we initially construct a state transition model .while the ensuing discussion describes this model , figure [ fig : fig2 ] provides a depiction . 
as seen in fig .[ fig : fig2 ] , there are on states and 1 off state .the single off state is the one experienced in scenario 3 .the first on state , which is the top leftmost state in fig .[ fig : fig2 ] , is a combined version of the on states in scenarios 1 and 2 in both of which the transmission rate is and the transmission power is .note that all the channels are detected as busy in this first on state .the remaining on states labeled through can be seen as the expansion of the on state in scenario 4 in which at least one channel is detected as idle and the channel chosen for transmission is actually idle .more specifically , the on state for is the on state in which channels are detected as idle and the channel chosen for transmission is idle .note that the transmission rate is and the transmission power is in all on states labeled through .next , we characterize the state transition probabilities .state transitions occur every seconds .we can easily see that the probability of staying in the first on state , in which all channels are detected as busy , is expressed as follows : where is the probability that channel is detected as busy , and and are the probabilities of detection and false alarm , respectively as defined in ( [ eq : probdetect ] ) . recall that denotes the probability that a channel is busy ( i.e. , there are active primary users in the channel ) .it is important to note that the transition probability in ( [ eq : transitionprob ] ) is obtained under the assumptions that the primary user activity is independent among the channels and also from one block to another . indeed , under the assumption of independence over the blocks , the state transition probabilities do not depend on the originating state and hence we have where we have defined for all .similarly , we can obtain for , now , we can easily observe that the transition probabilities for the off state are then , we can easily see that the state transition probability matrix can be expressed as = \left [ \begin{array}{cccc } p_{1 } & . & .& p_{m+2 } \\ .&\quad&\quad&.\\ .&\quad&\quad&.\\p_{1 } & . & . &p_{m+2}\\ \end{array } \right] ] .since is a matrix with unit rank , we can readily find that then , combining ( [ eq : sp ] ) with ( [ eq : theta - envelope ] ) and ( [ exponent ] ) , normalizing the expression with in order to have the effective capacity in the units of bits / s / hz , and considering the maximization over power adaptation policies , we reach to the effective capacity formula given in ( [ eq : effectivecap ] ) . we would like to note that the effective capacity expression in ( [ eq : effectivecap ] ) is obtained for a given sensing duration , detection threshold , and qos exponent . in the next section , we investigate the impact of these parameters on the effective capacity through numerical analysis . before the numerical analysis, we first identify below the optimal power adaptation policies that the secondary users should employ .the optimal power adaptations for the secondary users under the constraint given in ( [ average interference power ] ) are , & \hbox{ } \\ 0 , & \hbox{otherwise } \end{array } \right.,\ ] ] and , & \hbox{ } \\ 0 , & \hbox{otherwise }\end{array } \right.,\ ] ] where , , , and . 
is a parameter whose value can be found numerically by satisfying the constraint ( [ average interference power ] ) with equality ._ proof : _ since logarithm is a monotonic function , the optimal power adaptation policies can also be obtained from the following minimization problem : it is clear that the objective function in ( [ objectivefunc ] ) is strictly convex and the constraint function in ( [ average interference power ] ) is linear with respect to and . and in ( [ rate1 ] ) and ( [ rate2 ] ) with respect to and respectively , strict convexity of the exponential function , and the fact that the nonnegative weighted sum of strictly convex functions is strictly convex ( * ? ? ?* section 3.2.1 ) . ] then , forming the lagrangian function and setting the derivatives of the lagrangian with respect to and equal to zero , we obtain : \alpha^{m}f_{}(z , z_{sp})=0\label{lamda1}\\ & \hspace{-.5cm}\left[\lambda\rho(1-p_{d})z_{sp}-\frac{c(1-\rho)(1-p_{f})z}{\mu_{2}}\left(1+\frac{zp_{2}}{\mu_{2}}\right)^{-c-1}\right]\sum_{k=1}^{m}\alpha^{m - k}(1-\alpha)^{k-1}\frac{m!}{(m - k)!k!}f_{k}(z , z_{sp})=0\label{lamda2}\end{aligned}\ ] ] where is the lagrange multiplier .above , denotes the joint distribution of of the channel selected for transmission when all channels are detected busy .hence , in this case , the transmission channel is chosen among channels .similarly , denotes the joint distribution when channels are detected idle , and the transmission channel is selected out of these channels . defining and , and solving ( [ lamda1 ] ) and ( [ lamda2 ] ), we obtain the optimal power policies given in ( [ optimalpolicy1 ] ) and ( [ optimalpolicy2 ] ) . now , using the optimal transmission policies given in ( [ optimalpolicy1 ] ) and ( [ optimalpolicy2 ] ) , we can express the effective capacity as follows : above , the subscripts and in the expectations denote that the lower limits of the integrals are equal these values and not to zero .for instance , . until now, we have not specified the criterion with which the transmission channel is selected from a set of available channels . in ( [ eq : optimaleffectivecap ] ) , we can easily observe that the effective capacity depends only on the channel power ratio , and is increasing with increasing due to the fact that the terms and are monotonically decreasing functions of .therefore , the criterion for choosing the transmission band among multiple busy bands unless there is no idle band detected , or among multiple idle bands if there are idle bands detected should be based on this ratio of the channel gains . clearly , the strategy that maximizes the effective capacity is to choose the channel ( or equivalently the frequency band ) with the highest ratio of .this is also intuitively appealing as we want to maximize to improve the secondary transmission and at the same time minimize to diminish the interference caused to the primary users . maximizing us the right balance in the channel selection .we define where is the ratio of the gains in the channel . 
assuming that these ratios are independent and identically distributed in different channels, we can express the pdf of as ^{m-1},\ ] ] where and are the pdf and cumulative distribution function ( cdf ) , respectively , of , the gain ratio in one channel .now , the expectation , which arises under the assumption that all channels are detected busy and the transmission channel is selected among these channels , can be evaluated with respect to the distribution in ( [ x1 ] ) .similarly , we define for .the pdf of can be expressed as follows : ^{k-1}\quad k=1,2,\dots , m.\ ] ] the expectation can be evaluated using the distribution in ( [ x2 ] ) . finally , after some calculations, we can write the effective capacity in integral form as ^{m-1}\left[\frac{\beta_{1}\lambda}{x}\right]^\frac{c}{c+1}dx\nonumber\\&(1-\rho)(1-p_{f})m\int_{\beta_{2}\lambda}^{\infty}f_{\frac{z}{z_{sp}}}(x)\left[\alpha+(1-\alpha)f_{\frac{z}{z_{sp}}}(x)\right]^{m-1}\left[\frac{\beta_{2}\lambda}{x}\right]^{\frac{c}{c+1}}dx+p_{m+2}\bigg\}.\end{aligned}\ ] ]in this section , we present numerical results for the effective capacity as a function of the channel sensing reliability ( i.e. , detection and false alarm probabilities ) and the average interference constraints . throughout the numerical results, we assume that qos parameter is , block duration is , channel sensing duration is , and the prior probability of each channel being busy is . before the numerical analysis, we first provide expressions for the probabilities of operating in each one of the four scenarios described in section [ subsec : state ] .these probabilities are also important metrics in analyzing the performance .we have in figure [ fig : fig4 ] , we plot these probabilities as a function of the detection probability for two cases in which the number of channels is and , respectively .as expected , we observe that and decrease with increasing .we also see that and are assuming small values when is very close to 1 . note from fig . [fig : fig1 ] that as approaches 1 , the false alarm probability increases as well .the analysis in the preceding sections apply for arbitrary joint distributions of and under the mild assumption that the they have finite means ( i.e. , fading has finite average power ) . in this subsection , we consider a rayleigh fading scenario in which the power gains and are exponentially distributed .we assume that and are mutually independent and each has unit - mean .then , the pdf and cdf of can be expressed as follows : in fig .[ fig : fig5 ] , we plot the effective capacity vs. 
probability of detection , , for different number of channels when the average interference power constraint normalized by the noise power is , where is the noise variance at the primary user .we observe that with increasing , the effective capacity is increasing due to the fact more reliable detection of the activity primary users leads to fewer miss - detections and hence the probability of scenario 3 or equivalently the probability of being in state , in which the transmission rate is effectively zero , diminishes .we also interestingly see that the highest effective capacity is attained when .hence , secondary users seem to not benefit from the availability of multiple channels .this is especially pronounced for high values of .although several factors and parameters are in play in determining the value of the effective capacity , one explanation for this observation is that the probabilities of scenarios 1 and 2 , in which the secondary users transmit with power , decrease with increasing , while the probabilities of scenarios 3 and 4 increase as seen in ( [ scenarios ] ) .note that in scenario 3 , no reliable communication is possible and transmission rate is effectively zero . in fig .[ fig : fig6 ] , we display similar results when . hence ,secondary users operate under more stringent interference constraints . in this case, we note that gives the highest throughput while the performance with is strictly suboptimal . in fig .[ fig : fig7 ] , we show the effective capacities as a function ( db ) for different values of when and . confirming our previous observation, we notice that as the interference constraint gets more strict and hence becomes smaller , a higher value of is needed to maximize the effective capacity .for instance , channels are needed when . on the other hand , for approximately ,having gives the highest throughput .above , we have remarked that increasing the number of available channels from which the transmission channel is selected provides no benefit or can even degrade the performance of secondary users under certain conditions .on the other hand , it is important to note that increasing always brings a benefit to the primary users in the form of decreased probability of interference . in order to quantify this type of gain, we consider below the probability that the channel selected for transmission is actually busy and hence the primary user in this channel experiences interference : note that depends on and also through .it can be easily seen that this interference probability decreases with increasing when .as goes to infinity , we have indeed , in this asymptotic regime , becomes zero with perfect detection ( i.e. , with ) .note that secondary users transmit ( if ) even when all channels are detected as busy . as , the probability of such an event vanishes .also , having enables the secondary users to avoid scenario 3 .hence , interference is not caused to the primary users . in fig .[ fig : fig3 ] , we plot vs. the detection probability for different values of .we also display how the false alarm probability evolves as varies from 0 to 1 .it can be easily seen that while when , a smaller is achieved for higher values of unless . on the other hand ,as also discussed above , we immediately note that monotonically decreases to 0 as increases to 1 when is unbounded ( i.e. 
, ) .nakagami fading occurs when multipath scattering with relatively large delay - time spreads occurs .therefore , nakagami distribution matches some empirical data better than many other distributions do . with this motivation, we also consider nakagami fading in our numerical results .the pdf of the nakagami- random variable is given by where is the number of degrees of freedom .if both and have the same number of degrees of freedom , we can express the pdf of as follows : note also that rayleigh fading is a special case of nakagami fading when . in our experiments, we consider the case in which . now, we can express the cdf of for as in fig .[ fig : fig8 ] , we plot effective capacity vs. ( db ) for different values of when and . here, we again observe results similar to those in fig .[ fig : fig7 ] .we obtain higher throughput by sensing more than one channel in the presence of strict interference constraints on cognitive radios .in this paper , we have studied the performance of cognitive transmission under qos constraints and interference limitations .we have considered a scenario in which secondary users sense multiple channels and then select a single channel for transmission with rate and power that depend on both sensing decisions and fading .we have constructed a state transition model for this cognitive operation .we have meticulously identified possible scenarios and states in which the secondary users operate .these states depend on sensing decisions , true nature of the channels being busy or idle , and transmission rates being smaller or greater than the instantaneous channel capacity values .we have formulated and imposed an average interference constraint on the secondary users . under such interference constraints and also statistical qos limitations in the form of buffer constraints ,we have obtained the maximum throughput through the effective capacity formulation .therefore , we have effectively analyzed the performance in a practically appealing setting in which both the primary and secondary users are provided with certain service guarantees .we have determined the optimal power adaptation strategies and the optimal channel selection criterion in the sense of maximizing the effective capacity .we have had several interesting observations through our numerical results .we have shown that improving the reliability of channel sensing expectedly increases the throughput .we have noted that sensing multiple channels is beneficial only under relatively strict interference constraints . at the same time , we have remarked that sensing multiple channels can decrease the chances of a primary user being interfered .v. asghari and s. aissa , rate and power adaptation for increasing spectrum efficiency in cognitive radio networks , " _ ieee international conference on communications , dresden , germany , jun .14 - 18 , 2009 . _ s. akin and m.c .gursoy , effective capacity analysis of cognitive radio channels for quality of service provisioning , " _ ieee global communication conference , honolulu , hawaii , nov .30 - dec . 4 , 2009 . | in this paper , the performance of cognitive transmission under quality of service ( qos ) constraints and interference limitations is studied . cognitive secondary users are assumed to initially perform sensing over multiple frequency bands ( or equivalently channels ) to detect the activities of primary users . 
subsequently, they perform transmission in a single channel at variable power and rates depending on the channel sensing decisions and the fading environment. a state transition model is constructed to model this cognitive operation. statistical limitations on the buffer lengths are imposed to take into account the qos constraints of the cognitive secondary users. under such qos constraints and limitations on the interference caused to the primary users, the maximum throughput is identified by finding the effective capacity of the cognitive radio channel. optimal power allocation strategies are obtained and the optimal channel selection criterion is identified. the intricate interplay between effective capacity, interference and qos constraints, channel sensing parameters and reliability, fading, and the number of available frequency bands is investigated through numerical results. _keywords:_ channel sensing, cognitive transmission, effective capacity, energy detection, interference constraints, nakagami fading, power adaptation, quality of service constraints, rayleigh fading, state-transition model.
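Before moving on, a quick numerical check of the gain-ratio distribution that drives the channel-selection rule in the Rayleigh-fading experiments above: with independent unit-mean exponential power gains, the ratio z/z_sp has cdf x/(1+x), and the best of M channels has that cdf raised to the M-th power. These closed forms are standard results stated under that assumption; they are only meant to mirror the expressions omitted from the extracted text, and the function names are mine.

```python
import numpy as np

def ratio_cdf(x):
    """CDF of R = z / z_sp for independent unit-mean exponential power gains."""
    return x / (1.0 + x)

def best_of_m_cdf(x, M):
    """CDF of the largest of M i.i.d. gain ratios, i.e. of the selected channel."""
    return ratio_cdf(x) ** M

rng = np.random.default_rng(2)
M, trials = 4, 200000
z = rng.exponential(1.0, size=(trials, M))
z_sp = rng.exponential(1.0, size=(trials, M))
best = np.max(z / z_sp, axis=1)
for x in (0.5, 1.0, 2.0, 5.0):
    print(x, best_of_m_cdf(x, M), np.mean(best <= x))
```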
counting the number of solutions for constraint satisfaction problem , denoted by # csp , is a very important problem in artificial intelligence ( ai ) . in theory ,# csp is a # p - complete problem even if the constraints are binary , which has played a key role in complexity theory . in practice ,effective counters have opened up a range of applications , involving various probabilistic inferences , approximate reasoning , diagnosis , and belief revision . in recent years , many attentions have been focused on counting a specific case of # csp , called # sat . by counting components , bayardo and pehoushek presented an exact counter for sat , called relsat [ 1 ] . by combining componentcaching with clause learning together , sang et al .created an exact counter cachet [ 2 ] .based on converting the given cnf formula into d - dnnf form , which makes the counting easily , darwiche introduced an exact counter c2d [ 3 ] . by introducing an entirely new approach of coding components, thurley addressed an exact counter sharpsat [ 4 ] . by using more reasoning , davies and bacchus addressed an exact counter # 2clseq [ 5 ] .besides the emerging exact # sat solvers , wei and selman presented an approximate counter approxcount for sat by using markov chain monte carlo ( mcmc ) sampling [ 6 ] .building upon approxcount , gomes et al . used sampling with a modified strategy and proposed an approximate counter samplecount [ 7 ] .relying on the properties of random xor constraints , an approximate counter mbound was introduced in [ 8 ] . using sampling of the backtrack - free search space of systematicsat solver , sampleminisat was addressed in [ 9 ] .building on the framework of samplecount , kroc et al . exploited the belief propagation method and presented an approximate counter bpcount [ 10 ] . by performing multiple runs ofthe minisat sat solver , kroc et al .introduced an approximate counter , called minicount [ 10 ] .recently , more efforts have been made on the general # csp problems .for example , angelsmark et al . presented upper bounds of the # csp problems [ 11 ] .bulatov and dalmau discussed the dichotomy theorem for the counting csp [ 12 ] .pesant exploited the structure of the csp models and addressed an algorithm for solving # csp [ 13 ] .dyer et al . considered the trichotomy theorem for the complexity of approximately counting the number of satisfying assignments of a boolean csp [ 14 ] .yamakami studied the dichotomy theorem of approximate counting for complex - weighted boolean csp [ 15 ] .though great many studies had been made on the algorithms for the # csp problems , only a few of them related to the # csp solvers .gomes et al .proposed a new generic counting technique for csps building upon on the xor constraint [ 16 ] . by adapting backtracking with tree - decomposition , favier et al .introduced an exact # csp solver , called # btd [ 17 ] .in addition , by relaxing the original csp problems , they presented an approximate method approx # btd [ 17 ] . in this paper, we propose a new type of method for solving # csp problems .the method derives from the partition function based on introducing the free energy and capturing the relationship of probabilities of variables and constraints .when computing the number of the solutions of a given csp formula according to the partition function , we require the marginal probabilities of each variable and each constraint to plug into the partition function . 
in order to obtain the marginal probabilities , we employ the belief propagation ( bp ) because it can organize a computation that makes the marginal probabilities computing tractable and eventually returns the marginal probabilities .in addition , unlike the counter bpcount using the belief propagation method for obtaining the information deduced from solution samples in samplecount , we employ the belief propagation method for acquiring information for partition function .this leads to two differences between bpcount and our counter .the first one is the counter bpcount requires to iteratively perform the belief propagation method and repeatedly obtain the marginal probabilities of each variable on the simplified sat formulae ; while our counter carries out the belief propagation method only once , which spends less cost .the second one is that the two counters obtaining the exact number of solutions depending on different circumstances .the counter bpcount needs the corresponding factor graphs of the simplified sat formulae all have no cycles ; while our counter only needs the factor graph of the given csp formula has no cycle , which meets easily .our experiments reveal that our counter for csp , called pfcount , works quite well .we consider various hard instances , including the random instances and the structural instances .for the random instances , we consider the instances based on the model rb close to the phase transition point , which has been proved the existence of satisfiability phase transition and identified the phase transition points exactly .with regard to the random instances , our counter pfcount improves the efficiency tremendously especially for instances with more variables .moreover , pfcount presents a good estimate to the number of solutions for instances based on model rb , even if the instances scales are relatively large .therefore , the effectiveness of pfcount is much more evident especially for random instances . for the structural instances , we focus on the counting problem based on graph coloring .the performance of pfcount for solving structural instances is in general comparing with the random instances because pfcount sometimes ca nt converge .however , once pfcount can converge , it can estimate the number of the solutions of instances efficiently . as a whole , pfcount is a quite competitive # csp solver .a constraint satisfaction problem ( csp ) is defined as a pair , where is a set of variables and is a set of constraints defined on _v_. for each variable in _ v _ , the domain of is a set with values ; the variable can be only assigned a value from . a constraint _c _ , called a _ k_-ary constraint , consists of _ k _ variables and a relation , where , , ... , are distinct .the relation specifies all the allowed tuples of values for the variables which are compatible with each other .the variable configuration of a csp is that assigns each variable a value from its domain .a solution to a constraint is a variable configuration that sets values to each variable in the constraint such that .we also say that the variable configuration satisfies the constraint .a solution to a csp is a variable configuration such that all the constraints in _ c _ are satisfied . given a csp ,the decision problem is to determine whether the csp has a solution .the corresponding counting problem ( # csp ) is to determine how many solutions the csp has .a csp can be expressed as a bipartite graph called factor graph ( see fig .1 ) . 
the factor graph has two kinds of nodes , one is variable node ( which we draw as circles ) representing the variables , and the other is function node ( which we draw as squares ) representing the constraints .a function node is connected to a variable node by an edge if and only if the variable appeares in the constraint . in the rest of this paper, we will always index variable nodes with letters starting with _i _ , and factor nodes with letters starting with .in addition , for every variable node _ i _ , we will use _ v_(_i _ ) to denote the set of function nodes which it connects to , and _v_(_i_) to denote the set _v_(_i _ ) without function node . similarly ,for each function node , we will use _v_( ) to denote the set of variable nodes which it connects to , and _i _ to denote the set _v_( ) without variable node _ i_. .the csp is encoded as , where _v_( ) = \{1 , 2 , 3 } , _v_( ) = \{1 , 4 , 5 , 6 } , _v_( ) = \{1 , 7 , 8}. ]in this section , we present a new approximate approach , called pfcount , for counting the number of solutions for constraint satisfaction problem .the approach derives from the partition function based on introducing the free energy and capturing the relationship of probabilities of variables and constraints . in the following , we will describe the partition function in details . in this subsection, we present a partition function for counting the number of solutions for csp .the partition function is an important quantity in statistical physics , which describes the statistical properties of a system .most of the aggregate thermodynamic variables of the system , such as the total energy , free energy , entropy , and pressure , can be expressed in terms of the partition function . to facilitate the understanding ,we first describe the notion of the partition function . given a system of _ n _ particles , each of which can be in one of a discrete number of states , i.e. , , and a state of the system _ x _ denoted by , i.e.i__th particle is in the state , the partition function in statistical physics is defined as where _t _ is the temperature , _ e_(_x _ ) is the energy of the state _ x _ , and _ p_(_x_ ) is the probability of the state _x_. in this paper , we focus on the partition function that the temperature _ t _ is assigned to 1 . since the partition function is also used in probability theory , in the following we will learn the partition function from the probability theory .given a csp and a variable configuration of , the partition function in probability theory is defined in equation ( 2 ) . where _p_(_x _ ) is the the joint probability distribution , function is a boolean function range \{0 , 1 } , which evaluates to 1 if and only if the constraint is satisfied , evaluates to 0 otherwise ; and _ m _ is the number of constraints .based on equation ( 2 ) , the joint probability distribution _ p_(_x _ ) over the _n _ variables can be expressed as the follows . because the construction of the joint probability distribution is uniform over all variable configurations , _ z _ is the number of solutions of the given csp .therefore , # csp can be solved by computing a partition function . in the following, we will propose the derivation of the partition function . in order to present a calculation method to compute the partition function, we introduce the variational free energy defined by where _e_(_x _ ) is the energy of the state _ x _ and _ b_(_x _ ) is a trial probability distribution . 
simplifying the equation ( 4 ) , we draw up the following equation . by setting _ t _ to 1 in equation ( 1 ) , we can obtain : then we take the equation ( 6 ) into ( 5 ) and acquire : since _b_(_x _ ) is a trial probability distribution , the sum of the probability distribution should be 1 , i.e. . then the equation ( 7 ) can be expressed as the follows . by analyzing the equation ( 8) , we know that the second term is equal to zero if_ b_(_x _ )is equal to _ p_(_x _ ) .b_(_x _ ) is equal to _ p_(_x _ ) , the partition function can be written as then by taking the equation ( 4 ) into the above equation , we obtain for a factor graph with no cycles , _ p_(_x _ ) can be easily expressed in terms of the marginal probabilities of variables and constraints as the follows . where is the number of times that the variable occurs in the constraints , _ m _ and _n _ are the number of constraints and variables respectively , and are the marginal probabilities of constraints and variables respectively . in addition , by analyzing the two partition functions presented in equations ( 1 ) and ( 2 ) , we can see that _p_(_x _ ) and _ z _ are equal when _t _ is set to 1 .thus , we can obtain the following equation from equation ( 1 ) and equation ( 2 ) on account of the equivalents _ z _ and _ p_(_x _ ) . then the partition function can be expressed as the follows by plugging the equation ( 11 ) and equation ( 12 ) into equation ( 10 ) . in equation ( 13 ) , when the variable configuration _ x _ is a solution to a csp , the function is assigned 1 , which means that the term evaluates to 0 .on the other hand , when the variable configuration _ x _ is not a solution to a csp , the term evaluates to 0 .therefore , whether or not the variable configuration _ x _ is a solution to a csp , the first term in the exponential function must evaluate to 0 . then equation ( 13 )can be expressed as the follows . from the above equation , we can learn that the number of solutions of a given csp can be calculated according to the partition function if the marginal probability of variable and the marginal probability of constraint can be obtained . in the following, we will present an approach to compute the marginal probabilities . in this subsection, we address a method bp to calculate the marginal probabilities . the belief propagation , bp for short ,is a message passing procedure , which is a method for computing marginal probabilities [ 18 ] .the bp procedure obtains exact marginal probabilities if the factor graph of the given csp has no cycles , and it can still empirically provide good approximate results even when the corresponding factor graph does have cycles . to describe the bp procedure , we first introduce messages between function nodes and their neighboring variable nodes and vice versa .the message passed from a function node to one of its neighboring variable nodes _ i _ can be interpreted as the probability of constraint being satisfied if the variable takes the value ; while the message passed from a variable node _i _ to one of its neighboring function nodes can be interpreted as the probability that the variable takes the special value in the absence of constraint .next we concentrate on presenting the details of the bp procedure ( see fig .at first , the message $ ] is initialized for every edge ( , _ i _ ) and every value .then the messages are updated with the following equations . 
where is a normalization constant ensuring that is a probability , and is a characteristic function taking the value 1 if the variable configuration satisfies the constraint , taking the value 0 otherwise .the bp procedure runs the equations ( 15 ) and ( 16 ) iteratively until the message converges for every edge ( , _ i _ ) and every value .when they have converged , we can then calculate the marginal probabilities of each variable and each constraint in the following equations . where _ c_ and _ c ` " ` _ are normalization constants ensuring that and are probabilities , and is a characteristic function taking the value 1 if the variable configuration satisfies the constraint , taking the value 0 otherwise . as a whole, the bp procedure organizes a computation that makes the marginal probabilities computing tractable and eventually returns the marginal probabilities of each variable and each constraint which can be used in the partition function . as we know , the bp procedure can present exact marginal probabilities if the factor graph of the given csp has no cycle . and from the whole derivation of the partition function , we understand that all equations address exact results if the factor graph is a tree .thus , we obtain the following theorem .the method pfcount provides an exact number of solutions for a csp if the factor graph of the given csp has no cycle .the above theorem illustrates that pfcount can present an exact number of solutions if the corresponding factor graph of the given csp has no cycle . in addition , even when the factor graph does have cycles , our method still empirically presents good approximate number of solutions for csp .in this section , we perform two experiments on a cluster of 2.4 ghz intel xeon machines with 2 gb memory running linux centos 5.4 .the purpose of the first experiment is to demonstrate the performance of our method on random instances ; the second experiment is to compare our method with two other methods on structural instances .our # csp solver is implemented in c++ , which we also call pfcount . for each instance , the run - time is in seconds and the timeout limit is 7200s . in this subsection , we conduct experiments on csp benchmarks of model rb , which can provide a framework for generating asymptotically hard instances so as to give a challenge for experimental evaluation of the # csp solvers [ 19 ] .the benchmarks of model rb is determined by parameters ( _ k _ , _ n _ , , _ r _ , _ p _ ) , where _ k _ denotes the arity of each constraint ; _ n _ denotes the number of variables ; determines the domain size of each variable ; _ r _ determines the number of constraints ; _ p _ determines the number of disallowed tuples of each relation .table 1 illustrates the comparison of pfcount with state - of - the - art exact # sat solvers sharpsat , c2d , cachet , and approximate solvers approxcount , bpcount on csp benchmarks of model rb close to the phase transition point . in the experiment, we choose random instances ( 2 , _n _ , 0.8 , 3 , _ p _ ) with _ n_ \{10 , 20 , 30 , 40}. moreover , since the theoretical phase transition point 0.23 for _ k _ = 2 , = 0.8 , _ r _ = 3 , _ \{10 , 20 , 30 , 40 } , we set _ p _ = 0.20 . 
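Before going through the tables in detail, here is a compact illustration of the procedure described above: belief-propagation messages on a factor graph, beliefs for variables and constraints, and the tree partition function in which the energy term vanishes because the constraint functions take values in {0, 1}. This is a minimal sketch under my own data layout (domain sizes plus sets of allowed tuples), with no contradiction handling and no claim to match pfcount's implementation; it is exact on cycle-free instances such as the toy example at the end.

```python
import numpy as np
from itertools import product

def bp_count(domains, constraints, n_iter=200, tol=1e-10):
    """BP estimate of the number of CSP solutions via the tree partition function:
    exact on cycle-free factor graphs, a heuristic otherwise. `domains[i]` is the
    domain size of variable i; each constraint is (variables, allowed_tuple_set)."""
    nu = {(i, a): np.ones(domains[i]) / domains[i]          # variable -> factor
          for a, (vs, _) in enumerate(constraints) for i in vs}
    mu = {(a, i): np.ones(domains[i]) / domains[i]          # factor -> variable
          for a, (vs, _) in enumerate(constraints) for i in vs}
    neigh = {i: [a for a, (vs, _) in enumerate(constraints) if i in vs]
             for i in range(len(domains))}
    for _ in range(n_iter):
        delta = 0.0
        for a, (vs, allowed) in enumerate(constraints):     # factor-to-variable sweep
            for i in vs:
                m = np.zeros(domains[i])
                others = [v for v in vs if v != i]
                for combo in product(*[range(domains[v]) for v in others]):
                    w = np.prod([nu[(v, a)][x] for v, x in zip(others, combo)])
                    for xi in range(domains[i]):
                        full = dict(zip(others, combo)); full[i] = xi
                        if tuple(full[v] for v in vs) in allowed:
                            m[xi] += w
                m /= m.sum()
                delta = max(delta, np.abs(m - mu[(a, i)]).max())
                mu[(a, i)] = m
        for i in range(len(domains)):                       # variable-to-factor sweep
            for a in neigh[i]:
                m = np.ones(domains[i])
                for b in neigh[i]:
                    if b != a:
                        m *= mu[(b, i)]
                nu[(i, a)] = m / m.sum()
        if delta < tol:
            break
    # ln Z from the marginal beliefs: constraint entropies minus variable corrections
    lnZ = 0.0
    for i in range(len(domains)):
        b = np.ones(domains[i])
        for a in neigh[i]:
            b *= mu[(a, i)]
        b /= b.sum()
        lnZ += (len(neigh[i]) - 1) * np.sum(b * np.log(b + 1e-300))
    for a, (vs, allowed) in enumerate(constraints):
        w = np.array([np.prod([nu[(v, a)][x] for v, x in zip(vs, t)]) for t in sorted(allowed)])
        w /= w.sum()
        lnZ -= np.sum(w * np.log(w + 1e-300))
    return np.exp(lnZ)

# tree example: x1, x2, x3 in {0,1}, constraints x1 <= x2 and x2 <= x3 (4 solutions)
doms = [2, 2, 2]
cons = [((0, 1), {(0, 0), (0, 1), (1, 1)}), ((1, 2), {(0, 0), (0, 1), (1, 1)})]
print(bp_count(doms, cons))   # prints ~4.0 on this cycle-free instance
```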
in table 1, the instance frb__a__-_b_-_c _ represents the instance containing _ a _ variables , owning a domain with _b _ values for each variable , and indexing _ c _ ; _ exacts _ represents the exact number of solutions of each instance ; _ apps _ represents the approximate number of solutions of each instance .note that the exact number of solutions of each instance is obtained by the exact # sat solvers and the csp instances solved by these # sat solvers are translated into sat instances using the direct encoding method .the results reported in table 1 suggest that the effectiveness of pfcount is much more evident especially for larger instances .for example , the efficiency of solving instances with 30 variables has been raised at least 74 times ( instance frb30 - 15 - 1 ) . and the instances with 40 variables can be solved by pfcount in a few seconds . furthermore , pfcount presents a good estimate to the number of solutions for csp benchmarks of model rb .even if the instances scales are relatively large , the estimates are found to be over 63.129% correct except the instance frb20 - 11 - 3 . therefore , this experiment shows that pfcount is quite competitive compared with the other counters .table 2 presents the performance of our counter pfcount on hard instances based on model rb .these instances provide a challenge for experimental evaluation of the csp solvers . in the 1st international csp solver competition in 2005 ,these instances ca nt be solved by all the participating csp solvers in 10 minutes . in the recent csp solver competition, only one solver can solve the instances frb50 - 23 - 1 , frb50 - 23 - 4 , frb50 - 23 - 5 , frb53 - 24 - 5 , frb56 - 25 - 5 , and frb59 - 26 - 5 ; only two solvers can solve the instance frb53 - 24 - 3 ; four solvers can solve the instance frb53 - 24 - 1 ; and the rest of instances still ca nt be solved in 10 minutes . in table 2 , _ apps _ represents the approximate number of solutions of each instance ; _ expecteds _ represents the expected number of solutions of each instance , which can be calculated as the following according to the definition of model rb [ 19 ] . where _n _ denotes the number of variables ; determines the domain size of each variable ; _ r _ determines the number of constraints ; _ p _ determines the number of disallowed tuples of each relation . when _n _ tends to infinite , _ expecteds _ is the number of solutions of the instances based on model rb . empirically ,when _ n _ is not very large , _ expecteds _ and the exact number of solutions are in the same order of magnitude .therefore , _ expecteds _precisely estimates the number of solutions of the instances based on model rb . by analyzing the results in table 2 , we can see that pfcount efficiently estimates the number of solutions of these hard csp instances .it should be pointed out that our pfcount is only capable of estimating the numbers of solutions rather enumerating the solutions . in this subsection, we carry out experiments on graph coloring instances from the dimacs benchmark set .the # csp solvers compared with pfcount are ilog solver 6.3 [ 20 ] and csp+xors .table 3 illustrates the results of the comparison of the # csp solvers on graph coloring instances . in this table , _ apps _ is the approximate number of solutions of each instance ; _ exacts _ is the exact number of solutions of each instance calculated by the exact counters .note that the results presented by the ilog solver and csp+xors are based on [ 16 ] for lack of the binary codes . 
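Before discussing table 3, one remark on the expected-count column of table 2 above. As far as I recall the definition of model RB (this is my reconstruction and should be checked against [19]), the domain size is d = n^alpha, there are m = r n ln n constraints, and each relation independently forbids a fraction p of the d^k value tuples, which gives

\[
\mathbb{E}[Z] \;=\; d^{\,n}\,(1-p)^{\,m} \;=\; n^{\alpha n}\,(1-p)^{\,r\,n\ln n},
\qquad d=n^{\alpha},\quad m=r\,n\ln n .
\]

this quantity exceeds 1 exactly when \(p < 1 - e^{-\alpha/r}\), which for \(\alpha=0.8\), \(r=3\) is the threshold value of about 0.23 quoted earlier; for frb30-15 at \(p=0.20\) it evaluates to roughly \(15^{30}\cdot 0.8^{306}\), on the order of \(10^{5}\).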
as can be seen from table 3, pfcount does nt give good estimate on these structural instances in contrast with the random instances .however , the run - time of pfcount clearly outperforms other # csp solvers greatly .this paper addresses a new approximate method for counting the number of solutions for constraint satisfaction problem .it first obtains the marginal probabilities of each variable and constraint by the belief propagation approach , and then computes the number of the solutions of a given csp formula according to a partition function , which obtained by introducing the free energy and capturing the relationship between the probabilities of the variables and the constraints .the experimental results also show that the effectiveness of our method is much more evident especially for larger instances close to the phase transition point .sang , t. , bacchus , f. , beame , p. , kautz , h. a. , and pitassi , t. : combining component caching and clause learning for effective model counting.in : 7th theory and applications of satisfiability testing .springer , heidelberg ( 2004 ) gomes , c. p. , hoffmann , j. , sabharwal , a. , and selman , b. : from sampling to model counting . in : 20th international joint conference on artificial intelligence , pp .springer , heidelberg ( 2007 ) kroc , l. , sabharwal , a. , and selman , b. : leveraging belief propagation , backtrack search , and statistics for model counting . in : 5th integration of ai and or techniques in contraint programming for combinatorial optimzation problems , pp. 127141 .springer , heidelberg ( 2008 ) angelsmark , o. , jonsson , p. , linusson , s. , and thapper j. : determining the number of solutions to binary csp instances . in : 8th principles and practice of constraint programming , pp .springer , heidelberg ( 2002 ) bulatov , a. , dalmau , v. : towards a dichotomy theorem for the counting constraint satisfaction problem . in : 44th annual ieee symposium on foundations of computer science , pp .ieee computer society , los alamitos , ca ( 2003 ) gomes , c. p. , hoeve , w. j. , sabharwal , a. , and selman , b. : counting csp solutions using generalized xor constraints . in : national conference on artificial intelligence , pp .aaai press , u.s . | we propose a new approximate method for counting the number of the solutions for constraint satisfaction problem ( csp ) . the method derives from the partition function based on introducing the free energy and capturing the relationship of probabilities of variables and constraints , which requires the marginal probabilities . it firstly obtains the marginal probabilities using the belief propagation , and then computes the number of solutions according to the partition function . this allows us to directly plug the marginal probabilities into the partition function and efficiently count the number of solutions for csp . the experimental results show that our method can solve both random problems and structural problems efficiently . function ; # csp ; belief propagation ; marginal probability . |
many organizations rely on the skills of innovative individuals to create value .examples include academic institutions , government organizations , think tanks , and knowledge - based firms .workers in these organizations apply a variety of skills to in order to solve difficult problems : architects design buildings , biochemists develop new drugs , aeronautical engineers create bigger and better rockets , software developers create new applications , and industrial designers create better packaging materials .their success and thus the success of the organizations they work for is dependent on the particular set of skills that they have at their disposal , but in most cases , the decision of which skills to acquire is made by individuals , rather than organizations .the perception is that these workers choose to become more specialized as the problems they face become more complex ( strober ( 2006 ) ) .this perception has generated a countervailing tide of money and institutional attention focused on promoting interdisciplinary efforts .however , we have very little real understanding of what drives an individual s decision to specialize .roughly speaking , workers in knowledge - based fields can be divided into two categories : specialists , who have a deep knowledge of a single area , and generalists , who have knowledge in a wide variety of areas . in this paper ,i consider an individual s decision to be a specialist or a generalist , looking specifically at two previously unaddressed questions .first , under what conditions does it make sense for an individual to acquire skills in multiple areas ? and second , are the decisions made by individuals optimal from an organizational perspective ?most of the work done on specialists and generalists is focused on the roles the two play in the economy .collins ( 2001 ) suggests that specialists are more likely to found successful companies .lazear ( 2004 and 2005 ) , on the other hand , suggests that the successful entrepreneurs should be generalists a theory supported by astebro and thompson ( 2011 ) , who show that entrepreneurs tend to have a wider range of experiences than wage workers .tetlock ( 1998 ) finds that generalists tend to be better forecasters than specialists .in contrast , a wide variety of medical studies ( see , for example , hillner et al ( 2000 ) and nallamothu et al ( 2006 ) ) , show that outcomes tend to be better when patients are seen by specialists , rather than general practitioners .however , none of this work considers the decision that individuals make with respect to being a specialist or generalist .while some people will always become generalists due to personal taste , the question remains : is it ever rational to do so in the absence of a preference for interdisciplinarity ? 
and is the decision that the individual makes optimal from a societal perspective ?there is evidence that being a generalist is costly .adamic et al ( 2010 ) show that in a wide variety of contexts , including academic research , patents , and contributions to wikipedia , the contributions of individuals with greater focus tend to have greater impact , indicating that there is a tradeoff between the number of fields an individual can master , and her depth of knowledge in each .this should not be surprising .each of us has a limited capacity for learning new things by focusing on a narrow field of study , specialists are able to concentrate their efforts and maximize the use of that limited capacity , while generalists are forced to spread themselves more thinly in the pursuit of a wider range of knowledge . in the language of economics ,generalists pay a fixed cost for each new field of study they pursue , in the form of effort expended learning new jargon , establishing new social contacts in a field , and becoming familiar with new literatures .given that it is costly to diversify ones skills , the decision to become a generalist can be difficult to rationalize . in this paper ,i examine model in which workers decide whether to be specialists or generalists to explore conditions under which it is rational for an individual to choose to be a generalist .i show that when problems are single - dimensional and there are no barriers to working on problems in other disciplines , the equilibrium population contains only specialists .however , when there are barriers to working on problems in other fields ( eg : communication barriers or institutional barriers ) then there is a tradeoff between the depth of study of the specialist and the wide scope of problems that the generalist can work on . when problems are relatively simple , generalists dominate because their breadth of experience gives them a wider variety of problems to work on .but as problems become more difficult , depth wins out over scope , and workers tend to specialize .i then show that the equilibrium decisions reached by individuals are not necessarily socially optimal .as problems become harder , individual workers are more likely to specialize , but as a society , we would prefer that some individuals remain generalists .this disconnect reflects the fact that from a social perspective , we would prefer to have researchers apply the widest possible variety of skills to the problems we face , but individuals internalize the cost of obtaining those skills .thus , the model predicts that some populations will suffer from an undersupply of generalists . in such populations, it would be socially beneficial to subsidize the acquisition of skills in broad subject areas .finally , i consider an extension of the model in which problems have multiple parts .this allows me to consider problems that are explicitly multidisciplinary that is , when different parts of a problem are best addressed using skills from different disciplines .i show when problems are multidisciplinary , it is possible to rationalize being a generalist , even when there are no disciplinary boundaries . in particular , when there is a large advantage to applying the best tool for the job , being a generalist is optimal .i construct a two period model . in period 1 , the workers face a distribution of problems and each worker chooses a set of skills . 
in period 2 ,a problem is drawn from the distribution , and the workers attempt to solve it using the skills they acquired in period 1 .i will solve for the equilibrium choice of skills in period 1 .let be the set of all possible skills .the skills are arranged into 2 disciplines , and , each with skills , .an example with six skills arranged into two disciplines is shown in figure [ fig : two - disciplines ] .two disciplines , each with three skills . ] a _ specialist _ is a person who chooses skills within a single discipline .a _ generalist _ is a person who chooses some skills from both disciplines .a problem , , is a task faced by the workers in the model .a _ skill _ is a piece of knowledge that can be applied to the problem in an attempt to solve it .each skill has either a high probability or a low probability of solving the problem ._ _ i will define a problem by the matrix of probabilities that each skill will solve the problem .that is , ] where the expectation is taken over the distribution of problems , .the vector of probabilities in the two disciplines , ] and <e\left[p\left(g\right)\right] ] where if skill in discipline has a high probability of solving part and if it has a low probability of solving part . as in the previous section, i will assume that for each part of the problem , skills are independent _ _( uncorrelated with ) and skills are symmetric within disciplines ( ) . as before , the probability that a given skill is an skillis not known _ ex ante ._ however , the workers know the expected probability that a skill is an skill. i will allow the expected probabilities to vary across parts of the problem in other words , it is possible that a discipline will be more useful in solving one of the parts of the problem than in solving the other part of the problem .let be the probability that a skill from discipline is an h skill for part of the problem .that is , ] describes a distribution of problems , , and is known _ ex ante . _an entry in the column of that matrix is the vector of probabilities that a skill in each of the disciplines will be useful for solving part of the problem .we can categorize the problems according to the relative usefulness of the two disciplines in the two parts of the problem .there are two categories for the problems : 1 .one discipline is as or more useful for both parts of the problem : 2 .one discipline is more useful for part 1 and the other discipline is more useful for part 2 : and these categories are illustrated in figure [ fig : taxonomy of distributions ] . 
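The single-part comparison between E[P(S)] and E[P(G)] set up above can be illustrated numerically. Because the symbols are garbled in this extraction, the sketch states its own assumptions: each worker holds k skills, entering a second discipline costs c skill slots, pi_j is the probability that one skill from discipline j fails on a given problem, and a barrier means that a problem drawn from discipline 1 (probability phi) can only be attacked with discipline-1 skills.

```python
def p_specialist(pi1, k, phi):
    # with barriers, a discipline-1 specialist only attempts discipline-1 problems
    return phi * (1.0 - pi1 ** k)

def p_generalist_no_barrier(pi1, pi2, k, c, x):
    # open disciplines: all k - c skills usable on every problem (x in discipline 1)
    return 1.0 - pi1 ** x * pi2 ** (k - c - x)

def p_generalist_barrier(pi1, pi2, k, c, x, phi):
    # barriers: only same-discipline skills usable on a given problem
    return phi * (1.0 - pi1 ** x) + (1.0 - phi) * (1.0 - pi2 ** (k - c - x))

pi1 = pi2 = 0.6            # per-skill failure probability (harder problems -> closer to 1)
k, c, phi = 8, 2, 0.5
spec_open = 1.0 - pi1 ** k                       # no barrier: specialist uses all k skills everywhere
gen_open = max(p_generalist_no_barrier(pi1, pi2, k, c, x) for x in range(1, k - c))
gen_barrier = max(p_generalist_barrier(pi1, pi2, k, c, x, phi) for x in range(1, k - c))
print(spec_open, gen_open)                       # open disciplines: depth always wins
print(p_specialist(pi1, k, phi), gen_barrier)    # with barriers, easy problems favour the generalist
# re-running with pi1 = pi2 = 0.95 reverses the second comparison: as problems
# get harder, depth beats scope and specialization re-emerges, as claimed above.
```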
]if a problem falls into the first category , then the results are similar to those obtained in section [ sec : specialization and barriers between disciplines ] .in particular , if there are no barriers to working on problems in other disciplines , then all workers will specialize .if a problem falls into the second category , then the results do not resemble any of those already explored .problems with multiple parts , each of which is best addressed within the context of a different discipline , are often referred to as _ multidisciplinary ._ generalists have an advantage in multidisciplinary problems , because they can apply different types of skills to different parts of a problem .for example , suppose a scientist is look at nerve conduction in an organism .that problem may have elements are are best addressed using biological tools , and other elements that are best addressed using physics tools .an individual with both biology and physics skills will have an advantage over someone who is forced to use ( for example ) physics skills to solve both parts of the problem .the below states that when problems are multidisciplinary , it can be rational to be a generalist , even in the absence of barriers to working in other fields . more formally ,suppose that if a worker uses the `` right '' discipline for a part of a problem , then there is a probability that a skill in that discipline is useful ( when is the right discipline to use for part of the problem ) .if she uses the `` wrong '' discipline , then there is a probability that a skill in that discipline is useful ( when is the wrong discipline to use for part of the problem ) .this is without loss of generality , because the only thing that makes a problem multidisciplinary is the ordering of the usefulness of the disciplines .further , let and .these represent the probability that a skill in the right discipline will not solve a part of a problem and the probability that a skill in the wrong discipline will not solve part of the problem .note that .when the efficacy of the two disciplines is very different ( ) , then using the right skill for the job has a large effect on the probability of solving the problem as a whole , and it will be rational to obtain skills in multiple disciplines .figure [ fig : regions of specialization in two part problem ] illustrates the region in which individuals choose to be generalists and specialists , and theorem [ thm : multidisciplinary problems ] summarizes the result .equilibrium skill acquisition decisions when problems are multidisciplinary , and and . ][ thm : multidisciplinary problems]if skills are independent and symmetric within discipline , and problems multidisciplinary ( eg : ] with . 
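Before the proof, the content of the theorem can be checked numerically. The sketch assumes that a problem is solved only if both parts are solved, that all of a worker's skills may be tried on each part (no barrier), and that pi_r (pi_w) is the probability that a single skill from the right (wrong) discipline fails on a part.

```python
def p_specialist_two_part(pi_r, pi_w, k):
    # discipline 1 is the right one for part 1 and the wrong one for part 2
    return (1 - pi_r ** k) * (1 - pi_w ** k)

def p_generalist_two_part(pi_r, pi_w, k, c, x):
    # x skills in discipline 1, k - c - x in discipline 2
    part1 = 1 - pi_r ** x * pi_w ** (k - c - x)
    part2 = 1 - pi_w ** x * pi_r ** (k - c - x)
    return part1 * part2

k, c = 8, 2
for pi_r, pi_w in [(0.5, 0.6), (0.3, 0.95)]:
    gen = max(p_generalist_two_part(pi_r, pi_w, k, c, x) for x in range(1, k - c))
    print(pi_r, pi_w, p_specialist_two_part(pi_r, pi_w, k), gen)
# when the efficacy gap pi_w - pi_r is small, the specialist's extra depth wins;
# when the gap is large, applying the right tool to each part favours the generalist.
```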
a specialist in discipline will have skills in discipline .the expected probability that her skills will solve _ both _ parts of the problem is *e\left[p\left(\mbox{success on part 2}\right)\right] ] .thus , to determine whether any individual will generalize , i need to compare ] .the _ ex ante _ probability that a generalist with skills in discipline 1 , and skills in discipline 2 solves a problem from a given distribution , , is & = & 1-\left(\delta_{1}h+\left(1-\delta_{1}\right)l\right)^{x}\left(\delta_{2}h+\left(1-\delta_{2}\right)l\right)^{k - c - x}\\ & = & 1-\pi_{1}^{x}\pi_{2}^{k - c - x}\end{aligned}\ ] ] since , ] , which is clearly less than =1-\pi_{1}^{k} ] is strictly increasing in .thus , a generalist will choose a minimal number of skills in the less useful discipline , and =1-\pi_{1}^{k - c-1}\pi_{2} ] and <e\left[p\left(g\right)\right]$ ] . setting implies that .setting implies that .we can verify that in the appropriate ranges , individuals choose to specialize .the result follows immediately .the proof for is similar . for the proof when , see theorem [ thm : communication barriers and generalists ] . finally , theorem [ thm : optimality of the equilibrium-1 ] is the generalization of theorem [ thm : optimality of the equilibrium ] .it states that there is a parameter region in which individuals choose to specialize , but society would prefer to have at least a few generalists .[ thm : optimality of the equilibrium-1]if skills are independent and symmetric within discipline , and there are barriers to working on problems in other disciplines , then there is a range of values for ( the fraction of problems assigned to discipline 1 ) such that generalists are underprovided in the equilibrium population of problem solvers . first , suppose that . the probability that at least one of the problem - solvers in the population solves the problem is .if all of the individuals in the population are specialists in discipline 1 , then every individual has probability of a problem occurring in her discipline . in that case, each specialist in discipline has a probability of solving the problem and of not solving it . with probability , the problem is assigned to the other discipline , and no specialist solves it .thus , the probability of someone in a population of specialists solving the problem is \\ & = & 1-\left[\phi prob\left(\mbox{one fails}\right)^{n}+\left(1-\phi\right)*1\right]\\ & = & 1-\left[\phi\left(\pi_{1}^{k}\right)^{n}+\left(1-\phi\right)*1\right]\\ & = & \phi\left(1-\pi_{1}^{kn}\right)\end{aligned}\ ] ] society is better off with a population of generalists than a population of discipline 1 specialists when , which is true when .however , there is a population of generalists when .it is always the case that .thus , if , then society is better off with a population of generalists , but has a population of specialists . through a similar argument ,society is better off with a population of generalists than a population of discipline 2 specialists when , which is true when .however , there is a population of generalists when .it is always the case that .thus , if , then society is better off with a population of generalists , but has a population of specialists . | many organizations rely on the skills of innovative individuals to create value , including academic and government institutions , think tanks , and knowledge - based firms . 
roughly speaking , workers in these fields can be divided into two categories : specialists , who have a deep knowledge of a single area , and generalists , who have knowledge in a wide variety of areas . in this paper , i examine an individual s choice to be a specialist or generalist . my model addresses two questions : first , under what conditions does it make sense for an individual to acquire skills in multiple areas , and second , are the decisions made by individuals optimal from an organizational perspective ? i find that when problems are single - dimensional , and disciplinary boundaries are open , all workers will specialize . however , when there are barriers to working on problems in other fields , then there is a tradeoff between the depth of the specialist and the wider scope of problems the generalist has available . when problems are simple , having a wide variety of problems makes it is rational to be a generalist . as these problems become more difficult , though , depth wins out over scope , and workers again tend to specialize . however , that decision is not necessarily socially optimal on a societal level , we would prefer that some workers remain generalists . skill acquisition , specialization , jack - of - all - trades , problem solving , knowledge based production , human capital _ jel codes : j24 , o31 , d00 , m53 , i23 _ _ thanks to scott page and ross oconnell . this work was supported by the nsf and the rackham graduate school , university of michigan . computing resources supplied by the center for the study of complex systems , university of michigan . |
power variability is one of the main obstacles facing renewable energy expansion .solar and wind are of main concern since their electrical production is proportional to the variability associated with wind and solar resources .this poses an issue for current electrical infrastructure which was designed around variability in demand , not generation .transient local demand changes over seconds or minutes are for the most part small and spatially uncorrelated resulting a relatively steady demand profile . over several hours, loads can change substantially , but these changes in load have a tendency to be more predictable .this is manifested through daily patterns of morning load pickup and evening load drop - off highly correlated with human activity .wind and solar generation , on the other hand , is variable .an individual wind turbine or solar plant can ramp from full to less than half of production in a minute .on the other hand the aggregate variability of multiple turbines at the same site or even all renewable generators in a balancing area is relatively much less .still such variability is not entirely predictable and therefore causes uncertainty in projecting power output minutes to days ahead .variability and uncertainty are more critical in standalone or island mode applications where a high penetration of renewable power sources ramping near synchronously may create power variability that is large enough to cause substantial power quality and/or grid economics issues .an approach to solve this problem is to increase scheduling and adjustments to controllable loads to `` load follow '' wind and solar generation on the grid .examples of controllable loads are devices such as air conditioners and refrigerators with a temperature dead band that effectively creates a thermal storage reservoir .the load power of these type of machines can be changed temporarily while respecting the demands of the end user .another example is scheduling of more intermittent loads such as water pumps , which can be adjusted to accommodate power variability , claim optimal power usage , and decrease power losses . the control and scheduling of such loads benefits from supply and load forecast .+ load scheduling has been applied in many fields , such as thermal loads , residential appliances , and ev charging . in , a case studywas implemented to accomodate wind power variability through ev charging .another example for household appliances scheduling was demonstrated in . from the supply side, discussed new dispatch methodology to power and control a hybrid wind turbine and battery system . in ,more work was done toward game theory and customers effect on the grid .+ model predictive control ( mpc ) was used in most approaches to compute optimal control or scheduling signals for the load .typically , in mpc a constraint ( quadratic ) optimization problem is solved iteratively over a finite and moving time horizon from till to compute an optimal control signal in real time , denoted here by at time .countless examples of innovative mpc based approaches for load scheduling , grid tied storage systems or to maintain voltage stability can be found in e.g. , , or .although mpc approaches are extremely powerful in computing optimal control signals over a moving but finite time horizon , typically the control signal is allowed to attain any real value during the optimization , see e.g. 
.unfortunately , a real - valued control signal would require distinctive loads on an electric grid to operate at fractional load demands .although this can be implemented by electric storage systems or partial or pulse width modulation of loads , ( non - linear ) dynamic power profiles of the electric loads in terms of dynamic power ramp up / down and minimum time on / off of each load is harder to implement in a standard mpc framework .+ in this paper we define load scheduling as the optimal on / off combinations and timing of a set of distinct electric loads via the computation of an optimal binary control signal . the work is partially motivated by previous work and in which the design and sizing problem of a standalone photovoltaic reverse osmosis ( ro ) system is considered , where the ro loads are to be scheduled on / off .the work in computes the optimal size and number of units for a selected location but lacks the procedure for optimally scheduling dynamic loads .here we aim to find optimal load scheduling by on / off switching of possibly non - linear dynamics electric loads .+ the mpc optimization problem becomes untractable for binary load switching because of the exponentiation growth of the binary combinations in the length of the prediction horizon and the number of loads .constraints on the allowable load switching help to alleviate the combinatorial problem , making mpc optimization with binary switching computationally feasible .the approach presented in this paper will be illustrated in a simulation study in which each load has its own dynamics for both turning on and shutting off .solar forecasting data on a partly cloudy morning and a clear afternoon at uc san diego is used to illustrate how loads are scheduled to turn on / off dynamically to track solar power predictions .the moving horizon nature of forecasts serve as an ideal input to the mpc algorithm . with the finite prediction horizon in mpcit is crucial to have reliable forecast of power delivery .however , in reality cloud advection forecasts become less accurate over longer horizons as the cloud dynamics render the basic assumptions of static clouds invalid .this inaccuracy leads to error in the mpc decision . either an overprediction can cause loads to activate during a period where there is not enough solar energy to meet the demand or an underprediction can prevent loads from being schedule even though energy would have been available .for this reason , we will investigate mis - scheduling due to forecast errors .+ section [ sec : sdlm ] introduces the dynamic loads assumptions in terms of dynamic power ramping and on / off time constraints .section [ sec : dls ] gives the approach for dynamic load scheduling based on power tracking over a moving prediction horizon of points , with an admissible set of binary switching combinations . 
in section [ sec : aex ] different solar forecast methods are considered and the effect of forecast errors on the load scheduling is investigated .advective , persistence and perfect forecasts are used as inputs to the load scheduling algorithm to show how forecasting errors can lead to scheduling errors .the paper is ended by concluding remarks in section [ sec : ccls ] .we consider a fixed number of loads where the power demand , as a function of time for each load is modeled by a known switched dynamic system .for the dynamic scheduling of the loads , loads are assumed to be switched `` on '' or `` off '' by a binary switching signal .each load is also assumed to have a known minimum duration for the `` off '' time of the load when and a minimum duration for the `` on '' time of the load when .the duration times and avoid unrealistic on / off chattering of the switch signal during load scheduling and limit the number of transitions in over a finite optimization period . this can be also an equipment safety or operational constrains . with the minimum on / off duration times and the finite time period for load switching , on / off switching of a load at time can now be formalized .special care should be given to turning on loads at close to the final time which is also depends on the equipments or loads type . for the formalization, the load switching signal will be a combination of an `` on '' signal and an `` off '' signal that both take into account the constraints of minimum on / off duration and the finite time for load switching . as a result ,the admissible on / off transition signal of a load at time can now be formalized by the switching signal where denotes the most recent ( last ) time stamp at which the load was switched `` off '' , and the opposite goes to . for the computational results presented in this paper ,linear first order continuous - time dynamic models will be used to model the dynamics of the power demand of the loads .it should be pointed out that the computational analysis is not limited to the use of linear first order models , as long as the dynamic models allow the numerical computation of power demand as a function of the switching signal . to allow different dynamics for the time dependent power demands whenthe binary switching signal transitions from 0 to 1 ( `` on '' ) or transitions from 1 to 0 ( `` off '' ) , different time constants are used in the first order models .this allows power demands to be modeled at different rates when switching loads . referring back to the admissible on / off transition signals and respectively in [ eq : wion ], the switched linear first order continuous - time dynamic models for a particular load are assumed to be of the form to model the power demand of a load .same goes to the off signal model but different time constants and are used to model respectively the on / off dynamic switching of the load .to achieve the optimal switching times of the binary switching signals for each load discretizing the power demand and the optimal switching signal was performed at a time sampling where is the sampling time and is an integer index . to simplify the integer math, we assume that both the switching times and the minimum on / off duration times are all multiple of the sampling time . 
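A sketch of the switched first-order power-demand model described above, under the assumption that the continuous-time model has the usual relaxation form dP/dt = (w*P_static - P)/tau with separate time constants for switching on and off; the exact symbols of eq. [eqn:odeon] are lost in this extraction, so the form below is a reconstruction rather than a quotation.

```python
import numpy as np

def simulate_load(w, p_static, tau_on, tau_off, dt):
    """w: binary switching signal sampled every dt seconds;
    returns the load power demand at the same sample instants."""
    p = np.zeros(len(w) + 1)
    for n, wn in enumerate(w):
        tau = tau_on if wn == 1 else tau_off
        target = wn * p_static
        # exact zero-order-hold step of the first-order ODE over one sample
        a = np.exp(-dt / tau)
        p[n + 1] = a * p[n] + (1.0 - a) * target
    return p[1:]

# example: a load rated at 0.6 (normalised) switched on for 60 s, then off
w = [1] * 12 + [0] * 8
print(np.round(simulate_load(w, p_static=0.6, tau_on=10.0, tau_off=4.0, dt=5.0), 3))
```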
with the imposed time discretization, the switching signal is held constant between subsequent time samples and and .a zero order hold ( zoh ) discrete - time equivalent of the continuous - time models given earlier in [ eq : odeon ] was used to achieve the computation of , and it is given by for on switching of the load .the coefficients and in the first order zoh discrete - time equivalent models are fully determined by the time constants , , static load demand and the chosen sampling time . defining the optimization that allows the computation of optimal discrete - time switching signals for the power demand of loads . defining a power tracking error it is clear that computing optimal will involve a criterion function and possible constraints on and over a ( finite ) time horizon .choosing to be large , e.g. where is the complete optimization period , results in two major disadvantages.the first one is that the number of possible combinations of the discretized binary switching signal grows exponentially with the number of loads and the number of time steps .fortunately , this can be significantly reduced by the requirement of minimum on / off duration times for the loads.as mentioned before , this avoids unrealistic on / off chattering of the switch signal during load scheduling and significantly reduces the number of binary load combinations .the second disadvantage of choosing to be large requires the discrete - time power profile to be available over many time samples to plan for optimal load scheduling , which leads to increasingly suboptimal schedules due to increasing solar forecast errors . with the imposed time discretization given in [ eq : ni ] , and a finite prediction horizon ,the admissible on / off transition signal in [ eq : wion ] reduces to where now denotes the most recent discrete - time index at which the load was switched `` off '' .similarly reduces to where denotes the most recent discrete - time index at which the load was switched `` on '' . both signals in [ eq : wiondiscrete ] and in [ eq : wioffdiscrete ]form a set of binary values for admissible discrete - time switching signals defined by it is beneficial to note that the number of binary elements in the set is always much smaller than due to required minimum number of on / off samples for the loads .this results shows that constraints on the allowable load switching helps to alleviate the combinatorial problem , making an optimization with binary switching computationally feasible . as an example , consider the case of loads over a power prediction horizon of samples . without any requirements on minimum number of on / off samples one would have to evaluate possible combinations of the load switching signal .starting at a binary combination with all loads off , e.g ] , where the first load that is switched on is required to stay on over the prediction horizon .it is clear from the above illustrations that the number of admissible binary combinations of loads over a prediction horizon of points is in general much smaller than , making the optimization with binary switching computationally feasible for real - time operation .the dynamic load scheduling optimization problem is formulated as a moving horizon optimization problem by following the power tracking error defined in [ eq : powererror ] where + + with the admissible set defined in [ eq : wset ] . 
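The pruning effect of the minimum on/off dwell times quoted above (far fewer than 2^27 joint combinations for 3 loads over 9 samples) can be reproduced with a short enumeration. The handling of the initial dwell and of runs cut short by the end of the horizon is simplified in this sketch.

```python
import itertools

def admissible_sequences(horizon, min_on, min_off, state0=0, dwell0=10**9):
    """All length-`horizon` 0/1 sequences respecting the dwell-time limits,
    starting from state `state0` held for `dwell0` samples already."""
    out = []
    def extend(seq, state, dwell):
        if len(seq) == horizon:
            out.append(tuple(seq))
            return
        # staying in the current state is always allowed
        extend(seq + [state], state, dwell + 1)
        # switching is allowed only once the minimum dwell time has elapsed
        min_dwell = min_on if state == 1 else min_off
        if dwell >= min_dwell:
            extend(seq + [1 - state], 1 - state, 1)
    extend([], state0, dwell0)
    return out

# 3 loads, a 9-sample horizon, minimum on/off times of 3 samples each:
per_load = [admissible_sequences(9, 3, 3) for _ in range(3)]
print([len(s) for s in per_load])                       # far fewer than 2**9 = 512 each
print(len(per_load[0]) * len(per_load[1]) * len(per_load[2]),
      "joint combinations, versus 2**27 without dwell constraints")
```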
adopting the ideas from mpc ,the dimensional optimal switching signal over the optimization horizon and the loads is selected by the evaluation of the criterion function as a function of the power tracking error ( in the future ) at . once the optimal switching signal , is computed , the optimal signal is applied to the loads _ only _ at the time instant , after which the time index is incremented and the optimization in [ eq : mpc ] is recomputed over the moving time horizon . as the admissible set defined in [ eq : wset ] has a finite and countable number of binary combinations for the switching signal ,the optimal value for is computed simply by a finite number of evaluation of the criterion function .hence , no ( gradient ) based optimization is used to compute the final value for .possible candidate functions may include a least squares criterion or may include a barrier function to enforce a positive constraint . such constraints may be required to guarantee that the load demand is always smaller then the ( predicted ) power profile in [ eq : powererror ] . in this paperwe use the quadratic function with a barrier function in [ eq : optcrit ] to perform tracking of predicted solar power curves by dynamic load switching .this algorithm can be implemented for any kind of standalone system ( wind , solar or even hybrid ) with a forecast tool providing input data . here, we present a standalone solar system powering 3 units of normalized sizes rated at 60% , 26% and 12% of full solar power . furthermore , every load has different dynamics for on / off switching modeled by the first order time constants and similar to the model given in ( [ eq : odeon ] ) .the first order time constants and are dependent on the size of the load .as illustrated in figure 3 in , the proposed scheduling approach schedules the on / off status of the three different loads in order to capture as much solar energy as possible , _i.e. _ to decrease the unutilized ( lost ) energy. the algorithm is tested against a clear sky model predicted day as well as the real pv forecast recorded on september 09 , 2014 by ucsd sky imager .the solar production from the clear sky model is presented in figure [ sim ] . as determined in , when short - term variability is small and deterministic ,a smooth scheduling for loads can be achieved , as observed ( figure [ sim ] ) .it can be seen that the scheduling algorithm emphasizes that turning on the largest unit is the main priority .lost energy is minimized by combining all three loads through out the day .seconds and for three loads .http://solar.ucsd.edu/c/wp-content/uploads/2015/11/bellcurvesim1.gif[animation can be found here . ] ] table [ forecastscenarios ] compares the efficiency of the system , defined as the percentage of solar power the loads capture , under the different forecast scenarios .a diminishing returns effect is observed for different forecast horizon and switching time . as observed for , the efficiency of the system increases as decreases . for , increases with , but remains constant for . from tableii we conclude that increasing is computationally costly and provides diminishing returns for greater than 8 times the switch time ..loads characteristics ( seconds ) [ cols="<,<,<,<,<,<",options="header " , ]solar forecasting is critical for load scheduling in standalone pv systems due variability of solar resources . 
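Returning to the receding-horizon step formulated earlier in this section: the sketch below scores every admissible joint switching sequence with a quadratic tracking cost plus a hard barrier enforcing that demand never exceeds the predicted supply (one possible reading of eq. [eq:optcrit]) and applies only the first sample of the winner, as prescribed by eq. [eq:mpc]. The load_power callback is assumed to evaluate the discrete-time load model of the previous section over the horizon; if no combination is feasible, a caller would fall back to switching all loads off.

```python
import itertools
import numpy as np

def mpc_step(p_forecast, per_load_sequences, load_power):
    """p_forecast: predicted available power over the horizon (length Np);
    per_load_sequences: admissible 0/1 sequences for each load;
    load_power(load_idx, sequence) -> power demand over the horizon."""
    best_cost, best_w0 = np.inf, None
    for combo in itertools.product(*per_load_sequences):
        demand = sum(load_power(i, seq) for i, seq in enumerate(combo))
        err = p_forecast - demand
        cost = np.sum(err ** 2)          # quadratic penalty on unutilised power
        if np.any(err < 0):              # barrier: demand must not exceed supply
            cost = np.inf
        if cost < best_cost:
            best_cost = cost
            best_w0 = [seq[0] for seq in combo]   # apply only the first sample
    return best_w0, best_cost
```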
ucsan diego has developed a ground based sky imager to detect clouds , cloud velocity , and forecast the advection of cloud shadows on the ground over the coming 10 to 20 minutes .the imager is composed of an upward facing camera coupled with a fisheye lens to capture a large area of o(10 km2 ) .the forecasting algorithm uses projected cloud locations coupled with a clear sky index model to predict global horizontal irradiance ( ghi ) over the captured domain .this is referred to as `` advection forecast '' .a commonly implemented forecasting method is to assume that the clear sky index _kt _ at time _t _ remains constant for the short period of the forecast _t+n _ , with irradiance increasing at the clear sky value multiplied by _ kt_. this method is called `` persistence '' forecasting. any more complex forecasting technique can be benchmarked against these persistence forecasts .persistence and advection forecast methods are associated with erroneous predictions .persistence errors are due to the deviation from the initial cloud state , which generally increases in time .advection errors arise mainly from inaccuracy of cloud detection and mis - prediction of cloud formation , evaporation , or deformation .for this reason , the accuracy of the ucsd solar forecasting algorithm against perfect , and persistence conditions are compared , and their effects on scheduling algorithms is classified .the aggregate forecast errors for the forecast for this day are : , , , computed by .figure [ compare ] illustrates the results of the mpc scheduling algorithm under the three different forecast scenarios . by design the scheduling under the perfect forecastis error free .however , under persistence and advection forecast predictions we can see several areas where the load demand is greater than the available solar energy . + an overprediction can cause loads to activate during a period where there is not enough solar energy to meet the demand ( power exceedence or pe ) ; an underprediction can prevent loads from being scheduled even though energy would have been available resulting in lost energy .two metrics are considered to evaluate the accuracy of the control algorithms .the first metric is the number of time steps with pe while the second one is called energy exceedence ( ee ) and is equal to integration of pe at the consecutive violated time steps .the ee provides a reasonable estimate for the size of an energy storage system that would avoid pe .+ figure [ compare ] depicts that three pe situations happen for the persistence forecast scenario between 9:00 and 11:00 . at 9:10 ,the intermediate sized unit ( unit 2 ) is in pe for approximately 10 minutes . at 10:20 ,the smallest unit ( unit3 ) is in a shallow pe in two distinct periods over 10 minutes .finally at 10:45 , we see the largest unit is in pe for 5 minutes . referring to the advective forecast we the same exceedance scenarios at 10:20 and 10:45 , but the pe at 9:10 is avoided .+ figure [ overreach ] shows the total ee for the day , the maximum energy of individual consecutive pe events , and the total number of violations for the day as a function of _we see that the persistence forecast leads to higher exceedance and more violations for all _ k_. 
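The persistence benchmark and the exceedance metrics used in this comparison can be written down compactly; variable names and the grouping of consecutive violated samples into events are assumptions of this sketch. The maximum per-event energy exceedance returned here is the quantity discussed further below for sizing a ride-through battery.

```python
import numpy as np

def persistence_forecast(ghi_now, clear_sky_now, clear_sky_future):
    """Hold the clear-sky index kt = GHI / GHI_clear constant over the horizon."""
    kt = ghi_now / clear_sky_now if clear_sky_now > 0 else 0.0
    return kt * np.asarray(clear_sky_future)

def exceedance_metrics(demand, available, dt):
    """demand, available: realised load and solar power series (W), sampled every dt s;
    returns (#violating steps, total EE, max EE of one consecutive event) in Wh."""
    pe = np.maximum(np.asarray(demand) - np.asarray(available), 0.0)   # power exceedence
    violations = int(np.count_nonzero(pe))
    total_ee = pe.sum() * dt / 3600.0
    # energy of each consecutive exceedance event
    events, acc = [], 0.0
    for p in pe:
        if p > 0:
            acc += p * dt / 3600.0
        elif acc > 0:
            events.append(acc)
            acc = 0.0
    if acc > 0:
        events.append(acc)
    return violations, total_ee, max(events) if events else 0.0
```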
increasing _ k _ in general also leads to an increase in exceedance and violations .if we evaluated solar forecast or control algorithms solely based on pe , then a forecast that is biased small ( or even always zero ) would improve the results as less and/or smaller loads would be scheduled . therefore a different metric such as efficiency is important to observe .figure [ efficiency ] shows the efficiency under each scenario as a function of _ k_. it is observed that the perfect forecast has the highest efficiency . for low _k _ , the advective forecast captures more energy , where the opposite is true for high k. + in both figures [ overreach ] and [ efficiency ] , there is non - monotonic behavior associated with increasing _k _ , which is not expected . for this reason, the same analysis was run with three units of equal size ( _ _ ) .this is plotted in figure [ efficiency ] . for this case ,the efficiency decreases monotonically with _k _ ( and is much lower than the non - equal case ) , and the advective forecast always captures more energy than the persistence forecast . we can conclude that the nonlinear behavior seen in non - equal units across increasing _k _ is due to the dynamics associated with starting and shutting down the loads .+ energy storage systems such as batteries could help overcome ee episodes as well as increase the efficiency .the excess of energy that is not captured by the loads ( figures [ compare],[efficiency ] ) could be stored and used to power loads during ee events . for battery sizing one could consider match the energy capacity of the battery to the energy required for maximum exceedance of the largest unit over its minimum required on - time .however , figure [ overreach ] demonstrates that the maximum individual ee for all cases is a small percentage the power of even the smallest unit .reducing the size of the battery reduces the up front capital cost associated with the system . a more sophisticated method for batterysizing would be to probabilistically determine the maximum individual ee case for several years of historical forecast data and size accordingly , which will be the focus of future work .in this paper , a mpc model was developed to compute the optimal binary control signal by determining the on / off combinations and timing of a set of distinct electric dynamic loads scheduling .the mpc load scheduling algorithm was tested using different forecasting techniques to assess the effects of input inaccuracy .the algorithm worked optimally under a perfect forecast ( as designed ) , but created errors due to forecasting error .the energy captured by the loads decreased for increasing _k _ in a non - monotonic manner for optimally sized unequal units . however , for the same scenario with equally ( but not optimal ) sized units , efficiency decreased monotonically with increasing _k _ for all forecast scenarios .the advection forecast model created fewer errors and gave less total exceedance and number of violations independent of the _ k _ , as compared to persistence . for a stand alone system these errorscould be mitigated with storage capacity to provide the power to the load to ride through the exceedance period .a battery sizing method is discussed as the topic of future work .wan , y. h. `` long - term wind power variability '' .nrel / tp-5500 - 53637 .n.p . , jan .2012 . web .makarov , yuri v. , clyde loutan , jian ma , and phillip de mello .`` operational impacts of wind generation on california power systems . 
'' power systems , ieee transactions on 24 , no. 2 ( 2009 ) : 1039 - 1050 .halamay , douglas , ted ka brreserve requirement impacts of large - scale integration of wind , solar , and ocean wave power generationekken , asher simmons , and shaun mcarthur . `` .'' sustainable energy , ieee transactions on 2 , no . 3 ( 2011 ): 321 - 328 .mathieu , johanna l. , maryam kamgarpour , john lygeros , goran andersson , and duncan s. callaway . ``arbitraging intraday wholesale energy market prices with aggregations of thermostatic loads . ''power systems , ieee transactions on 30 , no . 2 ( 2015 ) : 763 - 772 .makarov , yuri v. , pavel v. etingov , jian ma , zhenyu huang , and kamesh subbarao .`` incorporating uncertainty of wind power generation forecast into power system operation , dispatch , and unit commitment procedures . ''sustainable energy , ieee transactions on 2 , no. 4 ( 2011 ) : 433 - 442 .lin , yashen , prabir barooah , and j. mathieu .`` ancillary services to the grid from commercial buildings through demand scheduling and control . ''t. ferhatbe , g. govi zucker , p. palensky , model based predictive control for a solar - thermal system . in africon , 2011 ( pp. 1 - 6 ) .ieee , 2011 .agnetis , alessandro , gianluca de pascale , paolo detti , and antonio vicino .`` load scheduling for household energy consumption optimization . '' smart grid , ieee transactions on 4 , no . 4 ( 2013 ) : 2364 - 2373 .kim , byung - gook , shaolei ren , mihaela van der schaar , and jang - won lee . `` bidirectional energy trading and residential load scheduling with electric vehicles in the smart grid . '' selected areas in communications , ieee journal on 31 , no . 7( 2013 ) : 1219 - 1234 .morari , manfred , and jay h. lee .`` model predictive control : past , present and future . ''computers & chemical engineering 23 , no . 4 ( 1999 )mayne , david q. , james b. rawlings , christopher v. rao , and pierre om scokaert .`` constrained model predictive control : stability and optimality . ''automatica 36 , no . 6 ( 2000 ) : 789 - 814 .wang , tao , haresh kamath , and steve willard .`` control and optimization of grid - tied photovoltaic storage systems using model predictive control . ''smart grid , ieee transactions on 5 , no. 2 ( 2014 ) : 1010 - 1017 .abdulelah h habib , vahraz zamani , and jan kleissl .`` solar desalination system model for sizing of photovoltaic reverse osmosis ( pvro ) . '' in asme 2015 power conference , pp .american society of mechanical engineers , 2015 .abdulelah h habib , vahraz zamani , raymond a. de callafon , jan kleissl , `` sizing of photovoltaic reverse osmosis for solar desalination based on historical data in coastal california '' , international desalination association world congress , 2015 .yang , handa , ben kurtz , dung nguyen , bryan urquhart , chi wai chow , mohamed ghonima , and jan kleissl .`` solar irradiance forecasting using a ground - based sky imager developed at uc san diego . '' solar energy 103 ( 2014 ) : 502 - 524 . | this paper presents and evaluates the performance of an optimal scheduling algorithm that selects the on / off combinations and timing of a finite set of dynamic electric loads on the basis of short term predictions of the power delivery from a photovoltaic source . in the algorithm for optimal scheduling , each load is modeled with a dynamic power profile that may be different for on and off switching . optimal scheduling is achieved by the evaluation of a user - specified criterion function with possible power constraints . 
the scheduling algorithm exploits the use of a moving finite time horizon and the resulting finite number of scheduling combinations to achieve real - time computation of the optimal timing and switching of loads . the moving time horizon in the proposed optimal scheduling algorithm provides an opportunity to use short term ( time moving ) predictions of solar power based on advection of clouds detected in sky images . advection , persistence , and perfect forecast scenarios are used as input to the load scheduling algorithm to elucidate the effect of forecast errors on mis - scheduling . the advection forecast creates less events where the load demand is greater than the available solar energy , as compared to persistence . increasing the decision horizon leads to increasing error and decreased efficiency of the system , measured as the amount of power consumed by the aggregate loads normalized by total solar power . for a standalone system with a real forecast , energy reserves are necessary to provide the excess energy required by mis - scheduled loads . a method for battery sizing is proposed for future work . |
the next generation cherenkov telescope array ( cta ) will be constituted of three types of imaging atmospheric cherenkov telescopes : small- , medium- and large - sized telescopes .the array will be dedicated to the observation of the high energy gamma - ray sky with unprecedented sensitivity over a broad range of energies ( 0.01tev ) .this instrument will enable astronomers and astro - particle physicists to refine models of gamma - ray sources and underlying non - thermal mechanism at work , to study the origin and the composition of the cosmic rays up to the knee region , question the nature of the dark matter , etc .these kind of telescopes detect cherenkov light emitted along developing showers in the atmosphere nearly uniformly illuminating the ground over an area of about 50000m .shower maxima of a gamma - ray induced electromagnetic shower occur at altitudes comprised between km and km , for a primary energy ranging from to .the photon density on the ground depends on the energy of the gamma ray , its incidence direction and the distance from the resulting shower axis .the displacement of the image centroid in the telescope camera is a function of the angular distance of the shower core , defined by shower impact distance and altitude of the shower and is typically about 1.5 for an energy of 1tev and an impact parameter of 150 m , and 3 to 3.5 for energies around 100tev with an impact parameter of 300 m .as very high energy showers penetrate deeply in the atmosphere and generate a large amount of light , they can be observed at a relatively large distance from the main light pool already with relatively small reflectors translating into the necessity of a deployment of telescopes with large field - of - view for the exploration of very high energy showers . in general , the required telescope field - of view in order to record contained shower images ranges between 3 and 10 depending on the energy range of interest .a promising design for wide field observations is the davies - cotton telescope .its advantage is a point - spread function enabling a larger field - of - view than a parabolic design .the schwarzschild - coud design ( e.g. ) is not considered further , as it presents technical and cost challenges compared to a conservative cta proposal using a prime optics design ( see also the comments in section [ sct : winstoncone ] ) .it is demonstrated that once the field - of - view of the camera pixels and of the whole instrument together with the photo - sensor technology is fixed by means of physics arguments , only one parameter is left free .this parameter can be fixed as well , relating the worst optical resolution in the field - of - view , i.e.at the edge of the field - of - view , with the size of a pixel . having a reasonable cost model at hand, even the most cost - efficient single telescope or telescope design operating in an array can be derived . 
knowing the anticipated angular size of a pixel from physics constraints , a requirement on the point - spread function of the telescope can be determined .if the point - spread function is identified by the root - mean - square of the light distribution , requiring a pixel diameter four times the root - mean - square ensures that most of the light from a point source at infinity is concentrated in a single pixel at the edge of the field - of - view .if simulations show that such a containment is not necessary in terms of angular resolution of the shower origin or background suppression , also a smaller value like twice the root - mean - square can be considered , c.f .if this requirement is not optimized or not met , either the physics outcome is worsened or the camera has more pixels than necessary and will not be cost - efficient .the cta array layout will consist of large size telescopes ( lst , primary mirror diameter m ) in the center , and successively surrounded with an increasing number of medium - sized ( mst , m ) and small - sized ( sst , m ) telescopes in order to instrument a ground surface area comprised between 4km and 10km .telescope spacings , sizes and field - of - views reflect the energy range to be explored by a certain type of telescope. the lst sub - array will be primarily focusing on the observation of the high energy gamma - ray sky with great precision below about 100gev , the inter - telescope spacing will be small , about 60 m and the single telescope field - of - view limited to about 34 .the sst sub - array will conversely be optimized for multi - tev observations , up to around 100tev allowing for large spacing of up to 300m500 m with relatively modest reflector size .the camera pixel field - of - view of these different telescope types will span between about 0.080.1 ( lst ) up to 0.20.3 ( sst ) , linked to the different shower intensity with its intrinsic fluctuations and the concurrent necessity of keeping the number of camera pixels reasonably low .consequently , this study will focus on reflector diameters in the range of a meter to about 30 m and on pixel field - of - views in the range between .08 and .3 . and is extended to off - axis angles of the incoming light up to more than 5 . in the following ,first a description is derived for the relations between the existing design parameters of a cherenkov telescopes , e.g.focal length and reflector diameter .then , by including a semi - analytical treatment for the optical quality of a generalized davies - cotton reflector , this description becomes applicable for the design of real telescopes . at the end , the influence of the variation of the parameters on the optimized design is discussed .the light collection is one of the most important parameter of a cherenkov telescope and can be improved by , e.g. , an increase of the photo - detection efficiency of the photo - sensors or a rescaling of the system , i.e. , a corresponding rescaling of the reflective area and photo - sensor size . as current technology ( photo - multiplier tubes , hybrid photo - detectors or silicon photo - multipliers ) only offer a limited choice of photo - sensor sizes , the cherenkov telescope design parameter phase space is reduced .given a particular photo - sensor type , the addition of a light concentrator in front of the photo - sensor is the only way to increase the light collection efficiency . 
in the following, it will be shown that the relationship between the characteristic parameters of an optimal telescope design ( pixel field - of - view and linear size , reflector diameter and focal distance ) is fully constrained , once the technological requirement ( photo - sensor size and the light - guide material ) and the physics requirements ( the pixel angular size and the field - of - view ) are frozen .this result is obtained by imposing restrictions on the value of the point - spread function at the edge of the field - of - view in order to keep adequate image quality and thus analysis potential .the theorem of liouville states that the maximum concentration theoretically achievable is defined by maintaining the phase space , i.e.the product of solid angle , defined by the incoming light rays directions , light ray momentum squared and the surface area crossed by the light rays .the theorem of liouville is applicable to the case of a winston cone with entrance area placed in front of a photo - detector of area , corresponding to the exit area of the cone . is defined provided , the solid angle defined by the light rays entering the cone and the solid angle defined by the light rays leaving the cone .winston has shown in that the maximum concentration factor for a rotationally symmetric light concentrator is where and denote the refractive index of the media in front of and inside the optical system with in air . corresponds to the maximum angle at which a light ray enters the system related to the solid angle . if the system is not axisymmetric or the angular acceptance of the photo detector is smaller than ( as assumed in eq . [eqn : concentration ] ) , has to be adapted accordingly . besides the increase of the light collection area ,the use of cones enables a partial screening of the night - sky light pollution corresponding to being larger than the angle of light rays coming from the edge of the reflector .simulations and recent concentration efficiency measurements of solid cones ( .4 ) designed for the fact camera demonstrated that their shape is nearly ideal , that the concentration factor reaches a value close to and the geometric loss is only of the order of a few percent excluding absorption loss . in the case of hollow cones ,fresnel reflection losses have to be considered at the surface of the photo - sensor .if the camera is sealed with a protective window , which is usually the case , also losses at the window surface need to be taken into account . by choosing a material for the cones and the protective window with a similar refractive index than the material of the photo - sensor light entrance , these losses can be omitted .combined with the almost perfect reflectivity of solid cones due to total reflection ( limited only by the surface roughness ) , solid cone usually outperform hollow cone .the concentration factor achieved with the winston cones is fundamentally similar to the size reduction of the focal plane in a schwarzschild - coud design and linked to the conservation of the space - momentum phase space according to the liouville theorem : the conversion of the spatial into momentum phase space , by means of winston cones or secondary optics . 
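A two-line sketch of the concentration limit quoted above; the numerical example anticipates the geometry of the next paragraphs and assumes a solid cone with n2/n1 ~ 1.5 fed by light arriving within roughly 22 degrees of the axis, about what an f/D ~ 1.25 reflector subtends from the focal plane.

```python
import math

def max_concentration(theta_max_deg, n1=1.0, n2=1.5):
    """Liouville/Winston limit for a rotationally symmetric concentrator."""
    s = math.sin(math.radians(theta_max_deg))
    c_area = (n2 / n1) ** 2 / s ** 2      # maximum area concentration
    return c_area, math.sqrt(c_area)      # and the corresponding linear factor

print(max_concentration(22.0))            # linear factor close to 4, as for the FACT cones
```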
while cones reduce the acceptance of the incoming light rays from a large area at the entrance to a large angular acceptance and small area at the cone exit , the secondary mirror optics leads to a similar spatial compression and angular widening of the light rays at the photo - sensor and thus a reduced plate - scale . while cones are non - imaging devices , the secondary optics is imaging .hence , in the schwarzschild - coud design , it is possible to attain excellent optical resolution , in terms of cherenkov telescope requirements , with a field - of - view as large as 15 .both designs enable compression of the photo sensitive area by factors larger than ten w.r.t .their primary optics design . in the case of a cherenkov telescope ,the light entering the cone comes from a reflector visible under a maximum angle defined from the focal plane center ] .the opening angle of the light at the entry of the cone is therefore well defined by the properties of the optical system , i.e.by the diameter of the reflector and the focal length , . combining this with eq .[ eqn : concentration ] yields for instance , taking the fact values , and , we obtain the theoretically maximal achievable concentration factor , i.e.the linear size of the entrance area can ideally be larger than four times the linear photo - sensor size .in addition , the optical system defines the zoom factor or plate - scale , i.e.the field - of - view corresponding to a physical area in the focal plane .the correspondence between the angular size and the linear size on the focal plane is or in the limit of small , cameras in cherenkov telescopes are pixelized due to the use of photo - detectors . to increase the light collection efficiency further , and to maintain symmetry ,these pixels are usually aligned on a hexagonal grid , i.e.in closed package geometry . in recent years, magic has exploited the photon arrival time extracted from the measured pulse and demonstrated significant improvements in the sensitivity .the technique , taking into account the change of the arrival time between neighboring pixels , performs best , if all neighbors are at an identical distance from the central pixel .consequently , the ideal shape of a pixel is hexagonal .the distance on the camera surface is the distance between two parallel sides of a hexagon , its area is combining with eq .[ eqn : baseformula ] , the plate - scale formula eq .[ eqn : platescale ] and eq .[ eqp : areahexagon ] , is obtained , which translates the close relationship between the pixel field - of - view , the focal length and the reflector diameter , once the technological parameters fixed : photo detector size and light concentrator material . defining a constant related to these properties and rewriting eq .[ eqn : derivedformula ] ( for ) as it is immediately apparent that the focal length of the system is a direct consequence of the pixel field - of - view and the reflector diameter , if the properties of the photon detector and the material of the cones are known . for typical cherenkov telescopes , is between unity and two .below unity the resolution becomes too coarse and above two , not only the number of pixels and hence the price of a camera becomes too high , but also the camera holding structure becomes mechanically complex and hence disproportionally expensive .this constraint on applied to eq .[ eqn : focallength ] yields precisely the choice of and defines the optical quality of a mirror system . 
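The explicit equations connecting pixel field-of-view, focal length and reflector diameter are not reproduced in this extraction, so the following sketch is only one reading of the relations described above: hexagonal pixels operated at the Winston concentration limit, a photosensor of given sensitive area, and the small-angle plate-scale approximation. The 36 mm^2 sensor and the refractive index 1.4 are the benchmark values used later in the text.

```python
import math

def focal_length_from_pixel_fov(pixel_fov_deg, mirror_diameter_m,
                                sensor_area_mm2=36.0, n_cone=1.4):
    """Sketch of the relation described in the text: recover the focal length f (and the
    focal ratio F = f/D) from the pixel field-of-view and the reflector diameter,
    assuming hexagonal pixels at the Liouville/Winston concentration limit."""
    # flat-to-flat size (m) of a hexagon whose area is n^2 * A_sensor / sin^2(theta_max),
    # with the sin(theta_max) factor applied below through the reflector geometry
    k = n_cone * math.sqrt(2.0 * sensor_area_mm2 / math.sqrt(3.0)) / 1000.0
    r = math.tan(math.radians(pixel_fov_deg) / 2.0) * mirror_diameter_m / k
    if r <= 1.0:
        raise ValueError("pixel FoV too small for this sensor/cone/reflector combination")
    x = math.sqrt(r * r - 1.0)           # x = D / (2 f)
    f = mirror_diameter_m / (2.0 * x)
    return f, f / mirror_diameter_m      # focal length and focal ratio F

f, F = focal_length_from_pixel_fov(0.27, 4.0)   # SST-like example used later in the text
print(f"f = {f:.2f} m, F = f/D = {F:.2f}")
```

With these assumptions, a 4 m reflector and a pixel field-of-view of 0.27 give a focal ratio of roughly 1.7, inside the 1.5 to 1.8 range quoted for the sst further below.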
at the same time, the size of a single pixel defines a natural constraint on the optical quality of a system , i.e. , should be chosen such that the point - spread function at the edge of the camera is within a limit well defined by the pixel s field - of - view .the light collection area is important for a cherenkov telescope and typical reflector sizes range from a few to ten or twenty meters .however , with the current technology , it is not possible to produce large mirrors with the requested quality at a reasonable cost .furthermore , optical systems compiled from a single mirror suffer large aberration effects at large off - axis angles , while a wide field - of - view is necessary for the observation of multi - tev showers up to large impact parameters , as well as for extended sources .therefore , segmented mirrors are in use .the layout providing the best optical quality for segments of identical focal length is the so - called davies - cotton layout , where the single spherical mirrors are located on a sphere with radius and focused to a point at . the relevant quantity which influences the on - axis and off - axis optical quality is the focal ratio .the optical quality improves with larger values .this scale invariance statement is true only as long as the optical quality of a single mirror can be neglected against the optical quality of the whole system , which is generally the case at the edge of the camera . to be able to constrain the optical point - spread function , a relation between the tessellation , the focal ratio and the resulting point - spread function is needed for a given maximum inclination angle of the light , i.e.at the edge of the field - of - view of the camera . in [ appendix : a ] , a formalism describing the point - spread function of an ideal davies - cotton reflector , i.e.a reflector with infinite tessellation is presented .the point - spread function is described by the root - mean - square of the light distribution . with the help of ray - tracing simulation , a reasonably good description of a real tessellated davies - cotton reflectoris derived from this analytical approach in [ appendix : b ] . including the correction factor which describes the deviation of the analytical approach from the simulations , a good description of the point - spread functionis obtained .it is shown that the point - spread function of an ideal davies - cotton can be expanded into a polynomial in .given the incident angle of the incoming ray and in the range between one and two , the polynomial is hypothesized into the more simpler form with coefficients and and the tessellation number as described in [ appendix : b ] .this parametrization is found to match the ray - tracing simulation without loss of precision .the coefficient can directly be deduced as the result of eqs .[ eq : realdc ] at and is derived by a fit .an example for the coefficients and for selected is shown in fig .[ fig : coefficients ] .as discussed in the introduction , it is required that the point - spread function is small compared to the pixel field - of - view at the edge of the field - of - view , so that the light of a point source is well contained in one pixel . defining the ratio between both , this requirement can be expressed as combined with eq .[ eqn : pointspreadfunction ] , the focal ratio can now be expressed as including this in eq .[ eqn : focallength ] yields with a correction factor defined as the absolute focal length can now be calculated using eq .[ eqn : focallength ] . 
to deduce the effective reflective area from eq .[ eqn : result ] , the shadow of the camera on the reflector has to be taken into account .to calculate the fraction of the reflector shadowed , the ratio of their areas is calculated .conversion of from an angle to a length yields approximately for small values of ( ) . expressing the focal length by eq .[ eqn : focallength2 ] the fraction of the camera shadow on the reflector is derived . if a real camera housing is significantly larger than the photo sensitive area itself , a correction factor should be included .now the effective light collection area of the optical system can be deduced as if real setups should be compared , like e.g.davies-cotton and schwarzschild - coud , also other sources of light - losses must be included , such as geometrical efficiency of the cones ( light - loss at the edge of the mirror ) , total mirror reflectivity , cone transmission or reflection losses , or photo detection efficiency . the relation given in eq .[ eqn : result ] includes several parameters which are subject to change . for simplicity , a standard setup has been defined to which altered setups are compared .silicon photo - detectors are a recent and very promising technology .therefore , a silicon photo - detector with a sensitive area of 36mm is chosen as a benchmark device .such devices are already commercially available with acceptable properties .although their sensitive area is still rather small compared to photo - multipliers , by increasing their light - collection with solid light concentrators , their light - collection area becomes reasonably large .such light - concentrators still maintain a reasonable weight and length in term of absorption .typical plexiglas materials have a refractive indices of the order of =1.4 and are used hereafter as a reference .as the light - collection area of a telescopes scales directly with the photo - sensitive area , the most obvious use of small photo sensors is a small telescope sensitive mostly to high energetic showers . at high energies ,the collection area of a telescope array is of prime importance due to rapidly decreasing fluxes . due to the bright light - pool of high energy showers , telescopes with relatively small reflectorscan be operated with large spacing of , e.g. , 400 m or more , c.f . . since such spacings demand a large camera field - of - view , a field - of - view of 9 diameteris chosen as a reference .a typical reflector for a cherenkov telescope with davies - cotton layout enables the manufacturing of a primary reflector tessellated into spherical mirror of identical focal lengths . from the scaling with the tessellation number as derived in [ eq : realdc ], it can be concluded that a layout with only three mirrors on the diagonal ( =3 ) has still a significantly worse optical quality than a reflector with five mirrors on the diagonal .although the point - spread function at the center of the camera is clearly dominated by the mirror size , the relative influence almost vanishes at higher off - axis angles . 
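A rough numerical version of the shadowing estimate above: the camera is modelled as a disc of diameter 2 f tan(FoV/2), the housing correction factor is left at one, and the example numbers are the sst-like values from the previous sketch.

```python
import math

def camera_shadowing(mirror_diameter_m, focal_length_m, camera_fov_deg,
                     housing_factor=1.0):
    """Fraction of the reflector shadowed by the camera and the resulting effective
    light-collection area.  `housing_factor` (> 1) would account for a camera housing
    that is larger than the photosensitive area, as mentioned in the text."""
    cam_diameter = 2.0 * focal_length_m * math.tan(math.radians(camera_fov_deg) / 2.0)
    cam_diameter *= housing_factor
    a_mirror = math.pi * mirror_diameter_m ** 2 / 4.0
    a_camera = math.pi * cam_diameter ** 2 / 4.0
    shadow = a_camera / a_mirror
    return shadow, a_mirror * (1.0 - shadow)

shadow, a_eff = camera_shadowing(4.0, 6.7, 9.0)   # SST-like numbers from the sketch above
print(f"camera shadow: {100 * shadow:.1f}% -> effective area {a_eff:.2f} m^2")
```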
since the solution with =3 still shows a degradation of more than 10% compared to the solution with =5 even at the highest simulated off - axis angles , it is discarded .on the other hand , a further increase of the tessellation number ( individual mirror size over primary reflector diameter ) does not significantly improve the optical quality .consequently , choosing =5 is a good compromise and already close to the optimum achievable .comparable results were obtained in although using a third order approximation overestimating the optical quality .it must be noted that the simulation does not take the point - spread function of the individual mirrors nor any possible misalignment into account which must be added quadratically to the result .however , for the solutions discussed here this can be neglected , c.f .in general , alignment errors can be kept minimal and individual mirrors can be machined with a point - spread function small compared to the point - spread function at the edge of the camera . on average ,all davies - cotton designs with a reasonable have a root - mean - square of the light distribution in the tangential direction about two times larger than in the sagittal direction .ideally , the sagittal root - mean - square at the edge of the camera should fit a fourth of the pixel s field - of - view .this ensures that in the sagittal direction 95% of the light is contained within one pixel diameter and roughly 68% in the tangential direction .however , since the point - spread function is not gaussian and has long tails in tangential direction exact numbers for the light content might slightly differ . for convenience ,all following plots show dots for =\{1 , 1.25 , 1.5 , 1.75 , 2}. fig .[ fig : resulta ] shows the reflective area versus the pixel s field - of - view for comparison in the standard case with and without shadowing for different camera field - of - views .since the effect is comparably small and the mirror diameter is more expressive , in the following all plots show the mirror diameter rather than the reflective surface in the non - obstructed case .the effects of changing different input parameters w.r.t . to the previously described benchmark configurationare shown in fig .[ fig : comparison ] and discussed below .[ [ changing-the-camera-field-of-view-fig.figcomparison-top-plot ] ] changing the camera field - of - view ( fig .[ fig : comparison ] , top plot ) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + changing the camera s field - of - view basically shifts the valid range along the line , i.e.the range corresponding to =[1.0 , 2.0 ] .that means that it is possible to build telescopes identical in optical quality , pixel s field - of - view and mirror diameter , but different field - of - view resulting simply in a different focal length of the system . 
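In the spirit of the design curves of fig. [fig:resulta], the table-style sketch below evaluates the reflector diameter as a function of the pixel field-of-view for the focal ratios marked in the plots. It reuses the Liouville-limit pixel relation assumed earlier and ignores the optical-quality constraint that, in the paper, ties the focal ratio to the camera field-of-view, so it should be read as a rough orientation only.

```python
import math

def mirror_diameter(pixel_fov_deg, focal_ratio, sensor_area_mm2=36.0, n_cone=1.4):
    """Reflector diameter (m) for a given pixel FoV and focal ratio F = f/D, using the
    same concentration-limit pixel size as in the earlier sketch (an assumption)."""
    k = n_cone * math.sqrt(2.0 * sensor_area_mm2 / math.sqrt(3.0)) / 1000.0  # metres
    geo = math.sqrt(1.0 + 1.0 / (4.0 * focal_ratio ** 2))
    return k * geo / math.tan(math.radians(pixel_fov_deg) / 2.0)

# design-curve style table: D versus pixel FoV for the F values marked in the figures
for fov in (0.10, 0.15, 0.20, 0.25, 0.30):
    row = ", ".join(f"F={F}: {mirror_diameter(fov, F):5.1f} m"
                    for F in (1.0, 1.25, 1.5, 1.75, 2.0))
    print(f"pixel {fov:.2f} deg -> {row}")
```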
in short : changing the field - of - view only changes the focal length .[ [ changing-the-optical-quality-fig.figcomparison-middle-plots ] ] changing the optical quality ( fig .[ fig : comparison ] , middle plots ) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + a change in the requirement on the optical quality directly influences , and therefore also shifts the range of reasonable almost linearly in ( left plot ) .changing the tessellation ( right plot ) is like changing the requirement on the optical quality .while the difference in optical quality between a davies - cotton layout with three mirrors on the diagonal and five mirrors is still significant , all other layouts give identical results within a few percent . in short : any tessellation number gives similar results .changing the requirement on the optical quality only changes the focal length .[ [ changing-the-photo-sensitive-area-fig.figcomparison-bottom-left-plot ] ] changing the photo sensitive area ( fig .[ fig : comparison ] , bottom left plot ) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + since the constant is directly proportional to the size of the photon detector , the mirror area is directly proportional to the size of the photo sensor .if the size of the photon sensor is limited , a simple way to increase the field - of - view of a single pixel is to sum the signal of several photon counters to a single signal . to maintain a hexagonal , i.e.most symmetric layout , summingthe signal of three , four or seven photon sensors seems appropriate . in short : assuming an optimized light - concentrator , the photo sensor s physical size defines the scale of the system .[ [ changing-the-light-concentrator-fig.figcomparison-bottom-right-plot ] ] changing the light concentrator ( fig .[ fig : comparison ] , bottom right plot ) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + another way to increase the reflective area is an increase of the refractive index of the light concentrator entering quadratically .using solid cones made from a plexiglas material with a typical refractive index in the order of 1.4 allows to increase the achievable reflective area by a factor of two compared to hollow cones .since the length of a typical light concentrator for an exit of 1 mm diameter is in the order of 3mm4 mm , weight and light - attenuation , which is dependent on the length of the material crossed , will define a natural limit on the sensor size for which a solid cone is still efficient . for comparison reasonsnot only solid ( =1.4 ) cones but also intentionally less efficient hollow cones ( =1.0 ) are shown .non - optimum hollow cones are typically used in current cherenkov telescopes , in which the sensitive area of standard photo - detectors ( pmts ) is not a limiting factor . in short : increasing the refractive index , quadratically increases the reflective area of the system .+ + + another interesting aspect for the final performance of a telescope is the collection of background photons from the diffuse night - sky background . here , eq .[ eqn : effectivearea ] leads to an interesting conclusion . 
since the rate of the night - sky background photons per channel scales with the effective reflective area and the solid angle corresponding to the field - of - view of the pixels , the night - sky background rate is proportional to yielding for the range of ] is .its width is given by , where is the speed of light .the interval is of the order 1.1ns for a 4 m class reflector ( , up to slightly less than 4.5ns for a 12 m reflector considering .this short time spread is not a problem for the observation of showers with a small size telescope as it is still small compared to the cherenkov light flash duration . for medium and large size telescopes ,a slightly different mirror arrangement should be chosen if time spread matters . by a mirror arrangement ,intermediate between a spherical ( davies - cotton ) and a parabolic design , the time spread can considerably be improved , maintaining the point - spread function almost completely .while the point - spread function is dominated by the majority of the mirrors , i.e. outermost mirrors , the time spread is dominated by the ones with the largest mirrors , i.e. innermost mirrors .consequently , moving the innermost mirrors closer to a parabola immediately improves the time - spread while the effect on the point - spread function is rather limited .ideally , mirrors on a parabola with adapted focal lengths are used , but might be a cost issue . with adapted focal lengths ,all mirrors are placed at correct focal distance , so that , a similar point - spread function than for the davies - cotton arrangement can be expected . [ [ remarks - about - cta ] ] remarks about cta + + + + + + + + + + + + + + + + + + recent results of fact show that a reflector in the order of 3.5 m diameter can give already reasonable physics results with current analysis and detector technology . therefore , a 4 m diameter reflector for sst is assumed .for physics reason , the field - of - view is supposed to be between 9 and 12 ( leading to a reasonable between 1.5 and 1.8 assuming an optical quality of 4 and a tessellation number of 5 ) . requiring a pixel field - of - view in the order of 0.26 , possible solutions could be solid cones with a 36mm g - apd or hollow cones with a 60mm g - apd , see also fig .[ fig : cta ] .the manufacturing of 60mm g - apds is under discussion with hamamatsu .a rough estimate shows that a solid cone for such a device would be about three times longer than for a 9mm as used in fact .considering the transmission loss of 10% in the fact cones , which is a very conservative estimate , such cones would have a loss in the order of 35% .since solid cones avoid the loss from fresnel reflection at the sealing surface and the g - apd surface , the real light loss would only be around 27% assuming that the hollow cone has a reflectivity of 100% which in reality is not true . keeping the pixel field - of - view constant ,the gain in reflective area corresponds to the refractive index of the cone material squared . in the case of a refractive index of typical poly(methyl methacrylate ) pmma of 1.4 this is a gain of % reflective area , which outperforms the transmission loss significantly . assuming that the manufacturing of a 36mm g - apd would be as easy as of a 60mm g - apd, one can compare a solution with a 36mm g - apd and a solid cone and a 60mm hollow cone ( assuming perfect reflectivity ) . in this case, the transmission loss of the solid cone is around 13% compared to 8% fresnel loss for the hollow cone . 
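The photon-budget comparison quoted above can be summarised in a few lines. Two caveats: the loss figures (roughly 27% for a long solid cone, and 13% versus 8% in the solid-versus-hollow comparison) are the estimates given in the text, not measurements, and the 36mm and 60mm sensor labels are read here as sensitive areas in mm^2, following the benchmark description.

```python
# Rough photon-budget bookkeeping for the cone options discussed above.  The loss
# figures are the estimates quoted in the text, not measured values.
n_pmma = 1.4

# (a) same sensor, solid versus hollow cone: the solid cone gains n^2 in collected light
gain_area = n_pmma ** 2                      # ~1.96, i.e. ~96% more light
net_solid = gain_area * (1.0 - 0.27)         # after the quoted ~27% transmission loss
print(f"solid vs hollow cone, same sensor: net photon gain ~ {net_solid:.2f}x")

# (b) 36 mm^2 sensor with solid cone versus 60 mm^2 sensor with hollow cone, at the same
#     pixel field-of-view: the pixel entrance area scales with (sensor area) * n^2, and
#     so does the reflector area needed to keep the pixel angular size fixed.
area_ratio = (60.0 * 1.0 ** 2) / (36.0 * n_pmma ** 2)
print(f"hollow 60 mm^2 vs solid 36 mm^2: relative reflective area ~ {area_ratio:.2f}")
```

The second number reproduces the roughly 15% smaller reflective area stated below for the hollow-cone option.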
on the other hand, the solution with the hollow cone yields a 15% smaller reflective area (same pixel field-of-view) or 27% more pixels (same reflective area). assuming further that the price of the camera scales with the price of each channel, a reduction of the number of channels by almost 30% reduces the costs for the camera significantly. since the costs are also dominated by the price of the photo-detectors, and the price of g-apds, to first order, scales with the sensitive area, it can be estimated that the price of the 36mm g-apds would be almost a factor of two lower than for the larger ones. in figure [ fig : cta ], possible solutions for mst and lst designs are shown using g-apds and solid cones. in both cases it is convenient to sum at least three, or even seven, pixels into one readout channel to keep the ratio low for construction reasons. (caption of fig. [ fig : conclusion ]: , a pixel size four times the point-spread function and solid cones with . for a given and camera field-of-view, the corresponding reflector diameter and the pixel field-of-view can be read off.) the davies-cotton design, with its simplicity as compared to non-validated dual optic systems, is assuredly a good option for a wide field-of-view, up to 10 to 12, high energy cherenkov telescope. with this study, it is possible to scan a wide phase space of the design of cherenkov telescopes or telescope arrays. this was achieved by a description of the optical performance of davies-cotton reflectors and the introduction of the effect of light-concentrators. in particular, this study provides an analytical description of the optical performance of a tessellated davies-cotton reflector precise enough to enable performance studies without the need for dedicated simulations. by including the effect of the light-collector in the system of equations, the available phase space of design parameters is reduced to a single parameter, once the photon detector has been chosen and either the pixel field-of-view or the camera field-of-view has been fixed by physics constraints. while the choice of photo sensor is usually defined by the availability on the market, constraints on the camera field-of-view are a result of the physics targets. if these two parameters are fixed, the whole available phase space of possible solutions can be scanned by changing a single input parameter. it can, for example, be convenient to scan a reasonable range of the focal ratio and derive all other parameters accordingly. from the result, the most cost-efficient solution, or the one performing best in the sense of the physics targets, can be chosen. for the cherenkov telescope array, several design options were presented. it could be shown that for the small size telescope, considering a camera field-of-view of 9 to 12, a four meter reflector is enough if 36mm sensors are topped with solid cones to achieve a pixel field-of-view in the order of 0.25 to 0.3 at reasonable . an alternative solution is hollow cones with a correspondingly larger sensor area, which is disfavored because of the costs, dominated by the sensor. an example plot which easily allows one to determine reasonable options from the available phase space is shown in fig. [ fig : conclusion ]. the reflector diameter can easily be re-scaled linearly with the photo sensor size and the refractive index of the cone material.
for the medium size and large size telescopes, the most reasonable solution using small sensors would be the summation of three and seven sensors per channel, respectively. equipped with different sum-stages, these modules could be applied in any of the telescopes. larger silicon based sensors, expected soon on the market, would allow a single-channel / single-sensor solution. using several small sensors in one channel has the advantage that the application of solid cones remains possible in terms of weight and transmission, and costs for photo sensors can be kept low due to their at least two times higher compression ratio. the davies-cotton design is known to be promising for wide field prime-focus telescopes and was studied earlier analytically and through simulations. however, the existing parametrizations are only moderately accurate and non-existent for tessellated reflectors. here, parametrizations are provided that are accurate at the percent level up to 12 field-of-view for the ideal (non-constructable) davies-cotton telescope and accurate to a few percent for a realistic davies-cotton telescope with arbitrary tessellation of the reflector. the major issue in the design of a telescope is the reflector and its optical performance. since design parameters like the field-of-view of a single pixel or the field-of-view of the whole camera are closely related to the reflector's optical performance, it is important to understand the relation between the reflector design and its performance. unfortunately, neither spherical nor parabolic mirrors can provide a good optical point-spread function for on-axis and inclined rays at the same time, because the distance from an arbitrary point on the mirror surface to the focal point does not match the local focal length defined by the local radius of curvature. furthermore, in the case of a spherical mirror, the shape of the mirror surface itself is not ideal compared to a parabolic mirror. the parabolic shape ensures that parallel rays from infinity are well focused into a single point (due to the definition of a parabolic surface), while this does not hold in the spherical case. that means that in both cases rays hitting the mirror far off its center have their focal point not at the focal plane; in the case of a spherical mirror they also miss the focal point (_aberration_). consequently, the ideal mirror would combine two properties: a mirror surface which is shaped such that it has the right focal distance at any point, while at the same time any point is correctly oriented, so that focal distance and direction are both correct. since the local normal vector and the local curvature cannot be disentangled, such a mirror can only be a theoretical construction. by tessellating the reflector into individual mirrors, this behavior can be approximated, as shown by davies and cotton, if the reflector is built from several spherical mirrors which are placed on a sphere around the focal point. in this case, the reflector can have the correct focal distance locally and, at the same time, the mirror elements can be oriented such that they correctly focus to the focal point. apart from an improved optical performance for inclined rays, the production of several small and identical mirrors is also much more cost efficient than the production of a single large mirror.
since any optical system can always be linearly scaled , in the following a scale factor is chosen such that the reflector diameter corresponds to unity , which is identical to defining with being the focal length and the diameter of the mirror .the ideal davies - cotton reflector has a non constructable surface .its shape is spherical with radius of curvature , but its normal vectors are defined to intercept at location .formally , the surface equation and the surface normal vectors are to have the root - mean - square of the projection of reflected rays on the focal plane along and coinciding with tangential and sagittal resolutions , a rotation around is performed without loss of generality . an incoming ray with vector will therefore be reflected on the surface in the direction and intercept the ( non curved ) focal plane at , generally yielding for the ideal davies - cotton this takes the explicit form it is straightforward to numerically calculate the image centroid and the resolution of such a telescope and to estimate the contribution of various terms to the resolution with a taylor development of and in terms of , and .the development of terms of the form , with and was found to be sufficient for a percent precision in the resolution parameters .the tangential and sagittal barycenter in the focal plane ( the image centroid ) for a uniform beam on the primary surface are given by and the corresponding resolution in term of root - mean - square : the upper integration bound in originates from the fact that the optical system was scaled to meet a reflector mirror of =1 , hence = .davies - cotton , only leading terms and are retained , at and a maximum off - axis angle of the incoming rays of =5 , terms such that and .as we apply the same conditions to spherical prime - focus design , less terms are present at higher order , i.e.less spherical aberration .to mirror the result for the ideal davies - cotton , several non leading terms are added giving a consistent picture for both developments as shown in table [ tab : coeff ] . in and , order developments for the davies - cotton and sherical mirror , respectively , have been discussed .the obtained coefficients are repeated here for completeness in table [ tab : coeff ] .both solutions show up to 20% fractional error , e.g.at . in the above considerations ,the shadow of a possible detector in the focal plane has been neglected . by changing the lower bound from to in expressions [ expr9 ] , obscurationcan easily be quantified .[ fig : dctelreso - obscur ] shows that obscuration degrades the resolution parameters by about 1.5% at up to about 6% at , 5 off - axis . and of the exact results , i.e.numerically calculated , with ( dashed ) and without ( solid ) obscuration .inset shows the ratio of both for =\{1,1.4,2 } ( solid , dashed , dotted ) ., scaledwidth=47.0% ] a realistic implementation of the non - constructable davies - cotton telescope consists in introducing a reflector made of multiple individual spherical mirrors .the tessellation number is defined as the the number of mirrors in the diagonal . in the limit =, it is identical to the ideal davies - cotton design . 
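The ideal surface just described can be ray-traced directly. The sketch below is a minimal Monte Carlo version under the conventions of the appendix: lengths in units of the reflector diameter, a spherical surface of radius f centred on the focal point with normals pointing to the point at 2f on the axis, a flat focal plane, and no obscuration or facet effects (which is why the on-axis spot comes out essentially point-like; the finite-tessellation blur is treated separately below).

```python
import numpy as np

def ideal_dc_psf(focal_ratio, off_axis_deg, n_rays=200_000, seed=1):
    """Monte Carlo estimate of the tangential/sagittal RMS (deg) of the ideal
    Davies-Cotton reflector: a spherical surface of radius f centred on the focal
    point, with surface normals pointing to the 2f point on the optical axis.
    Lengths are in units of the reflector diameter (D = 1)."""
    rng = np.random.default_rng(seed)
    f = focal_ratio                            # since D = 1
    # uniform sampling of the aperture disc of radius 0.5
    r = 0.5 * np.sqrt(rng.random(n_rays))
    a = 2.0 * np.pi * rng.random(n_rays)
    x, y = r * np.cos(a), r * np.sin(a)
    z = f - np.sqrt(f * f - r * r)             # ideal DC surface, vertex at the origin
    p = np.stack([x, y, z], axis=1)
    # normals point from the surface towards the point (0, 0, 2f)
    n = np.array([0.0, 0.0, 2.0 * f]) - p
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # incoming parallel beam, inclined by the off-axis angle in the x-z (tangential) plane
    phi = np.radians(off_axis_deg)
    d = np.array([np.sin(phi), 0.0, -np.cos(phi)])
    d_ref = d - 2.0 * (n @ d)[:, None] * n     # specular reflection
    t = (f - p[:, 2]) / d_ref[:, 2]            # propagate to the focal plane z = f
    hit = p[:, :2] + t[:, None] * d_ref[:, :2]
    ang = np.degrees(hit / f)                  # small-angle plate-scale conversion
    return ang[:, 0].std(), ang[:, 1].std()    # tangential, sagittal RMS about the centroid

for phi in (0.0, 2.5, 5.0):
    tan_rms, sag_rms = ideal_dc_psf(focal_ratio=1.5, off_axis_deg=phi)
    print(f"phi = {phi:3.1f} deg: tangential RMS = {tan_rms:.4f} deg, "
          f"sagittal RMS = {sag_rms:.4f} deg")
```

Evaluating this at the edge of a wide field-of-view makes it easy to check statements such as the tangential root-mean-square being roughly twice the sagittal one, or to solve numerically for the focal ratio that meets a given pixel containment requirement.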
in practice, is a number .the effective parametrization is presented as a correction to the limited taylor development derived earlier .the correction is implemented through ray - tracing simulation performed with the mars software ( described in ) , which do fully reproduce the results obtained earlier in the case of a spherical and ideal davies - cotton reflectors .although an ideal davies - cotton reflector can not be build in reality , it can be simulated easily .simulations enable the use of arbitrary tessellation , since analytical solution being not very well suited for this task .* individual mirrors are hexagonal . for symmetry reasonseach hexagon is rotated by 15 against the x-/y - axis * the mirrors are fixed on a hexagonal grid in the /-plane with spacing * the diameter of the individual mirrors is defined as , with being the total number of mirrors in the system * their center is located on a sphere around the focal point ( this corresponds to for the ideal davies - cotton ) * the focal length of each mirror is equal to the radius of the sphere , and therefore equal to the focal length of the system * the overall shape of the reflector is also hexagon like * the tessellation number is the number of mirrors in the diagonal * each mirror is oriented to a virtual point in 2 ( this corresponds to for the ideal davies - cotton ) * the small effect of obscuration by the focal instrumentation is neglected * the mirror surface is assumed to be ideal empirically , it could be found that introducing a dependence on the tessellation number , the formulas given in [ appendix : a ] for the spherical mirror and the ideal davies - cotton mirror could be unified . for this , a linear dependence at order in , and a quadratically at order for , has been introduced .additionally , an effective rescaling is needed to reach an accuracy about 5% in the whole simulated range .the root - mean - square of a tessellated davies - cotton can then be written as \frac{\phi^i}{f^j_{\rm eff}}\right)\ , , \nonumber\\ \delta\eta^2 & = & \frac{1}{2 ^ 4}\left(\sum_{j } \frac{s^\eta_{0,j}}{n^2 f^j_{\rm eff } } + \sum_{i>0,j } \left[\frac{s^\eta_{i , j}-d^\eta_{i , j}}{n}+d^\eta_{i , j}\right ] \frac{\phi^i}{f^j_{\rm eff}}\right)\,.\nonumber\\ & & \label{eq : realdc}\end{aligned}\ ] ] the coefficients and are the ones given in table [ tab : coeff ] for the spherical and the ideal davies - cotton , respectively . as defined by formula [ eq : realdc ] determined from fits to the simulated point - spread function .each point is the average of all correction factors obtained for the simulated range of .the error bar denotes the spread .the two curves are the tangential ( upper curve , solid ) and sagittal ( lower curve , dashed ) , respectively.,scaledwidth=48.0% ] describing the transition from a single spherical mirror to an ideal davies - cotton reflector . the rescaling factor be interpreted as the deviation of the shape from the ideal case .its value was determined by minimizing the residual , i.e. 
, between simulated point - spread function and approximated root - mean - square .the differences of the sagittal and tangential residual are minimized independently for each .its value is depicted in fig .[ fig : correction ] .the introduction of this scale factor effectively reduced the residual from a maximum of 12% to less than 5% for tessellation numberssmaller than 40 .[ fig : residual ] shows the distribution of the residuals for different tessellation numbers .note that for the case the simulated single mirror is of hexagonal shape while the analytical approximation describes a disc - like mirror . forcases the properties of the simulated reflector converge to the ideal davies - cotton . while the presented development was calculated for a disc shaped reflector , here , the simulated davies - cotten converge to an ideal hexagon .consequently , in both cases the rescaling factor is expected to be different from unity . in general, it is not expected to obtain a perfect match between the analytical approximation and the simulation , because simulations will always take into account effect which can not be easily described analytically , like rays lost between individual mirrors . from eqs .[ eq : realdc ] it is evident that for rays with small incident angles the point - spread function is dominated by the -order term which decreases fast with high tessellation number . at higher incident angles the point - spread function is dominated by higher order terms which only turn from the spherical to the ideal davies - cotton solution for increasing tessellation numbers .in general the dominating term for reasonable incident angles is the -order term .consequently , the point - spread function dramatically improves for but for changes become unimportant .hence , for practical purposes a single mirror and the case of can be excluded while for most practical purposes will already be enough .it is possible to describe the optical quality of a set of well defined davies - cotton reflectors quite well in a single analytical formula .even the real davies - cotton might be slightly different , e.g.different mirror or reflector shapes or obscuration by the focal plane instrumentation , this gives a very good estimate of the optical performance .stamatescu , v. , rowell , g. p. , denman , j. , _ et al ._ , 2011 , astropart .phys . , 34 , 886 .bernlhr , k. 2008 , astropart .phys . , 30 , 149 .schliesser , a. , and mirzoyan , r. , 2005 , astropart .phys . , 24 , 382 .actis , m. , agnetta , g. , aharonian , f. , _ et al ._ , 2011 , experimental astronomy , 32 , 193 .r. winston , j. c. minano , p. benitz , _ nonimaging optics _, 2004 , elsevier academic press .braun , i. , _ et al ._ , 2009 , in proc . of the international cosmic ray conferencehuber , b. , braun , i. , _ et al . _ , 2011 , in proc . of the international cosmic ray conference .anderhub , h. , backes , m. , biland , a. , _ et al ._ 2011 , nuclear instruments and methods in physics research a , 628 , 107 .aliu , e. , anderhub , h. , antonelli , l. a. , _ et al . _ , 2009 , astropart .phys . , 30 , 293 .anderhub , h. , backes , m. , biland , a. , _ et al ._ , 2012 , proc . of the international symposium on high - energy gamma - ray astronomy ( _ in press _ )mppc multi - pixel photon counters , 2008 , hamamatsu .chris r. benn and sara l. ellison , la palma night - sky brightness .a. k. konopelko , 1997 , towards a major atmospheric cerenkov detector v , 208 .davies , j. m. , and cotton , e. s. , 1957 , j. solar energy sci . 
and eng .vassiliev v. , fegan s. , brousseau p. , 2007, astropart .phys . , 28 , 10 .schliesser a. , mirzoyan r. , 2005 , astropart .phys . , 24 , 382 .bretz , t. and wagner , r. , 2003 , in proc . of the international cosmic ray conference , 5 , 2947 .bretz , t. , 2005 , high energy gamma - ray astronomy , 745 , 730 .bretz , t. and dorner , d. , 2008 , american institute of physics conference series , 1085 , 664 . | this paper discusses the construction of high - performance ground - based gamma - ray cherenkov telescopes with a davies - cotton reflector . for the design of such telescopes , usually physics constrains the field - of - view , while the photo - sensor size is defined by limited options . including the effect of light - concentrators in front of the photo sensor , it is demonstrated that these constraints are enough to mutually constrain all other design parameters . the dependability of the various design parameters naturally arises once a relationship between the value of the point - spread functions at the edge of the field - of - view and the pixel field - of - view is introduced . to be able to include this constraint into a system of equations , an analytical description for the point - spread function of a tessellated davies - cotton reflector is derived from taylor developments and ray - tracing simulations . including higher order terms renders the result precise on the percent level . + design curves are provided within the typical phase space of cherenkov telescopes . the impact of all design parameters on the overall design is discussed . allowing an immediate comparison of several options with identical physics performance allows the determination of the most cost efficient solution . emphasize is given on the possible application of solid light concentrators with their typically about two times better concentration allowing the use of small photo sensors such as geiger - mode avalanche photo diodes . this is discussed in more details in the context of possible design options for the cherenkov telescope array . in particular , a solution for a 60mm photo sensor with hollow cone is compared to a 36mm with solid cone . tev cherenkov astronomy , davies - cotton design parameters , photo - sensors , winston cone |
during the last decade , several significant model - free formulas , describing the asymptotic behavior of the implied volatility at extreme strikes , were found .we only mention here r. lee s moment formulas ( see ) , tail - wing formulas due to s. benaim and p. friz ( see , see also ) , asymptotic formulas with error estimates established by the author ( see ) , andhigher order formulas found by k. gao and r. lee ( see ) .we refer the interested reader to the book by the author for more information .the present work was inspired by the paper of s. de marco , c. hillairet , and a. jacquier .the authors of obtained interesting results concerning the asymptotic behavior of the implied volatility at small strikes in the case where the asset price distribution has an atom at zero ( see theorem 3.7 in ) .special examples of such models are the constant elasticity of variance model , jump - to - default models , and stochastic models described by processes stopped at the first hitting time of zero ( more information can be found in ) .it is not hard to see that the right - wing behavior of the implied volatility in models with and without atoms is similar .therefore , general model - free asymptotic formulas for the implied volatility at large strikes , discussed in chapter 9 of , can be used in stochastic asset price models with atoms .however , the left - wing behavior of the implied volatility in models with and without atoms is qualitatively different . this fact was noticed and explored in .it was also shown in that the general formula formulated in corollary 9.31 in , which describes the left - wing behavior of the implied volatility in terms of the put pricing function , holds for asset price models with atoms ( see formula ( [ e : a ] ) below ) .for such models , the above - mentioned formula provides only the leading term in the asymptotic expansion of the implied volatility and an error estimate .the authors of found a sharper asymptotic formula , characterizing the left - wing behavior of the implied volatility in models with atoms ( see formula ( [ e : one ] ) below ) .note that the impact of an atom at zero on the left - wing asymptotics of the implied volatility was not taken into account in section 9.9 of .this omission led to an incorrect description of the asymptotic behavior of the implied volatility at small strikes in the cev model ( see formula ( 11.22 ) in theorem 11.5 in ) .only the absolutely continuous part of the distribution of the asset price was taken into account in formula ( 11.22 ) mentioned above , while the influence of the atom at zero was ignored . in this paper, we establish new asymptotic formulas for the implied volatility at small strikes in models with atoms ( see formulas ( [ e : imvol2 ] ) , ( [ e : imvol222 ] ) , and ( [ e : imvol22 ] ) below ) .these formulas contain three explicit terms in the asymptotic expansion of the implied volatility and an error estimate .note that the asymptotic formula found in contains two terms and only an incomplete information about the third term is provided .moreover , there is a qualitative difference between the new formulas and the de marco - hillairet - jacquer formula . 
in the new formulas , we use the inverse function of a strike - dependent function , while the inverse function of the cumulative standard normal distribution function is employed in .it is shown numerically in section [ s : num ] of the present paper that formula ( [ e : imvol22 ] ) provides a significantly better approximation to the left wing of the implied volatility in the constant elasticity of variance model than the de marco - hillairet - jacquier formula .our next goal is to introduce several known objects , which will be used in the rest of the paper , and then formulate our main results .the asset price will be modeled by a non - negative martingale defined on a filtered probability space .the initial condition for the process is denoted by , and it is assumed that is a positive number .it is also assumed that the interest rate is equal to zero . in the sequel, the symbols and stand for the call and put pricing functions , associated with the price process .these functions are defined as follows : \quad\mbox{and}\quad p(t , k)=\mathbb{e}\left[(k - x_t)^{+}\right].\ ] ] in the previous formulas , is the strike price , and is the maturity . the implied volatility is the function , satisfying the following condition : the expression on the left - hand side of ( [ e : iv ] ) is the call pricing function in the black - scholes model with the volatility parameter equal to . the function is defined by where is the standard normal cumulative distribution function , that is , the function the functions and in ( [ e : cumul ] ) are defined by and [ r : r ] it will be assumed throughout the paper that for all and .moreover , we assume that for all and .the first of the previous restrictions guarantees that exists for all , while under the second restriction , the implied volatility exists for all ( more details can be found in section 9.1 of ) .[ r:11 ] the maturity will be fixed throughout the paper . to simplify notation, we will suppress the symbol in the function and in similar functions .[ r : navigate ] in the proofs of the results obtained in the present paper , we often assume that .it is easy to understand why this assumption does not restrict the generality .indeed , let us define a new stochastic process by , and denote the corresponding call pricing function and the implied volatility by and , respectively . then, it is not hard to see that .moreover , the same formula holds for the black - sholes call pricing function , and therefore since the process has as its initial condition , formula ( [ e : nownot ] ) allows us to navigate between asymptotic formulas for the implied volatility under the restriction and similar formulas in the general case .let , , and set .if for every , we have , then the function defined by is a call pricing function ( see ) .the function plays the role of a link between the left - wing and the right - wing asymptotics of the implied volatility ( see ) . 
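For later numerical illustrations it is convenient to have the two basic objects just introduced in executable form: the Black-Scholes call price at zero interest rate and the implied volatility obtained by inverting (e:iv). The sketch below uses scipy for the normal distribution function and Brent's root finder; it is standard material rather than anything specific to the paper.

```python
import math
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(x0, K, sigma, T):
    """Black-Scholes call price with zero interest rate (the setting of the text)."""
    if sigma <= 0.0:
        return max(x0 - K, 0.0)
    d1 = (math.log(x0 / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return x0 * norm.cdf(d1) - K * norm.cdf(d2)

def implied_vol(call_price, x0, K, T, hi=20.0):
    """Invert the Black-Scholes formula numerically, i.e. solve C_BS(K, I(K)) = C(K)."""
    return brentq(lambda s: bs_call(x0, K, s, T) - call_price, 1e-12, hi)

# quick self-check: recover a known volatility
price = bs_call(1.0, 0.8, 0.35, 1.0)
print(implied_vol(price, 1.0, 0.8, 1.0))   # ~0.35
```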
now , suppose for some .let us fix such a maturity , and consider , , and as functions of the strike price .note that for models with atoms , the function , given by ( [ e : g ] ) , is not a call pricing function .indeed , the function does not satisfy the condition as .however , the function has many features of a call pricing function .for example , it is not hard to see that the black - scholes implied volatility exists for all ( see remark [ r : r ] ) , and , in addition , for all with .we also have where is a positive function such that the proof of ( [ e : r2 ] ) and ( [ e : m ] ) is simple .indeed , it follows from the definition of the put pricing function that ,\ ] ] where is the distribution of on the open half - line .hence , now it is clear that ( [ e : lim ] ) implies ( [ e : r2 ] ) .finally , the proof of the equality in ( [ e : r1 ] ) for asset price models with atoms is the same as the proof of lemma 9.23 in .[ r : distd ] we denote by and the cumulative distribution functions of the random variable on and , respectively .these functions are given by it is clear that we have already mentioned the asymptotic formula in corollary 9.31 in .this formula is as follows : \nonumber \\ &\quad+o\left(\left(\log\frac{k}{p(k)}\right)^{-\frac{1}{2}}\log\log\frac{k}{p(k)}\right ) \label{e : a}\end{aligned}\ ] ] as . using the mean value theorem and the formula for small values of ( the previous formula follows from ( [ e : g ] ) and ( [ e : r2 ] ) ) ,we obtain as .this means that in the presence of atoms , the general formula given in ( [ e : a ] ) provides only the leading term in the asymptotic expansion of the implied volatility near zero .the expression for the leading term given in ( [ e : zero ] ) can also be predicted from lee s moment formula ( see ) .indeed , for models with atoms , all the moments of negative order of the distribution of the asset price explode .we will next formulate the main result of , adapting it to our notation .suppose there exists such that as .then as . in ( [ e : one ] ), the symbol stands for the inverse function of the standard normal cumulative distribution function . in addition, the function in ( [ e : one ] ) satisfies a special estimate of order as ( see theorem 3.7 in for more details ) . note that the term in ( [ e : one ] ) is not really the third term in the asymptotic expansion of , because of an interplay between the expression in ( [ e : inter ] ) and the function as . in theorem [ t : corrf ] and corollary[ c : sims ] below , we provide asymptotic formulas for the implied volatility at small strikes with three terms and error estimates of order as .the main novelty in our approach is that instead of the function used in formula ( [ e : one ] ) , we employ a family of strike - dependent inverse functions , , where it is easy to see that for every , the function is strictly increasing on the interval ( differentiate ! ) , and moreover it follows from the reasoning above that the inverse function exists for all with is strictly increasing on the interval , and maps this interval onto the interval .[ r : notethat ] note that exists for all such that and therefore , exists for all with . 
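The behaviour of the put price near zero that drives the whole analysis, namely P(K)/K tending to the mass of the atom at zero, is easy to see in a toy example. The model below (an atom of mass p at zero plus a lognormal component rescaled so that the price is a martingale at maturity) is an illustrative choice made here; it is not a model discussed in the text.

```python
import math
from scipy.stats import norm

def put_with_atom(K, x0=1.0, p=0.2, sigma=0.3, T=1.0):
    """Put price in a toy model with an atom at zero: X_T = 0 with probability p and
    X_T lognormal with mean x0/(1-p) otherwise, so that E[X_T] = x0."""
    m = x0 / (1.0 - p)                       # conditional forward of the lognormal part
    d1 = (math.log(m / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    bs_put = K * norm.cdf(-d2) - m * norm.cdf(-d1)
    return p * K + (1.0 - p) * bs_put

# P(K)/K tends to the mass of the atom at zero as K -> 0
for K in (0.5, 0.1, 0.02, 0.005):
    print(f"K = {K:6.3f}:  P(K)/K = {put_with_atom(K) / K:.4f}   (atom mass p = 0.2)")
```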
on the other hand ,if , then we have to assume that condition ( [ e : mainr ] ) holds .according to remark [ r : notethat ] , is defined for all provided that .in addition , if , then is defined under the following restriction : note that , given , there exists such that ( [ e : acco1 ] ) holds for all and moreover .therefore , for all , we can solve the equations and by inverting the function .the next lemma is simple , and we omit the proof .[ l : analysis ] let , and suppose satisfies ( [ e : accord ] ). then the inequality holds if and only if the next two statements are the main results of the present paper . [t : corrf ] let .then the following asymptotic formula holds for the implied volatility in the asset price models such as above : ^ 2 \left(\log\frac{x_0}{k}\right)^{-\frac{1}{2 } } \nonumber \\ & \quad+o\left(\left(\log\frac{x_0}{k}\right)^{-\frac{3}{2}}\right ) \label{e : imvol2}\end{aligned}\ ] ] as .[ c : simson ] let , and suppose the random variable is such that as .then ^ 2 \left(\log\frac{x_0}{k}\right)^{-\frac{1}{2 } } \nonumber \\ & \quad+o\left(\left(\log\frac{x_0}{k}\right)^{-\frac{3}{2}}\right ) \label{e : imvol222}\end{aligned}\ ] ] as .[ c : sims ] let , and suppose the random variable satisfies condition ( [ e : ost ] ) .then ^ 2 \left(\log\frac{x_0}{k}\right)^{-\frac{1}{2 } } \nonumber \\ & \quad+o\left(\left(\log\frac{x_0}{k}\right)^{-\frac{3}{2}}\right ) \label{e : imvol22}\end{aligned}\ ] ] as .it is not hard to see that corollary [ c : sims ] follows from theorem [ t : corrf ] , remark [ r : distd ] , formulas ( [ e : r2 ] ) and ( [ e : lim ] ) , and the mean value theorem .[ r : vdrug ] it is interesting to notice that in formulas ( [ e : imvol2 ] ) and ( [ e : imvol22 ] ) , the term of order is absent .this happens because of certain cancellations , which occur when we combine formulas ( [ e : ex1 ] ) and ( [ e : imvol3 ] ) in the proof of theorem [ t : corrf ] .[ r : between ] the smile approximations provided in formulas ( [ e : imvol2 ] ) and ( [ e : imvol222 ] ) take into account the distribution of the asset price , while the approximations in formula ( [ e : imvol22 ] ) and in the de marco - hillairet - jacquier formula ( see ( [ e : one ] ) ) use only the mass of the atom at zero .[ r : vdrugs ] comparing the de marco - hillairet - jacquier formula ( [ e : one ] ) and our formula ( [ e : imvol22 ] ) , one notices two main differences .first of all , formula ( [ e : imvol22 ] ) contains the expression instead of the expression appearing in formula ( [ e : one ] ) . on the other hand ,the function in ( [ e : one ] ) satisfies as , while the error term in ( [ e : imvol22 ] ) is as . we will prove theorem [ t : corrf ] in section [ s : next ] , while in section [ s : appl ] we will derive the de marco - hillairet - jacquier formula from our theorem [ t : corrf ] .in addition , in section [ s : appl ] we give estimates for the difference at small strikes. section [ s : cevm ] deals with the left - wing behavior of the implied volatility in the cev model . finally , in the last section of the paper ( section [ s: num ] ) , we compare the performance of two formulas , providing approximations to the implied volatility at small strikes in the cev model : the de marco - hillairet - jacquier formula and the formula in corollary [ c : sims ] in the present paper .we have already mentioned that it suffices to prove the theorem in the case where . 
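Before turning to the proofs, a small numerical experiment shows what the expansions above are correcting. In the same toy atom model as before (again an illustrative choice, not taken from the text), the exact smile is computed by inverting Black-Scholes and compared with the leading term sqrt(2 log(x0/K)/T); the residual gap is what the second and third order terms of theorem [t:corrf] and its corollaries quantify.

```python
import math
from scipy.optimize import brentq
from scipy.stats import norm

x0, p, sigma, T = 1.0, 0.2, 0.3, 1.0        # illustrative parameters
m = x0 / (1.0 - p)                          # conditional forward, so that E[X_T] = x0

def bs_call(spot, K, s, T):
    d1 = (math.log(spot / K) + 0.5 * s * s * T) / (s * math.sqrt(T))
    return spot * norm.cdf(d1) - K * norm.cdf(d1 - s * math.sqrt(T))

def smile(K):
    """Exact implied volatility in the toy atom model (X_T = 0 with probability p)."""
    price = (1.0 - p) * bs_call(m, K, sigma, T)   # the atom contributes nothing to calls
    return brentq(lambda s: bs_call(x0, K, s, T) - price, 1e-10, 20.0)

print(" K        I(K)     sqrt(2 log(x0/K)/T)   difference")
for K in (0.2, 0.05, 0.01, 0.001):
    iv = smile(K)
    lead = math.sqrt(2.0 * math.log(x0 / K) / T)
    print(f"{K:7.3f}  {iv:7.4f}       {lead:7.4f}        {iv - lead:+.4f}")
```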
for every small number , set the following assertion provides two - sided estimates for the implied volatility .[ t : ochen ] let . then there exists such that for all . in ( [ e : wildi3 ] ) , the functions and are defined as follows : ^ 2 \nonumber \\ &\quad+(u_k)^{-1}(g(k))\sqrt{\left[(u_k)^{-1}(g(k))\right]^2 + 2\log k } \label{e : fina7}\end{aligned}\ ] ] and ^ 2 \nonumber \\ & \quad+(u_k)^{-1}(\widetilde{g}_{\varepsilon}(k ) ) \sqrt{\left[(u_k)^{-1}(\widetilde{g}_{\varepsilon}(k))\right]^2 + 2\log k}. \label{e : final2}\end{aligned}\ ] ] _ proof of theorem [ t : ochen ] .our first goal is to find two functions and satisfying the following conditions : for sufficiently large values of .the inequalities in ( [ e : fina2 ] ) will allow us to estimate the implied volatility for large strikes , and hence to characterize the left - wing behavior of the implied volatility ._ let be a real function growing slower than .the function will be chosen later .put then we have and therefore , our next goal is to estimate the last term in ( [ e : ro1 ] ) .we will use the following known inequalities : ^{-\frac{x^2}{2 } } \le\frac{1}{\sqrt{2\pi}}\int_x^{\infty}e^{-\frac{y^2}{2}}dy \le\frac{1}{\sqrt{2\pi}}\frac{1}{x}e^{-\frac{x^2}{2}}. \label{e : sestim1}\ ] ] the estimates in ( [ e : sestim1 ] ) follow from stronger inequalities formulated in , 7.1.13 . taking into account ( [ e : sestim1 ] ) , we see that and } \nonumber \\ & \quad\exp\left\{-\frac{\varphi(k)^2}{4(\log k+\varphi(k))}\right\}. \label{e : sestim3}\end{aligned}\ ] ] let us next suppose that the function has the following form : where is such that in ( [ e : constant ] ) , is a real number. then we have \nonumber \\ & \le\frac{3\phi(k)^2}{16(\log k)^{\frac{3}{2 } } } , \label{e : rur}\end{aligned}\ ] ] for all .it follows from ( [ e : sestim2 ] ) and the first inequality in ( [ e : rur ] ) that for all .note that as . to get a lower estimate for , we observe that } \nonumber \\ & \quad\exp\left\{-\frac{\varphi(k)^2}{4(\log k+\varphi(k))}\right\ } \nonumber \\ & \le\frac{1}{4\sqrt{\pi}}\frac{(\log k+\varphi(k))^{\frac{3}{2}}}{(\log k)^3 } = \frac{1}{4\sqrt{\pi}}\frac{\left(1+\frac{\phi(k)}{\sqrt{\log k}}\right)^{\frac{3}{2}}}{(\log k)^{\frac{3}{2 } } } \nonumber \\ & \le\frac{1}{4\sqrt{\pi}}\left[\frac{1}{(\log k)^{\frac{3}{2}}}+\frac{3\phi(k)}{2(\log k)^2}+\frac{3\phi(k)^2 } { 8(\log k)^{\frac{5}{2}}}\right],\quad k > k_1 .\label{e : coc}\end{aligned}\ ] ] in the proof of ( [ e : coc ] ) , we used the inequality it follows from ( [ e : sestim3 ] ) , ( [ e : phi ] ) , ( [ e : rur ] ) , and ( [ e : coc ] ) that ,\end{aligned}\ ] ] for all . 
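The display of the tail bounds (e:sestim1) is partly garbled in this extraction; the standard pair consistent with the cited entry 7.1.13 of Abramowitz and Stegun is x/(1+x^2) phi(x) <= 1 - Phi(x) <= phi(x)/x, and the short check below verifies it numerically. This is offered as a plausible reading of the inequalities used in the estimates above, under that assumption.

```python
import numpy as np
from scipy.stats import norm

# Numerical check of the Gaussian tail (Mills-ratio) bounds:
#   x/(1+x^2) * phi(x)  <=  1 - Phi(x)  <=  phi(x)/x .
x = np.linspace(0.5, 10.0, 20)
tail = norm.sf(x)                       # 1 - Phi(x)
lower = x / (1.0 + x ** 2) * norm.pdf(x)
upper = norm.pdf(x) / x
assert np.all(lower <= tail) and np.all(tail <= upper)
print("bounds hold; upper/tail ratio at x = 10:", float(upper[-1] / tail[-1]))
```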
therefore , condition ( [ e : constant ] ) implies that for every there exists a sufficiently large number such that for every , next , taking into account ( [ e : ro1 ] ) , ( [ e : cons ] ) , and the previous inequality , we see that the following statement holds .[ l : estime ] for every and all , where lemma [ l : estime ] provides estimates for the function , which differ by a quantity of the higher order of smallness than .let us next choose the function so that the function , given by the formula in ( [ e : fina ] ) with instead of , satisfies .it follows from ( [ e : fina ] ) that now , using ( [ e : fina4 ] ) we see that in order to find the value of we should solve the following quadratic equation : ^ 2\phi_1(k ) \nonumber \\ & \quad-2\sqrt{\log k}\left[(u_k)^{-1}(g(k))\right]^2=0 .\label{e : fina5}\end{aligned}\ ] ] solving ( [ e : fina5 ] ) and taking into account ( [ e : fina4 ] ) , we obtain where is defined by ( [ e : fina7 ] ). we will next check that the function defined by ( [ e : fina6 ] ) is admissible , that is , condition ( [ e : constant ] ) holds for . recall that , and hence ( [ e : r2 ] ) gives follows from the definition of the function ( see ( [ e : fina ] ) ) that thus it is not hard to see that is a bounded function .indeed , ( [ e : ogran1 ] ) implies that where is a positive constant .therefore , and thus the function is bounded for large values of .now , using ( [ e : ogran1 ] ) , we see that and hence , where is the constant appearing in ( [ e : constant ] ) for the function .it follows from lemma [ l : estime ] and the equality that where and is given by ( [ e : fina7 ] ) .therefore , to get a lower estimate for , we will reason similarly .the only difference here is that we replace the equation by the equation , where is defined by ( [ e : gg ] ) , is fixed , and is a function such as in ( [ e : fina ] ) , but with an unknown function instead of the function .next , we can prove that where the function is given by ( [ e : final2 ] ) . moreover , the function is bounded for large values of , and therefore , where is the constant appearing in ( [ e : constant ] ) for the function .let us set with given by ( [ e : final2 ] ) .it follows that and hence now , it is clear that formula ( [ e : wildi3 ] ) in theorem [ t : ochen ] follows from ( [ e : r1 ] ) , ( [ e : wild1 ] ) , ( [ e : wild2 ] ) , ( [ e : wildi1 ] ) , and ( [ e : wildi2 ] ) .this completes the proof of theorem [ t : ochen ] .we will next estimate the difference between the lower and the upper estimates for the implied volatility in formula ( [ e : wildi3 ] ) .first , note that since the function is eventually bounded , formula ( [ e : fina4 ] ) shows that the function is also eventually bounded .similarly , the function is eventually bounded .it follows from ( [ e : fina7 ] ) and ( [ e : final2 ] ) that the functions and , are equivalent to the function near infinity .therefore , as .to estimate the difference , we observe that the function is lipschitz on every proper subinterval of the interval with the lipschitz constant independent of on every such interval . 
now, it is not hard to see , using ( [ e : gg ] ) , ( [ e : fina7 ] ) , and ( [ e : final2 ] ) that \right)=o\left(\frac{1}{\log k}\right)\ ] ] as .next , by taking into account ( [ e : smale ] ) , we obtain as .[ t : imvol ] the following formula holds for the implied volatility as : in ( [ e : imvol1 ] ) , the function is defined by ( [ e : fina7 ] ) .theorem [ t : imvol ] follows from theorem [ t : ochen ] and ( [ e : imvol ] ) .[ r:1 ] note that the -estimate in formula ( [ e : imvol1 ] ) depends on .more precisely , formula ( [ e : imvol1 ] ) should be understood as follows . for every thereexist and such that for all ._ proof of theorem [ t : corrf ] ( continuation ) .expanding the function near zero , we obtain as .now , using ( [ e : expan ] ) and the fact that the function grows like , we see that as .moreover , ( [ e : fina7 ] ) and ( [ e : expan ] ) imply that ^ 2 \nonumber \\ & \quad+\sqrt{2}{\sqrt{\log k}}(u_k)^{-1}(g(k))\sqrt{1+\frac{\left[(u_k)^{-1}(g(k))\right]^2 } { 2\log k } } \nonumber \\ & = \sqrt{2}(u_k)^{-1}(g(k))\sqrt{\log k}+\left[(u_k)^{-1}(g(k))\right]^2 \nonumber \\ & \quad+\frac{\sqrt{2}\left[(u_k)^{-1}(g(k))\right]^3}{4\sqrt{\log k } } + o\left((\log k)^{-\frac{3}{2}}\right ) \label{e : imvol3}\end{aligned}\ ] ] as .next , combining ( [ e : ex1 ] ) and ( [ e : imvol3 ] ) , we see that ^ 2}{2(\log k)^{\frac{1}{2 } } } \nonumber \\ & + \frac{\sqrt{2}\left[(u_k)^{-1}(g(k))\right]^3}{8\log k}-\frac{\left[(u_k)^{-1}(g(k))\right]^2 } { 4(\log k)^{\frac{1}{2}}}-\frac{\sqrt{2}\left[(u_k)^{-1}(g(k))\right]^3}{4\log k } \nonumber \\ & + \frac{\sqrt{2}\left[(u_k)^{-1}(g(k))\right]^3}{8\log k}+o\left((\log k)^{-\frac{3}{2}}\right ) \nonumber \\ & = ( \log k)^{\frac{1}{2}}+\frac{\sqrt{2}}{2}(u_k)^{-1}(g(k))+\frac{\left[(u_k)^{-1}(g(k))\right]^2}{4}(\log k)^{-\frac{1}{2 } } \nonumber \\ & + o\left((\log k)^{-\frac{3}{2}}\right ) \label{e : imvol4}\end{aligned}\ ] ] as .now , it is clear that ( [ e : imvol2 ] ) follows from ( [ e : imvol1 ] ) and ( [ e : imvol4 ] ) . _this completes the proof of theorem [ t : corrf ] .in the present section , we explain how to derive the asymptotic formula for the left wing of the implied volatility due to de marco , hillairet , and jacquier from our formula ( [ e : imvol2 ] ) .note that formula ( [ e : imvol2 ] ) is very sensitive to even small changes .such changes often produce errors of order as .the next statement is essentially the result obtained in theorem 3.7 in .[ c : fina ] let .then in ( [ e : dmhjf ] ) , the function satisfies the following condition : where _ proof .our first goal is to replace the expression in formula ( [ e : imvol2 ] ) by the expression , and estimate the error . 
for the sake of shortness, we put _ [ l : ttu ] let .then the following asymptotic formula is valid as : where }{4\sqrt{t}}(\log u)^{-\frac{1}{2}},\ ] ] and is defined by ( [ e : ppf ] ) .lemma [ l : ttu ] follows from theorem [ t : corrf ] and ( [ e : ppf ] ) .the next lemma provides an estimate for the function .[ l : smallch ] the following formula holds : where the function is given by ( [ e : limsup ] ) .let us first assume that .this assumption is equivalent to the following : .then , using ( [ e : fina3 ] ) , we see that for , we have therefore , for , and since the mean value theorem implies that where next , using ( [ e : r2 ] ) and ( [ e : m ] ) , we see that for every there exists such that moreover , ( [ e : r2 ] ) , ( [ e : m ] ) , and the mean value theorem imply that where the function is positive and satisfies the following condition : _ next , taking into account formulas ( [ e : l1 ] ) - ( [ e : pkk ] ) , we see that lemma [ l : smallch ] holds under the condition .it remains to prove lemma [ l : smallch ] in the case where .the previous condition means that .fix such that .in addition , fix so large that the following inequalities hold : and next , taking into account ( [ e : i1 ] ) , we assume that then , using ( [ e : i2 ] ) , we obtain in addition , provided that it follows from ( [ e : i1 ] ) and ( [ e : inver1 ] ) that the number satisfies the previous condition .therefore , moreover , ( [ e : itfo ] ) and the mean value theorem imply that for , where is determined from ( [ e : l1 ] ) , and it is easy to see that therefore , for every there exists such that the inequality in ( [ e : ll ] ) holds for all .now , the proof of lemma [ l : smallch ] in the case where can be completed exactly as in the case when . finally , it is not hard to see that corollary [ c : fina ] follows from lemmas [ l : ttu ] and [ l : smallch ] .let us assume and .we will next compare the numbers and appearing in formula ( [ e : imvol22 ] ) in corollary [ c : sims ] and in the de marco - hillairet - jacquier formula ( [ e : dmhjf ] ) , respectively . recallthat if , then the number is defined for all . on the other hand , if , then is defined under the additional restriction it is clear that if , then and are positive numbers for all . if , then we have and . the remaining case where is interesting . in this case , the number is negative , while the sign of the number can be positive or negative .we will next clarify the previous statement .[ l : moreint ] suppose condition ( [ e : spec1 ] ) holds. then the following are true : 1 .let the number be such that then .2 . let the number be such that then .3 . let the number be such that then .[ r : price ] note that in the case described in part ( 1 ) of lemma [ l : moreint ] , the numbers and have opposite signs .the proof of lemma [ l : moreint ] is simple , and we leave it as an exercise for the reader . the next assertion characterizes the limiting behavior of the difference . [ t : estims ] let . then =1 .\label{e : forest}\ ] ] [ r : u ] for , the expression exists if ( see formula ( [ e : accord ] ) ) .note that for every fixed with , condition ( [ e : suf ] ) holds for sufficiently small values of .this explains how we should understand formula ( [ e : forest ] ) for ._ proof of theorem [ t : estims ] .suppose , and set and _ [ l : aux ] for all , where , , and are given by ( [ e : notka1 ] ) and ( [ e : notka2 ] ) ._ proof of lemma [ l : aux ] . 
using the mean value theorem, we see that where .it follows from ( [ e : mvt1 ] ) that now , the estimates in lemma [ l : aux ] follow from ( [ e : itfoo ] ) and ( [ e : thef1 ] ) ._ let us continue the proof of theorem [ t : estims ] .lemma [ l : aux ] implies that \nonumber \\ & \le\exp\left\{\frac{b_k^2-a^2}{2}\right\}. \label{e : thef2}\end{aligned}\ ] ] since as , formula ( [ e : forest ] ) follows from ( [ e : thef2 ] ) .the remaining part of the proof of theorem [ t : estims ] resembles that of the second part of lemma [ l : smallch ] .let us assume .then we have .fix such that and suppose is such that and let then we have therefore , provided that since the previous estimates hold for the number , we have . here and are defined by ( [ e : notka1 ] ) , and next , using ( [ e : ess0 ] ) and ( [ e : ess2 ] ) , we obtain .it follows from ( [ e : thef1 ] ) and from the inequalities that finally , it is not hard to see that formula ( [ e : forest ] ) with can be derived from the previous estimates and from the equality .[ r : eshchio ] theorem [ t : estims ] explains why the error term in the de marco - hillairet - jacquier formula is worse than that in formula ( [ e : imvol22 ] ) .the constant elasticity of variance model ( the cev model ) is described by the following stochastic differential equation : where , , and .if , then the boundary at is naturally absorbing , while for , we impose an absorbing boundary condition .the cev model was introduced by j. c. cox in ( see also ) .a useful information about the cev model , including some of the results formulated below , can be found in .the cev process is used in the financial industry to model spot prices of equitites and commodities ( see , e.g. , , and the references therein ) .fix .then we have moreover , the density of the absolutely continuous part of the distribution of is as follows : in ( [ e : cev1 ] ) , is the normalized incomplete gamma function given by while in ( [ e : cev2 ] ) , the parameter is defined by , the function is the modified bessel function of the first kind , and the constant is given by it is known that as , for all . therefore , it follows from ( [ e : cev2 ] ) that as , where ^{\frac{1}{2(1-\rho)}}\gamma\left(\frac{3 - 2\rho}{2(1-\rho)}\right ) } \nonumber \\ & \quad\exp\left\{-\frac{1}{2t\sigma^2(1-\rho)^2}\right\}. \label{e : cev4}\end{aligned}\ ] ] now , taking into account ( [ e : cev3 ] ) and ( [ e : cev4 ] ) , we see that as , therefore , condition ( [ e : ost ] ) holds . finally , applying corollary [ c : sims ] , we derive the following statement .[ c : cocor ] formula ( [ e : imvol22 ] ) with given by ( [ e : cev2 ] ) holds for the implied volatility in the cev model .[ r : finn ] propositions , similar to corollary [ c : cocor ] , can be established for many other models besides the cev model , e.g. , jump - to - default models , and models described by processes stopped at the first hitting time of zero . 
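As a small, purely illustrative companion to the Monte Carlo experiments used in this section (not taken from the paper), the following hedged Python sketch estimates the mass at zero of a CEV process by Euler–Maruyama simulation with an absorbing boundary at zero. The parameter values, path/step counts and the function name are assumptions introduced here, and the crude Euler scheme is used only for exposition; the same simulated terminal values could also be used to price small-strike puts and back out the left wing of the implied volatility for comparison with the asymptotic formulas.

```python
import numpy as np

# Hedged sketch: Euler-Maruyama simulation of the CEV model
#   dS_t = sigma * S_t^rho * dW_t,  0 < rho < 1,
# with an absorbing boundary at zero, used to estimate the mass at zero
# P(S_t = 0) by Monte Carlo.  All parameter values below are illustrative only.

def cev_mass_at_zero(s0=1.0, sigma=0.4, rho=0.6, t=5.0,
                     n_paths=100_000, n_steps=100, seed=0):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, s0)
    alive = np.ones(n_paths, dtype=bool)          # paths not yet absorbed at zero
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        s[alive] += sigma * s[alive] ** rho * np.sqrt(dt) * z[alive]
        hit = alive & (s <= 0.0)                  # absorb at the boundary
        s[hit] = 0.0
        alive &= ~hit
    return np.mean(s == 0.0)                      # Monte Carlo estimate of P(S_t = 0)

if __name__ == "__main__":
    print("estimated mass at zero:", cev_mass_at_zero())
```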
to apply corollary [ c : sims ] to a model with atoms , we only need to know the value of and to estimate the rate of decay of the asset price density near zero .such information is provided in for some of the models mentioned above .the figures included in the present section illustrate the performance of two asymptotic formulas , providing approximations to the left wing of the implied volatility in the cev model : the de marco - hillairet - jacquier formula and formula ( [ e : imvol22 ] ) established in the present paper .the values of the cev parameters in figures 1 and 2 are chosen as follows : , , , and . under the previous assumptions ,the value of the mass at zero is approximately equal to . in figures 1 and 2, the independent variable is the log - moneyness given by .the large blue stars in figure 1 show the monte carlo estimate of the function to plot the graph of the function represented by blue stars , monte carlo simulations with paths were used , each drawn with 100 time steps .the solid black curve in figure 1 depicts the full smile approximation using all the three terms in formula ( [ e : imvol22 ] ) .furthermore , the graph in black dashes corresponds to the smile approximation based on formula ( [ e : imvol22 ] ) with 2 terms , while the graph in black crosses represents the de marco - hillairet - jacquier approximation .figure 2 shows the approximation errors .even superficial observations of the graphs in figures 1 and 2 show that formula ( [ e : imvol22 ] ) provides a better approximation to the left wing of the implied volatility in the cev model than the de marco - hillairet - jacquier formula .note that the graph of the monte carlo estimate of the function defined in ( [ e : mont ] ) and the graph of the approximation to this function based on formula ( [ e : imvol22 ] ) match rather well .applied mathematics series 55 , national bureau of standards , washington , 1972 .finance 19 ( 2009 ) , 1 - 12 .prob . 45 ( 2008 ) , 16 - 32 . in : cont , r. ( ed . ) , _ frontiers in quantitative finance : volatility and credit risk modeling , _ wiley , hoboken , 2009 , 19 - 45 .preprint , 2010 .working paper , stanford university , 1975 , ( reprinted in journ .portfolio management 22 ( 1996 ) , 15 - 17 ) .financial economics 3 ( 1976 ) , 145 - 166 .preprint , 2013 , available at arxiv:1310.1020v1 .futures markets 31 ( 2011 ) , 230 - 250 . to appear in finance and stochastics; available at ssrn.com/abstract=1768383 .alternative investments 11 ( 2009 ) , 65 - 84 .springer verlag berlin heidelberg , 2012 .finance 15 ( 2012 ) , 1250020 .siam journ .math . 1 ( 2010 ) , 609 - 641 .finance 14 ( 2004 ) , 469 - 480 . | we consider the asymptotic behavior of the implied volatility in stochastic asset price models with atoms . in such models , the asset price distribution has a singular component at zero . examples of models with atoms include the constant elasticity of variance model , jump - to - default models , and stochastic models described by processes stopped at the first hitting time of zero . for models with atoms , the behavior of the implied volatility at large strikes is similar to that in models without atoms . on the other hand , the behavior of the implied volatility at small strikes is influenced significantly by the atom at zero . s. de marco , c. hillairet , and a. jacquier found an asymptotic formula for the implied volatility at small strikes with two terms and also provided an incomplete description of the third term . 
in the present paper , we obtain a new asymptotic formula for the left wing of the implied volatility , which is qualitatively different from the de marco - hillairet - jacquier formula . the new formula contains three explicit terms and an error estimate . we show how to derive the de marco - hillairet - jacquier formula from our formula , and compare the performance of the two formulas in the case of the cev model . the resulting graphs show that the new formula provides a notably better approximation to the smile in the cev model than the de marco - hillairet - jacquier formula . |
synthetic fingerprint generation , minutiae distribution , transportation problem , earth mover s distance , fingerprint reconstruction , identification test , fingerprint individuality ._ at the fifteenth international conference on pattern recognition held in september 2000 , about 90 participants , most of them with some background in fingerprint analysis ( quoted from page 293 in ) were shown four fingerpint images ( reproduced in figure [ figsurvey ] ) .they were told that three images were acquired from real fingers and one of these prints was synthetically generated and they were asked to identify the artificial print .only 23% of the participants correctly chose the synthetic print ( image ( a ) in figure [ figsurvey ] ) . __ in this paper , we present a method that is able to perform the task at which the human experts failed : we introduce a test of realness which can separate real from synthetic fingerprint images with very high accuracy .we applied the method ( a detailed description follows in section [ histograms : scn ] and [ discrimination : scn ] ) to these four images and the proposed method distinctly identifies the correct image ( computing the difference of minutiae histogram distances explained in section [ emddiff ] yields a negative score for image ( a ) in figure [ figsurvey ] and positive scores for images ( b - d ) ) . _today , many commercial , governmental and forensic applications rely on fingerprint recognition for verifying or establishing the identity of a person . among these , methods building on minutiae matching play an eminent role . usually , matching routines are not only tested on real fingers , but in order to provide for theoretically unlimited sample sizes , synthetic fingerprint generation systems such as sfinge by cappelli _( ) have been developed in the past .independently , methods have been developed to reconstruct fingerprints from minutiae templates ( cf .both methodologies are very relevant in many application areas : * constructing synthetic fingerprint images facilitates the cheap creation of very large databases for testing and comparing the performance of algorithms in verification and identification scenarios . *ground truth data is provided for evaluating the performance of forensic experts as well as minutiae extraction algorithms . * matching performance of low - quality and latent fingerprintsis improved .* fingerprint reconstruction can be a building block for solving interoperability problems , e.g. on comparing fingerprints acquired from different sensors .* research in this area raises the awareness for aspects of security , privacy and data protection bearing in mind that an attacker may utilize existing techniques for creating a spoof and prepare a presentation attack . * mixing prints of two or more real fingers has been proposed for generating virtual identities , obscuring private information or creating cancelable templates . of course , synthetic prints should be as real as possible pertaining to all properties and features which are relevant for fingerprint recognition , especially with respect to their minutiae distribution .otherwise , a human may be fooled by the look of a synthetic print ( see the motivational example given before the introduction taken from page 293 in ) , but their eligibility e.g. for evaluating fingerprint recognition algorithms may be challenged and results obtained on artificial databases would be insignificant . [ [ a - unifying - concept - of - the - correct - minutiae - distribution . 
] ] a unifying concept of the correct minutiae distribution . + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + fingerprint synthesis and fingerprint reconstruction have been treated for a long time as different tasks .this can be well conceived on the background that the issue of realistic minutiae distributions has only played a subordinate role in theoretical model building and practical research . in our contributionwe provide for a simple method to assess minutiae distributions of single fingerprints as well as of samples of fingerprints .we demonstrate that after training , this allows to decide that minutiae patterns of synthetically generated fingerprints are not correct. in particular , we believe that including realistic minutiae distributions leads to a unified concept in which synthetic fingerprint generation and fingerprint reconstruction are in fact two sides of the same coin .we will return to this point in section [ labeltwosidesofthesamecoin ] .[ [ fingerprints - of - fingerprints . ] ] fingerprints of fingerprints .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + minutiae histograms ( mhs ) introduced below assign any given fingerprint image a fixed length feature vector .this feature vector is not only highly potent to discriminate real fingerprints from fingerprints synthetically generated by the only publicly available current state of the art system , in a preliminary study we additionally demonstrate that this new feature vector is also highly discriminatory among real fingerprints as well .given reliably extracted minutiae and sufficent overlap of latent fingerprint images , mhs allow for fast and effective matching .more precisely , it seems that already the second order mhs reflecting minutiae pairs only , promise a high potential awaiting to be yet unleashed .note that higher order mhs can be assessed via minutiae triplets , quadruplets etc .while local minutiae statistics are widely used for matching , our contribution is to consider global minutiae statistics and measure distances by a metric that effectively captures human perception : the earth mover s distance .as made clear above , in our contribution we focus on the role of minutiae distributions .this issue is currently gaining momentum in the scientific community , cf .in contrast to , who use parametric mixture models , we take a nonparametric statistical approach .we begin our exposition with biological principles currently believed to govern minutiae formation and their distribution . to date ,however , these principles are still not satisfactorily understood .subsequently , we put the sfinge method into context with these biological hypotheses and describe an alternate method based on real fingerprint images for generating latent fingerprints , contrasting in this and other aspects .thereafter , we allude to the similarities between fingerprint generation and reconstruction which will resurface when we propose improvements in synthetic fingerprint generation .we conclude this section by laying out the plan of the paper .[ [ fingerprint - formation - guided - by - merkel - cells . 
] ] fingerprint formation guided by merkel cells .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + kcken and champod propose a model for fingerprint formation that has two major influence factors : growth forces which create mechanical compressive stress on the one hand , and merkel cells rearranging from a random initial configuration into lines minimizing the compressive stress and inducing primary ridges on the other hand .merkel cells interact with each other in reaction - diffusion systems of short range attraction and long range repulsion . based on empirical evidence from embryonic volar tissue evolution ( e.g. ) , kcken and champodlet solutions of suitable partial differential equations propagate from three centers : one along the flexion crease , one at the volar pad ( the core area ) and one from the nail furrow . based on specific parameter choices, this process eventually forms a ridge line pattern featuring minutiae .kcken and champod analyzed the images resulting from simulation runs of their growth model and discovered qualitative differences in comparison to real fingerprints .they conclude that with respect to the natural variability of arrangements of minutiae only an empirical acquisition of genuine fingerprints will provide an adequate source of data .[ [ sfinge . ] ] sfinge .+ + + + + + + in a nutshell , growing fingerprint patches containing no minutiae are generated starting from a number of randomly located points by the iterative application of gabor filters according to a previously generated global orientation field and a non - constant ridge frequency pattern .whenever patches meet , minutiae are produced whenever necessary for the consistency of the global ridge pattern . a detailed description of the sfinge method by cappelli __ can be found in , chapter 6 in and chapter 18 in . in fact ,the sfinge model silently assumes biological hypotheses of fingerprint pattern formation that are slightly different from the ones described above .first , fingerprint patterns no longer propagate from a well defined system of three original sources but rather from a multitude of sources at random locations .secondly , a main governing principle for minutiae creation lies in the compatibility of ridge patterns whenever growing patches touch .these hypotheses in itself are very interesting and can be viewed as intuitively natural ; and their validity can be assessed with our methodology . our test of realness ( section [ sectestrealness ] ) yields ,however , that these hypotheses explain the process of minutiae formation not satisfactorily ( section [ secresults ] ) .notably , this shortfall of sfinge can not be explained by imprint distortions ( which are already included in the sfinge model ) and the lack of a true 3d model , as both would account mostly for neighboring bin perturbations or none at all .using the earth mover s distance , precisely distinguishes between these minor perturbations and the major deviations we found ( for a detailed discussion cf . ) .[ [ further - methods . ] ] further methods .+ + + + + + + + + + + + + + + + further methods for generating synthetic fingerprint images were proposed in 2002 by araque et al . 
and in 2003 by bicz .the methods of araque et al .and sfinge are similar in spirit and they both rely on the global orientation field model of vizcaya and gerhardt for creating an orientation field .examples of synthetic fingerprints generated by this method are shown in figure [ figsynthetic ] ( a ) and ( b ) .viewing fingerprints as holograms ( cf . ) , a given global orientation field yields the continuous phase onto which so called spiral phases can be added giving minutiae at specific predefined locations while other minutiae occur due to continuity constraints .this allows for synthetic fingerprint generation and four example images generated by the software of bicz are displayed in figure [ figsynthetic ] ( c ) to ( f ) .results reported in section [ secresults ] show that the method proposed in this paper can also discriminate between synthetic fingerprints according to bicz and real fingerprint images . + + [ [ creating - forensic - fingermarks - from - real - prints . ] ] creating forensic fingermarks from real prints .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the approach by rodriguez_ et al . _ , since the minutiae are based on real fingers , does not suffer from unrealistic minutiae configurations .it is however subject to increased time , money and data protection constraints .forensic fingermarks ( latent fingerprint images ) are simulated in a semi - automatic way : fingers of volunteers are recorded using a livescan device . during a period of about 30 seconds , each person performs a series of predefined movements which results in fingerprint images with various distortions .images are captured at a rate of four frames per second .minutiae are automatically extracted followed by manual inspection and correction of falsely extracted or missed minutiae . starting with this ground truth data set, latents are simulated using a region containing a cluster of 5 - 12 minutiae from the real fingerprint images .one use case for these simulated latents is to test the minutiae marking performance of human experts which can be evaluated against the avaivable ground truth information .another application area is benchmarking the identification performance of afis software . [[ reconstruction . ] ] reconstruction .+ + + + + + + + + + + + + + + various researches have shown that fingerprints can be reconstructed from minutiae templates .there are two major approaches for the automatic reconstruction of fingerprint images : first , the iterative application of gabor filters as in sfinge , and second , the usage of amplitude- and frequency - modulated ( am - fm ) functions . in both cases ,the first step is the estimation of an orientation field which fits the minutiae pattern .fingerprint reconstruction from minutiae templates and the generation of synthetic images follow similar principles .however , the goal in the reconstruction scenario is to produce the same number of minutiae at the same locations and with same direction and type as in the template . [ [ synthetics - vs .- spoofs . ] ] synthetics vs. spoofs .+ + + + + + + + + + + + + + + + + + + + + + the mh based method proposed in this paper does not perform liveness detection . 
in fact, we would like to stress that if an attacker can produce a high quality spoof containing the same minutiae as the imitated alive finger and puts this spoof finger on fingerprint sensor , the proposed method will classify the acquired fingerprint image as originating from a real finger with respect to the minutiae distribution . only attacks with spoofs based on synthetic images and synthetic minutiae distributions are addressed in this work . in the next section ,we introduce second order extended minutiae histograms ( mhs ) which for this paper we just call minutiae histograms : statistics of pairs of two minutiae are used in combination with minutiae type information and interridge distances to obtain a fingerprint of fingerprints . in section [ discrimination :scn ] , extended mhs are applied for classifying a fingerprint into one of the two categories , real or synthetic .tests on the 12 publicly available databases of fvc2000 , fvc2002 and fvc2004 show the discriminative power of this approach . in section [ secimprovement] , detailed suggestions for the generation of more realistic synthetic fingerprints are given . in section [ secidentification ] , the suitability of minutiae histograms for identification purposes is investigated and in section [ secindividuality ] , mhs are proposed for the quantification of fingerprint evidence .section [ discussion : scn ] conludes with a discussion and states topics of further research . + [cols="^,^,^",options="header " , ] [ secresults ] & & & & + & set ii & set iii & set ii & set iii & set ii & set iii & set ii & set iii + fvc2000 & 93.3 & 90.0 & 95.0 & 83.8 & 98.3 & 95.0 & 95.5 & 89.6 + fvc2002 & 86.7 & 88.8 & 90.0 & 86.3 & 85.0 & 78.8 & 87.2 & 84.6 + fvc2004 & 81.7 & 55.0 & 81.7 & 67.5 & 90.0 &70.0 & 84.5 & 64.2 + + & set ii & set iii + & 83.3 & 92.5 + + & & & & + fvc2000 & 100.0 & 92.5 & 100.0 & 87.5 & 100.0 & 97.5 & 100.0 & 92.5 + fvc2002 & 100.0 & 97.5 & 100.0 & 95.0 & 100.0 & 90.0 & 100.0 & 94.2 + fvc2004 & 95.0 & 72.5 & 90.0 & 83.8 & 98.3 & 97.5 & 94.4 & 83.6 + the results on all available fvc databases show that the proposed method by extended 2d - mhs is able to separate real from synthetic prints with very high accuracy . on the training sets of all databases of fvc2000 and fvc2002 ,the classification performance of the combined feature set was 100% , and for the corresponding test sets , the performance was in the range from 87.5 to 97.5% . on the whole ,the image quality in the databases of fvc2004 is clearly lower compared to the quality of images in previous competitions .hence , it it is more challenging to avoid errors during the automatic extraction of minutiae from theses images .we used a commercial off - the - shelf software for minutiae extraction .the good discriminative power of 2d - mhs alone is not surprising upon inspection of a 2d visualization by multidimensional scaling ( e.g. ( * ? ? ?* chapter 14 ) ) of the mutual 2d - mh distances from table [ tabavgemd ] in figure [ minutiaeangledist - hist - wassersteinmds : fig ] . in the left display ,mhs from real fingers cluster in the middle of the left side while mhs from synthetic fingerprints come to lie in the upper right ( for the year 2000 ) and closer to the center towards the bottom ( for the years 2002 and 2004 ) .in fact , we can see that the algorithms leading to minutiae formation have obviously undergone changes between the years 2000 and 2004 , however , only moving the mhs moderately closer to realness. 
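To make the EMD-based classification step concrete, here is a hedged Python sketch of a realness score of the kind described: a print's 2D minutiae histogram is compared, via the earth mover's distance, with the average histograms of real and of synthetic training prints, and a negative score is read as "real". This is only an illustration, not the authors' implementation; the bin layout, the Euclidean ground metric on bin centres, the use of the POT (Python Optimal Transport) package, and all function names are assumptions introduced here.

```python
import numpy as np
import ot  # POT package -- an assumed external dependency, not named in the paper

def minutiae_histogram_2d(minutiae, n_dist_bins=20, n_ang_bins=20, max_dist=300.0):
    """2D minutia-pair histogram over (pairwise distance, direction difference).
    minutiae: array of rows (x, y, direction in radians).  Bin ranges are assumptions."""
    h = np.zeros((n_dist_bins, n_ang_bins))
    m = np.asarray(minutiae, dtype=float)
    for i in range(len(m)):
        for j in range(i + 1, len(m)):
            d = np.hypot(*(m[i, :2] - m[j, :2]))
            a = np.abs(m[i, 2] - m[j, 2]) % (2 * np.pi)
            a = min(a, 2 * np.pi - a)                      # fold direction difference to [0, pi]
            di = min(int(d / max_dist * n_dist_bins), n_dist_bins - 1)
            ai = min(int(a / np.pi * n_ang_bins), n_ang_bins - 1)
            h[di, ai] += 1
    return h / max(h.sum(), 1)                             # normalize to a distribution

def emd(h1, h2):
    """EMD between two 2D histograms, Euclidean ground distance on bin centres."""
    idx = np.indices(h1.shape).reshape(2, -1).T.astype(float)
    cost = np.linalg.norm(idx[:, None, :] - idx[None, :, :], axis=-1)
    return ot.emd2(h1.ravel(), h2.ravel(), cost)

def realness_score(h, avg_real_h, avg_synthetic_h):
    # negative score -> the print's minutiae statistics look like those of real fingers
    return emd(h, avg_real_h) - emd(h, avg_synthetic_h)
```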
in fvc2002 , sfinge version 2.51 was used and in fvc2004 sfinge version 3.0 .additionally , we tested the performance of 2d - mhs for discriminating between artificial fingerprint images generated by the software of bicz and real fingerprint images . to this end , we generated 110 synthetic fingerprints which were grouped into three sets as listed in table [ tabsets ] , and compared them against the 110 images of fvc2000 database 1 . on setii , a classification performance of was achieved and on set iii , of the images were correctly classified .recall from section [ secconstructionandreconstruction ] that the sfinge model silently extends the well - standing biological hypothesis of fingerprint pattern formation due to three converging ridge systems ( for a brief discussion , c.f . ) , to a multitude a converging ridge systems starting at random locations ; this process is the governing principle for minutiae formation .our work shows that this remarkable hypothesis can be tested and the tests show that the minutiae pattern formation appears to be of a more complex nature .it would be of interest to test fingerprint images generated by , and other reseachers using extended minutiae histograms , once they become available .one possibility to improve sfinge is to modify the fingerprint generation process in such a way that the resulting fingerprint images have the following properties : * mhs of synthetic prints should follow the distribution of mhs estimated from a database of real fingers . *more subtly also considering minutiae types , the relation of endings to bifurcations should resemble the relation and its distribution observed in real fingers .* similarly , the distribution of interridge distances should be similar to those of real fingers acquired at the same resolution .[ [ labeltwosidesofthesamecoin ] ] two sides of the same coin + + + + + + + + + + + + + + + + + + + + + + + + + + an alternative option for the creation of synthetic fingerprints that pass the proposed test of realness is to consider synthetic fingerprint generation as a reconstruction task .traditionally , the construction of artificial fingerprints is associated with an unknown outcome regarding the number , location , direction and type of minutiae , whereas reconstruction aims at the generation of a fingerprint image which has best possible similarity in terms of minutiae properties to a given template .basically , we propose to focus on the generation of a realistic minutiae template and `` all these other things above shall be added unto '' by the existing reconstruction methods . here is a possible outline of this approach : * first , a feasible foreground and orientation field is constructed , e.g. using a global model like which is able to incorporate the empirical distribution of observed pattern types ( henry - galton classes ) and singular points . *secondly , a realistic number of minutiae is drawn from the empirical data of minutiae in fingerprints with the previously determined foreground size and pattern type .* thirdly , an inital minutiae template is obtained by choosing points e.g. randomly on the foreground as minutiae locations and for each location , the minutiae direction is set to the local orientation or by chance . *fourthly , the 2d - mh of the minutiae template is computed and it is modified iteratively until it passes the test of realness , i.e. 
the emd between the current template and the average 2d - mh of real fingerprints is below an acceptable threshold .modification operations are the deletion and addition of minutiae and flipping of the minutiae direction by 180 .the implementation for the computation of the emd allows to analyze the flow of mass , so that the bins in the 2d - mh can be indentified which contribute above the ordinary to the total costs , and thus , the minutiae pairs that are the most unlikely in comparison to the empirical distribution . *fifthly , minutiae types ( ending and bifurcation ) are assigned , based on the empirical distribution in real finger patterns . * finally , the fingerprint image is reconstructed using e.g. gabor filters or the am - fm model .the interridge distances are checked for deviations from the empirical data obtained from real fingerprints and if required , the interridge distance image is adjusted and the reconstruction step is repeated . *optionally , noise can be simulated for copies of the constructed image , and if desired , they can be rotated , translated and nonlinearly distorted ., title="fig:",scaledwidth=45.0% ] , title="fig:",scaledwidth=45.0% ] up to this point , we applied minutiae statistics for classifying a fingerprint as real or synthetic . in this section, we explore the potential of mhs for identifying individuals .the approach described previously based on mhs allows to represent a fingerprint as a fixed - length feature vector . obtaining a fixed - length feature vector representation marks the grand goal for biometric template protection schemes andhas previously been achieved by xu __ in their spectral minutiae representation .robust fixed - length feature vectors also appear highly promising for an identification scenario in which a large database is searched for a query fingerprint . up to this point ,the minutiae distributions represented by mhs were compared using the earth mover s distance . for the purpose of identification ,we compute the histogram intersection to better handle partial overlap .this approach is related to `` geometric hashing '' : a hash table is built by quantizing geometric objects like minutiae triplets which were used in . in a first test on fvc2000 db2 , we achieved average access rates ( average part of the database that is accessed using an incremental search strategy until the corresponding finger is found ) between 2.33% and 4.01% using unnormalized 4d - mhs with 20 bins for differences between minutiae directions , 20 bins for euclidean distances between minutiae locations , 20 bins for the angle of the relative location of the second minutiae with respect to the first and 4 bins for minutiae type combinations .we chose this database , because it was used in tests measuring the indexing performance by other researchers : de boer _ et al ._ tested three different features ( orientation field , finger code , minutiae triplets ) and their combination on this database and they reported rates between 1.34% and 7.27% .proposed minutiae cylinder code and locality - sensitive hashing for fingerprint indexing and report an average access rate of 1.72% for this database .as we used , in contrast to these two studies , an `` off the shelf '' minutiae extractor ( ) , and as the performance hinges on correct minutiae extraction ( see figure [ figindex ] and the discussion below ) we deem our results very promising upon more `` correct '' minutiae extraction . 
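The indexing features just described can be illustrated with the following hedged sketch of an unnormalized 4D minutia-pair histogram (20 direction-difference bins x 20 distance bins x 20 relative-angle bins x 4 type-combination bins) together with a histogram-intersection score between a query and a database template. The distance range, the exact binning conventions and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

N_DIR, N_DIST, N_ANG, N_TYPE = 20, 20, 20, 4
MAX_DIST = 300.0   # assumed range of pairwise minutiae distances (pixels)

def mh4d(minutiae):
    """Unnormalized 4D minutia-pair histogram.
    minutiae: iterable of (x, y, direction_rad, type) with type in {0, 1}."""
    h = np.zeros((N_DIR, N_DIST, N_ANG, N_TYPE))
    m = list(minutiae)
    for i in range(len(m)):
        for j in range(i + 1, len(m)):
            xi, yi, ti, ki = m[i]
            xj, yj, tj, kj = m[j]
            dx, dy = xj - xi, yj - yi
            dist = np.hypot(dx, dy)
            ddir = (tj - ti) % (2 * np.pi)                  # difference of minutiae directions
            rel = (np.arctan2(dy, dx) - ti) % (2 * np.pi)   # angle of relative location
            b = (min(int(ddir / (2 * np.pi) * N_DIR), N_DIR - 1),
                 min(int(dist / MAX_DIST * N_DIST), N_DIST - 1),
                 min(int(rel / (2 * np.pi) * N_ANG), N_ANG - 1),
                 int(2 * ki + kj))                          # 4 ending/bifurcation combinations
            h[b] += 1                                       # unnormalized counts
    return h

def bin_intersection_score(h_query, h_template):
    """Sum of element-wise minima: larger values mean more shared pair structure."""
    return np.minimum(h_query, h_template).sum()
```

In an identification run, the score would be computed against every template in the database and the candidate list sorted in descending order of the score, as in the access-rate experiment reported below.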
in our test , we used the sum of intersections between corresponding bins as a score ( bis = bin intersection score ) and sorted the list of fingers in the database in descending order . in 597 out of 770 searches ( 77 ) ,the correct finger was ranked first in the list . if the search is narrowed down to templates with 30 or more minutiae , than it ranked first in 393 out of 441 searches ( 89 ) .we inspected the cases in which a larger portion of the database were accessed .it turned out that the main reason lies in minutiae extraction errors ( missing and spurious minutiae ) which , of course , have a negative impact on the score : missing minutiae reduce the intersection between minutiae histograms of templates from the same finger , spurious minutiae can increase the intersection between histograms from templates of different fingers .a typical example is shown in figure [ figindex ] .another reason lies in almost no overlap areas between the two prints compared .we stress at this point that our bis has been designed to alleviate small overlaps .it is obvious that for a fair comparison between different minutiae - based indexing methods , the same minutiae templates have to be used . in doing so, the influence of different minutiae extractors on the identification performance can be eliminated and only then results become comparable .it is of interest to quantify the impact of different minutiae extractors and different fingerprint image enhancement techniques on the identification performance , but these comparisons are beyond the scope of this study .this first test shows the suitability of mhs for identification purposes and this direction deserves further research .universality , collectability , permanence and uniqueness are properties which render an anatomical or behavioral trait useful for biometric recognition .permanence of the fingerprint pattern was scrutinized by galton more than a century ago and it was later confirmed that the pattern s development is finalized at an estimated gestational age of 24 weeks .uniqueness of fingerprints is commonly assumed by all researchers and practitioners dealing with fingerprint recognition .fingerprint individuality has never been proven , but there is a long history of models attempting to explain and quantify fingerprint individuality starting with galton in 1892 to the present day .stoney gives an overview over 10 major models formulated between 1892 and 1999 in chapter 9 of .for recent additions , please see and the references therein .there are two broad categories of models : first , mathematical models trying to encompass the distribution of features extracted from observed prints . and secondly , biology based models about randomness during the formation of friction ridge skin in prenatal development of human life . notwithstanding the lack of proof of uniqueness, fingerprints have a long success story in commercial , governmental and forensic applications . as a consequence of an on - going reformation process , in the future forensic experts may have to quantify the weight of evidence with probabilities and errors rates instead of a binary decision . 
by their very nature ,mhs ( 2d- , 4d- , second and higher order ) as empirical realisations of an underlying to date unknown true minutiae distribution precisely yield such probabilities .for instance for a given confidence level , for each true finger multiply represented in a database , a neighborhood in the space of corresponding ( possibly higher order and/or extended ) mhs can be computed , via bootstrapping , say , such that for a query fingerprint with mh under the null - hypothesis that is another imprint of , where is the empirical probability .the use of the corresponding test , of course , relies on prior thorough statistical studies of these true distributions .further research , exploiting the abundance of optional statistical methods , may choose likelihood ratios , say , for improved quantification .a crucial point is that the underlying minutiae statistics have to be based on a large number of real fingers .minutiae should be manually marked or in semi - automatic fashion , a human should inspect automatically extracted minutiae in order to avoid minutiae extraction errors and their influence on the mhs . ideally , interridge distances or related measures like e.g. ridge countwould be taken into account and additional information would be available for minutiae templates , including age , body height , sex and ethnicity . this would enable the computation of more sophisticated statistics , e.g. the probability that a print with certain minutiae histogram stems from a person of certain age group . in summary , mhs can become a useful tool for forensic experts who are requested to quantify the weight of fingerprint evidence in court based on empirical ground truth data .the proposed mhs capture and comprise relevant information of fingerprints , namely the full distribution of minutiae , which enable to separate current state - of - the - art synthetic fingerprints from prints of real fingers .this study reveals a fundamental difference between natural finger pattern formation and state - of - the - art synthetic fingerprint generation processes . as a consequence, any results obtained on existing databases of synthetic fingerprints should be regarded with caution and may not reflect the performance of a fingerprint comparison software in a real - life scenario .a performance evalution of fvc2004 showed that the behavior of the algorithms over db4 ( the database consisting of synthetic prints ) was , in general , comparable to that on the real databases .however , this can also be interpreted as an indicator that the participating algorithms are not yet optimized for the specifics of the empirical distribution of minutiae in real fingerprints .another direction of research beyond the ones discussed in sections [ secimprovement ] to [ secindividuality ] can go towards the identification of additional features for the discrimination between real and synthetic prints . in the context of presentation attack detection, it would be highly desirable to detect a reconstructed fingerprint , even if this property can not be infered from the minutiae distribution .we would also like to explore the suitability of minutiae histograms for security applications , e.g. in combination with the fuzzy commitment scheme or the fuzzy vault scheme , and for the generation of cancelable templates . in this research, we used a second order minutiae statistic reflecting the covariance structure of the minutiae distributions which turned out to be highly discriminatory for our task . 
exploring the discriminative potential of higher order minutiae histograms is beyond the scope of this paper andremains an interesting endeavor .the authors thank raffaele cappelli and davide maltoni for sharing the images shown in figure [ figsurvey ] , pedro vizcaya for sharing the images displayed in figure [ figsynthetic ] ( a ) and ( b ) , and wieslaw bicz for sharing the software to generate the images in figure [ figsynthetic ] ( c ) to ( f ) .the authors gratefully acknowledge the support of the felix - bernstein - institute for mathematical statistics in the biosciences and the volkswagen foundation . c. m. rodriguez , a. de jongh , and d. meuwly . introducing a semi - automatic method to simulate large numbers of forensic fingermarks for research on fingerprint identification . ,57(2):334342 , march 2012 .q. zhao , a. k. jain , n.g .paulter , and m. taylor .fingerprint image synthesis based on statistical feature models . in _ proc .5th conf . on biometrics : theory , applications and systems ( btas ) _ , pages 2330 , washington , d.c ., september 2012 . c. neumann , c. champod , r. puch - solis , n. egli , a. anthonioz , and a. bromage - griffiths .computation of likelihood ratios in fingerprint identification for configurations of any number of minutiae ., 52(1):5464 , january 2007 .e. gutirrez - redomero , n. rivaldera , c. alonso - rodrguez , l. m. martn , j. e. dipierri , m. a. fernndez - peire , and r. morillo . are there population differences in minutiae frequencies ? a comparative study of two argentinian population samples and one spanish sample . ,222(1 - 3):266276 , october 2012 . | in this study we show that by the current state - of - the - art synthetically generated fingerprints can easily be discriminated from real fingerprints . we propose a non - parametric distribution based method using second order extended minutiae histograms ( mhs ) which can distinguish between real and synthetic prints with very high accuracy . mhs provide a fixed - length feature vector for a fingerprint which are invariant under rotation and translation . this test of realness can be applied to synthetic fingerprints produced by any method . in this work , tests are conducted on the 12 publicly available databases of fvc2000 , fvc2002 and fvc2004 which are well established benchmarks for evaluating the performance of fingerprint recognition algorithms ; 3 of these 12 databases consist of artificial fingerprints generated by the sfinge software . additionally , we evaluate the discriminative performance on a database of synthetic fingerprints generated by the software of bicz versus real fingerprint images . we conclude with suggestions for the improvement of synthetic fingerprint generation . |
the structure of complex systems across various disciplines can be abstracted and conceptualized as complex networks of nodes and links to which many quantitative methods can be applied so as to extract any characteristics embedded in the system .there exist many types of networks and characterizing their topology is very important for a wide range of static and dynamic properties .recently , c. song _ _ et al.__ applied a least box number covering algorithm to demonstrate the existence of self - similarity in many real networks .they also studied and compared several possible least box number covering algorithms , by applying them to a number of model and real - world networks .they found that the least box number covering optimization is equivalent to the well - known vertex coloring algorithm which is a np - hard problem .it implied that any least box number covering algorithms are heuristic algorithms .can we avoid the np - hard problem and simplify the network fractal dimension calculation ? in this paper we modify the random sequential box - covering algorithm and presented a random ball coverage algorithm .the relative error smallest upper bound ( supremum ) is given theoretically .the simulation experiments shows that no matter how large the network diameter is , this upper bound tends to .so we develop another algorithm for calculating dimension which employ only two random ball coverages .we also yield the smallest relative error upper bound of this algorithm , which tend to when the network diameter is large enough . in this point of view , for a proper acceptable error range , the random ball covering algorithm is equivalent to the least box number covering algorithm in statistic sense when a network is large enough .there is no need to focus on the least box number covering optimization problem when we want to calculate dimension of a large diameter network .the least box number coverage and random ball coverage were defined as following . for a given network ,a box with diameter is a set of nodes where all distances between any two nodes and in the box are smaller than .the least box number coverage is the box coverage with the minimum number of boxes required to cover the entire network . in order to correspond to the fractal networkdefinition we use open ball to cover the network .so our random ball coverage is little difference with the random sequential box - covering algorithm .a ball with radius and center node is the set of nodes which satisfy the shortest path length from the center to each of them is smaller than . the random ball coverage with radius as : at each step, we randomly choose a node which has not been covered as a center , and cover all the nodes within the distance to the center .the process is repeated until all the nodes in the network were covered .theorem : , where is the number of boxes in a lest box number coverage with diameter and denotes the number of balls in a random ball coverage with radius .proof : can be regarded as the number of boxes in a random box number coverage with diameter . suppose denote all the boxes in a lest box number coverage , where , and .then we have for all , where denotes empty set . according to the above definition of random ball coverage with radius , without loosing any generalitysuppose the center of the first ball in a random ball coverage process is , then is covered by the first ball and the second random ball s center must lies out of . 
without loosing any generalitywe also can assume is the center of the second ball , then is covered by the second ball .in this way , we can get that : . theorem : suppose is a fractal network , the available box diameter range is and we employ linear lest squares regression to get the dimension .then the smallest upper bound of the relative error of dimension calculated by random ball coverage is }{(r - m+1)\sum_{i = m}^{r}{(\log{i})^{2}}-(\log\frac{r!}{m!})^2} ] the relative error is }{(r - m+1)\sum_{i = m}^{r}{(\log{i})^{2}}-(\log\frac{r!}{m!})^2} ] where , and satisfy : fig.[relative_error_plot ] show the relationship among .it is not easy to get any conclusion directly about the upper bound from the above expression .so we have done some numerical calculations .it seems that for a given when becomes large , it has a limit .for instance , when =1 or 2 , its limit is about .it implies that no matter how large the network diameter is , the relative error never be lower than .in fact , we could only use two points and to estimate dimension which is named twice - random ball coverage algorithm .the corresponding upper bound of relative error will be tend to when the fractal network diameter tend to infinite .theorem : suppose is a fractal network , the available box diameter range is .we just use and to estimate network dimension. then the estimated dimension value is and the smallest relative error upper bound is .proof : obviously , and , then more details are shown in fig.[relative_error_plot ] . employing random ball coverage algorithm ( twice - random ball coverage algorithm ) , we get the fractal dimension of the world - wide web is , , , which is corresponding to the dimension obtained by c. song _et al._(dimension is ) . from our empirical results ,we find for many networks , and is more reasonable .when we get www network dimension is .sometimes , the available box diameter is not sufficient enough , we can calculate many time and get the dimension .we also test this method in the cellular networks . for each network =diameter , and each ,we perform random coverage times .then we get the average dimension of the whole cellular networks is which is perfect corresponding to the dimension obtained by c. song _et al._(dimension is ) and w. zhou _ et al . _( dimension is ) .because we calculate each times , for any one of the cellular networks , we can get different dimensions . 
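A hedged sketch of the procedure used for these numbers is given below: one random ball coverage per draw, repeated several times per radius, with the dimension taken from a log-log least-squares fit of the averaged ball counts against the radii, plus the two-radius ("twice-random") variant. Integer radii, the networkx dependency and all function names are assumptions introduced for illustration.

```python
import random
import numpy as np
import networkx as nx   # assumed dependency for shortest-path computations

def random_ball_cover(g, r, rng=random):
    """Number of open balls of (integer) radius r used by one random coverage."""
    uncovered = set(g.nodes())
    n_balls = 0
    while uncovered:
        center = rng.choice(tuple(uncovered))
        # all nodes at shortest-path distance < r from the centre (open ball)
        ball = nx.single_source_shortest_path_length(g, center, cutoff=r - 1)
        uncovered -= set(ball)
        n_balls += 1
    return n_balls

def fractal_dimension(g, radii, n_runs=10):
    counts = [np.mean([random_ball_cover(g, r) for _ in range(n_runs)]) for r in radii]
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return -slope                      # N(r) ~ r^(-d), so d is minus the fitted slope

def twice_random_dimension(g, r_small, r_large, n_runs=10):
    n1 = np.mean([random_ball_cover(g, r_small) for _ in range(n_runs)])
    n2 = np.mean([random_ball_cover(g, r_large) for _ in range(n_runs)])
    return -(np.log(n2) - np.log(n1)) / (np.log(r_large) - np.log(r_small))
```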
for each cellular networkwe can get an average variance .the maximum average variance of cellular network dimensions is , the average variance .if we use the network dimension which is obtained by times calculation to substitute its real fractal dimension in our above discussion , we get the maximum relative error of time calculations of the cellular networks is .the average maximum relative error is and the average relative error is .the relative errors of empirical results are far less than the theoretical upper bounds respectively .the interesting thing is that , in our theoretical discussion , the upper bound of twice - random ball coverage is less than the upper bound of random ball coverage algorithm .but the empirical results always show the random ball coverage algorithm is better than twice - random ball coverage algorithm .so , we think the random ball coverage algorithm is better than twice - random ball coverage algorithm in practice .moreover , our theorems also can be used to estimate a network s diameter .denotes the diameter of a network , denote the relative errors of random ball coverage algorithm and twice - random ball coverage algorithm.,width=302 ]in this paper , we strictly present the upper bound of the relative error of random ball coverage method in fractal network dimension calculation .and we also yield a simple relative error upper bound of twice - random ball coverage method . for many real - world networks ,when the network diameter is sufficient enough this kind of relative error upper bound will tend to .therefore , if the network is sufficient enough , twice - random ball coverage is equivalent to the leat box number coverage in fractal dimension calculation and calculating fractal network dimension is not a np - hard problem . for the networks which isnot sufficient enough , we can calculate random ball number many times and get the dimension , which is also very effective and accuracy .the above discussions can lead another problem naturally .we also can define random full box coverage .a full box with diameter is a set of nodes , such that any other nodes out of the box is added to the box will make the box diameter larger or equal to .the random full box coverage algorithm with diameter can be defined as : at each step we randomly choose a uncovered node as the first node of the box , and select the uncovered nodes to the box until the box become full .we guess the random full box coverage algorithm is equivalent to the least box number covering algorithm in statistic sense . in the futurewe will do some deep researches about this problem .the authors want to thank chaoming song , qiang yuan for provide some useful information .this work is partially supported by 985 projet and nsfc under the grant no. , no. andno. . 99 r. albert , a .-barabasi , rev .phys . * 74 * , 47 ( 2002 ) .m. e. j. newman , siam rev .* 45 * , 167 - 256 ( 2003 ) .s. boccaletti , v.latora , y. moreno , m. chavez , and d .- u .hwang , physics report .* 424 * , 175 - 308 ( 2006 ) .song c , havlin s and makse h a , nature * 433 * 392 ( 2005 ) .song c , havlin s and makse h a , nature physics * 2 * 275 ( 2006 ) .c. song , l. k. gallos , s. havlin and h. a. makse , j .stat . mech.p03006 ( 2007 ) .j. s. kim , k .-goh , b. kahng , and d. kim .arxiv : cond - mat/0701504 ( 2007 ) . h. jeong , b. tombor , r. albert , z. n. oltvai and a .-barabasi , nature * 407 * 651 - 654 , ( 2000 ) .w. x. zhou , z. q. jiang , d.r sornette .arxiv : cond - mat/0605676 ( 2006 ) . 
| least box number coverage problem for calculating dimension of fractal networks is a np - hard problem . meanwhile , the time complexity of random ball coverage for calculating dimension is very low . in this paper we strictly present the upper bound of relative error for random ball coverage algorithm . we also propose twice - random ball coverage algorithm for calculating network dimension . for many real - world fractal networks , when the network diameter is sufficient large , the relative error upper bound of this method will tend to . in this point of view , given a proper acceptable error range , the dimension calculation is not a np - hard problem , but p problem instead . key words : fractal network , dimension , random ball coverage pacs : 89.75.hc , 89.75.da , 05.45.df |
the shape of the gamma - ray burst s ( grb s ) ms resolution lightcurves in the batse gamma - ray burst catalog carry an immense amount of information .however , the chaging s / n ratio complicates the detailed comparative analysis of the lightcurves . during the morphological analysis of the grbs a subclass with fast rise - exponential decay ( fred ) pulse shape were observed .this shape is quite attractive because its fenomenological simplicity .here we use a special wavelet transformation with a kernel function based on a fred - like pulse .similar approach have been used by , but their base functions were constructed differently .we have used the discrete wavelet transform ( dwt ) matrix formalism ( e.g. ) : here for an input data vector , the one step of the wavelet transform is a multiplication with a special matrix : \ ] ] where the are the 4-stage fir filter parameters defining the wavelet . to obtain these valueswe require the matrix to be orthogonal ( e.g. no information loss ) , and the output of the even ( derivating - like ) rows should disappear for a constant and for a fred - like input signal .these requirements give two different solutions for : a rapidly oscillating one and a smooth one . in the followingwe ll use the later one .our filter process with the fred wavelet transform consists of the usual digital filtering steps . during the filtering we ll loose some information, however this could be quite small . to demonstrate the efficiency of the algorithm on fig . 1 .we reconstructed the 100 - 320 kev 64ms ligthcurve of batse trigger 0143 from the biggest 5% of the total wavelet coefficients .the excellent reconstruction of each individual pulse is obvious .the wavelet transformation algorithm divides the phase - space into equal area regions . on fig . 2 .the wavelet transform are shown . herethe dark segments are the really important coefficients - however they cover only a small portion of the total area which explains the high efficiency of the reconstruction .for a frequency - like wavelet scale analysis we would like to create a power - spectrum like distribution along the frequency axis. however , one should be careful . in the classical signalprocessing one uses the power spectrum from the fourier - transform , because the signals are electromagnetic - like usually , e.g. the power ( or energy ) is proportional to the square of the signal . herethe lightcurves measure photon counts so the signal s energy is simply the sum of the counts .for this reason we approximate the signal s strength as a sum the magnitude of the coefficients along the given frequency rows .this signal s strength indicate on fig 3 .( batse trigger 0143 ) the maximum power to be around . in each energy channelthe signals are similar ( observe the logarithmic scale ) , because the signal is strong even at high energies ( channel 4 ) . for batse trigger 7343 ( with optical redshift ) one can observe a strong high frequency cutoff : some of the signal s high frequency part is missing .however all the 4 channels are visible , while the maximum power is around .it is interesting to remark that the signal s shape is quite similar to trigger 0143 if that is scaled down by a factor of in frequency .the fred wavelet transform measures the similarity between the different wavelet kernel functions ( here all are fred - based ) and the actual signal . 
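Purely as an illustration of the pyramid filtering and magnitude-based selection described above, the following hedged Python sketch performs a generic one-level 4-tap wavelet step, a multi-level decomposition, hard thresholding that keeps only the largest 5% of coefficients (as in the reconstruction of trigger 0143), and the per-scale signal strength as the sum of coefficient magnitudes. The filter rows `smooth` and `detail` are placeholders: the actual FRED-based values follow from the orthogonality and vanishing conditions stated in the text and are not reproduced here.

```python
import numpy as np

def dwt_step(x, smooth, detail):
    """One pyramid level: circular 4-tap filtering with downsampling by 2."""
    n = len(x)
    s = np.array([sum(smooth[k] * x[(2 * i + k) % n] for k in range(4)) for i in range(n // 2)])
    d = np.array([sum(detail[k] * x[(2 * i + k) % n] for k in range(4)) for i in range(n // 2)])
    return s, d

def multi_level_dwt(x, smooth, detail, levels):
    coeffs, s = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        s, d = dwt_step(s, smooth, detail)
        coeffs.append(d)               # detail coefficients, finest scale first
    coeffs.append(s)                   # remaining smooth part
    return coeffs

def keep_largest_fraction(coeffs, fraction=0.05):
    """Hard threshold keeping the given fraction of largest-magnitude coefficients."""
    flat = np.concatenate(coeffs)
    cut = np.quantile(np.abs(flat), 1.0 - fraction)
    return [np.where(np.abs(c) >= cut, c, 0.0) for c in coeffs]

def scale_strength(coeffs):
    """Signal strength per scale: sum of coefficient magnitudes along each row."""
    return [np.abs(c).sum() for c in coeffs]
```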
to quantify the similarity we define a magnitude cutoff in the wavelet space such that the _ reconstructed _ value from the filtered data is similar to the original values . the value and its error from the photon count statistics can easily be determined from the original lightcurve . to keep only the important features we define the breakpoint where , and using this cut - off point it is possible to define a compressed size ( cs ) for a burst : it is the number of bins ( in the wavelet space ) needed to restore the curve at the break . the cs value is a robust measure quantifying the similarity between the fred kernel and the different channels ' lightcurves . our analysis suggests that all the low energy channels # 1 , # 2 and # 3 behave similarly , while the high energy ( kev ) channel is different ( which is not very surprising , e.g. ) . fig . 4 shows the _ ratio _ of the cs values against the total count lightcurve 's cs for channels 2 and 4 . these distributions indicate that the pulse shapes in the long bursts ' high energy channel are more fred - like than in the lower ones - and this is _ independent _ of the actual fred time - scale ! kouveliotou , c. , paciesas , w. s. , fishman , g. j. , meegan , c. a. , & wilson , r. b. 1992 , the compton observatory science workshop , 61 ; meegan , c. , malozzi , r. s. , six , f. , & connaughton , v. 2001 , current batse gamma - ray burst catalog , http://gammaray.msfc.nasa.gov/batse/grb/catalog ; norris , j. p. , nemiroff , r. j. , bonnell , j. t. , paciesas , w. s. , kouveliotou , c. , fishman , g. j. , & meegan , c. a. 1994 , american astronomical society meeting , 26 , 1333 ; norris , j. p. , scargle , j. d. , bonnell , j. t. , & nemiroff , r. j. 1998 , gamma - ray bursts , 4th huntsville symposium , 171 ; press , w. h. , teukolsky , s. a. , vetterling , w. t. , & flannery , b. p. 1992 , numerical recipes in fortran , second edition , cambridge university press , cambridge | the gamma - ray bursts ' lightcurves have been analyzed using a special wavelet transformation . the applied wavelet base is based on a typical fast rise - exponential decay ( fred ) pulse . the shape of the wavelet coefficients ' total distribution is determined on the observational frequency grid . our analysis indicates that the pulses in the long bursts ' high energy channel lightcurves are more fred - like than in the lower ones , independently of the actual physical time - scale . address = laboratory for information technology , eötvös university , h-1117 budapest , pázmány p. s. 1/a , hungary ; address = department of physics , bolyai military university , h-1456 budapest , pob 12 , hungary ; address = astronomical institute of the charles university , v holešovičkách 2 , cz-180 00 prague 8 , czech republic , altaddress = stockholm observatory , albanova , se-106 91 stockholm , sweden ; address = konkoly observatory , h-1525 budapest , pob 67 , hungary |
following milo et al . , we perform 3- and 4-node motif detection on the transcriptional regulatory networks of _ e. coli _ ( version ) and _ s. cerevisiae _ , using their freely available network data and software ( mfinder version ) . generation of randomized networks from the actual network is performed according to one of three null models : an edge - swapping algorithm , an edge - matching algorithm , and a monte carlo algorithm , all described in detail in . because significance results are similar among models ( cf . _ results _ and ) , emphasis in this note is placed on the edge - swapping algorithm , a markov chain procedure that repeatedly swaps the target nodes between pairs of edges . z - scores are computed from the mean and standard deviation of the count of a particular subgraph within an ensemble of at least 1,000 randomized networks . we quantify the correlation between the counts of any two subgraphs over the course of a randomization process using mutual information . mutual information captures correlation between two random variables even when the relationship is nonlinear ( unlike , e.g. , the correlation coefficient ) or non - monotonic ( unlike , e.g. , spearman 's rho ) . in this study , the counts of the two subgraphs at each iteration of the edge - swapping process are used to increment a counts matrix , from which the joint probability distribution $p(n_i , n_j)$ is obtained by normalization . the mutual information is computed as $I_{ij } = \sum_{n_i , n_j } p(n_i , n_j)\log_2\frac{p(n_i , n_j)}{p(n_i)\,p(n_j)}$ , where the log is base 2 to give $I_{ij}$ in bits , and $p(n_i ) = \sum_{n_j } p(n_i , n_j)$ and $p(n_j ) = \sum_{n_i } p(n_i , n_j)$ are the marginals . mutual information is bounded from below by zero ( attained when there is no correlation and the subgraph counts are independent of each other , i.e. $p(n_i , n_j ) = p(n_i)\,p(n_j)$ ) and bounded from above by the smaller of the two variables ' entropies , where $H_i = -\sum_{n_i } p(n_i)\log_2 p(n_i)$ and the analogous expression $H_j$ are the entropies of the $i$ and $j$ subgraphs ' counts respectively . in order to obtain a statistic that can be compared across all subgraph pairs , we normalize by the average entropy , defining $\hat{I}_{ij } = I_{ij}/[(H_i + H_j)/2]$ as our measure of correlation . note that $0\le\hat{I}_{ij}\le 1$ , with $\hat{I}_{ij}=0$ when the counts are independent and $\hat{I}_{ij}=1$ when each count completely determines the other . we find qualitatively similar results ( cf . _ results _ ) when normalizing by the minimum , instead of the average , entropy . only four 3-node subgraphs are present in the transcriptional network of _ e. coli _ , and a z - score analysis of the type performed in reveals a curious effect . specifically , with respect to ensembles generated via any of the edge - swapping , edge - matching , and monte carlo algorithms , the z - scores of three of the subgraphs ( ids 6 , 12 , and 36 ; cf . [ dict ] ) are either very close or equal to the negative of the z - score of the fourth subgraph ( the feed - forward loop , id 38 ) ; see fig . [ table ] . in fact , as shown in fig . [ overlap ] , the absolute value of the difference between the counts within the actual network and the counts within a sample randomized network at each iteration of the edge - swapping algorithm is the same among all four subgraphs for the first 1,000 iterations . the interpretation is simple : as detailed in fig . [ toy ] , each time an edge of a feed - forward loop is swapped with an external edge , the feed - forward loop is destroyed and one of each of the other three subgraphs is created ; using subgraph ids we may denote this process as $38 \to 6 + 12 + 36$ . since this process accounts for the overwhelming majority of the changes in count of the latter three subgraphs , there is extremely high correlation among the counts of all four subgraphs in each randomized network , and the magnitudes of their z - scores are very close .
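a minimal sketch of the normalized mutual information measure defined above , computed directly from the two count series recorded during the randomization ( python assumed ; the handling of the degenerate constant - count case is our choice ) :

```python
import numpy as np

def normalized_mutual_information(counts_i, counts_j):
    """Normalized mutual information between two integer count series.

    counts_i, counts_j: counts of two subgraphs recorded at each iteration of
    the edge-swapping randomization.  Returns I / ((H_i + H_j)/2), i.e. the
    measure used in the text (0 = independent, 1 = mutually determined)."""
    counts_i, counts_j = np.asarray(counts_i), np.asarray(counts_j)
    # joint histogram over observed count values -> joint probability p(n_i, n_j)
    vi, ii = np.unique(counts_i, return_inverse=True)
    vj, ij = np.unique(counts_j, return_inverse=True)
    joint = np.zeros((len(vi), len(vj)))
    np.add.at(joint, (ii, ij), 1.0)
    p = joint / joint.sum()
    pi = p.sum(axis=1)                    # marginal of subgraph i counts
    pj = p.sum(axis=0)                    # marginal of subgraph j counts
    nz = p > 0
    I = np.sum(p[nz] * np.log2(p[nz] / np.outer(pi, pj)[nz]))
    Hi = -np.sum(pi[pi > 0] * np.log2(pi[pi > 0]))
    Hj = -np.sum(pj[pj > 0] * np.log2(pj[pj > 0]))
    if Hi + Hj == 0:                      # both counts constant: treat as uncorrelated
        return 0.0
    return I / (0.5 * (Hi + Hj))

# toy example: perfectly (anti-)correlated counts give a value of ~1
a = np.array([5, 4, 3, 4, 5, 3, 2, 3])
print(normalized_mutual_information(a, 10 - a))
```

the variant mentioned in the text that normalizes by the minimum entropy would simply replace the denominator in the last line by `min(Hi, Hj)` .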
[ figure caption : z - scores are calculated with respect to an ensemble of 1,000 randomized networks , generated via the edge - swapping algorithm . plots show the count of each subgraph during the generation of one randomized network ; each iteration corresponds to one edge - swap . ] [ figure caption : absolute difference between the count in the actual network and the count at each iteration of the edge - swapping algorithm , for the subgraphs in fig . [ table ] ; note that all four curves completely overlap . ] to quantify and extend the detection of correlations such as that just described , we use a normalized mutual information measure , as detailed in _ methods _ . for the cases of 3- and 4-node subgraphs in both the _ e. coli _ and _ s. cerevisiae _ transcriptional networks , the measure $\hat{I}_{ij}$ ( cf . [ a ] ) is computed between all pairs of subgraphs $i$ and $j$ that appear during the randomization of a network via the edge - swapping algorithm . figs . [ mi3]-[mi4 ] show the matrices ; the row and column order is determined by summing along either direction and sorting , which tends to group together sets of subgraphs with high pairwise correlations . during randomization of the _ e. coli _ network , a set of four 3-node subgraphs ( ids 6 , 12 , 36 , and 38 ; cf . [ dict ] ) is highly correlated , as shown by the bright 4-by-4 square in fig . [ mi3]a . the high correlation is simply the result of the effect described in the previous section , in which any of three swaps overwhelmingly converts a feed - forward loop ( id 38 ) into three other subgraphs ( ids 6 , 12 , and 36 ) . in fact the same set of high correlations is seen during the randomization of the _ s. cerevisiae _ network , as shown by the upper left 4-by-4 square in fig . [ mi3]b . there are additional correlated sets in _ s. cerevisiae _ : subgraphs 14 , 74 , and 102 are highly correlated , as indicated by the bright 3-by-3 square involving these ids in fig . [ mi3]b , and subgraphs 74 and 108 , as well as 14 and 46 , are correlated , as indicated by the relatively bright entries at these coordinate pairs in fig . [ mi3]b . respectively , these correlations are due to analogous subgraph conversions ( in the notation of eqn . [ eff1 ] ) , of which one may convince oneself with the aid of fig . [ dict ] . note that although subgraphs 14 , 102 and 108 participate in the highly correlated effects described here , none changes in number significantly enough upon randomization to be labeled a motif in the _ s. cerevisiae _ network ( subgraphs 46 and 74 do not appear in the actual network , only during the course of the randomization ) . our analysis reveals correlations between counts of 4-node subgraphs as well . as indicated by the bright blocks and off - diagonal elements in fig . [ mi4 ] , several sets of subgraphs are highly correlated during the randomization of both the _ e. coli _ and _ s. cerevisiae _ networks . correlations are less easily interpreted in the 4-node case than in the 3-node case , but one must nonetheless remain aware of such artifacts of the randomization process when identifying subgraphs as motifs . we note that the bi - fan ( id 204 ) , the 4-node subgraph commonly identified as a motif in a variety of networks including both transcriptional networks studied here , does not exhibit particularly high correlation with any other subgraph under our measure in either the _ e. coli _ or the _ s. cerevisiae _ network .
we find results qualitatively similar to figs . [ mi3]-[mi4 ] when normalizing by the minimum , instead of the average , entropy in eqn . [ a ] . the technique we describe here can be extended to the detection of subgraphs of any size . [ figure [ mi3 ] caption : normalized mutual information $\hat{I}_{ij}$ ( cf . [ a ] ) between all pairs of 3-node subgraphs $i$ and $j$ that appear during the randomization of a network via the edge - swapping algorithm , for the transcriptional networks of _ e. coli _ ( a ) and _ s. cerevisiae _ ( b ) . subgraphs are labeled as in fig . [ dict ] ; the row and column order is determined by summing along either direction and sorting . ] [ figure [ mi4 ] caption : the same quantity between all pairs of 4-node subgraphs , with subgraphs labeled as in alon et al . 's `` motif dictionary '' ; the row and column order is again determined by summing along either direction and sorting . ] by quantifying correlations among subgraph counts during 3- and 4-node motif detection in the transcriptional networks of _ e. coli _ and _ s. cerevisiae _ , we reveal that motifs come in sets : the destruction of a subgraph during the randomization process can be highly correlated with the creation of one or more other subgraphs . the correlations are easily understood in the 3-node case , and we present an information - theoretic tool to extract such correlations in general . it has not escaped our attention that this observation serves as the basis for a more principled clustering of subgraphs based on correlations ( e.g. , by mixture modeling in which the subgraph count is a mixture of several states , with counts conditionally independent given the state ) . the correlations among subgraphs are artifacts of the algorithm used to generate the ensemble of randomized networks ; although we demonstrate their existence here in the context of only one randomization algorithm , the edge - swapping algorithm , they occur in other commonly used algorithms as well , as evidenced by mutually consistent effects on the z - scores . these findings do not necessarily invalidate the statuses of commonly identified motifs ( it remains the case , for example , that there are significantly more feed - forward loops in the transcriptional network of _ e. coli _ than in a random network generated under almost any commonly used null model ) ; they do argue , however , that the limitations of the randomization scheme should be fully recognized during the motif - finding process .
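for completeness , a sketch of the degree - preserving edge - swapping randomization referred to throughout this note ( python assumed ; the rejection rules for self - loops and duplicate edges are our choice and are not necessarily identical to the mfinder implementation ) :

```python
import random

def edge_swap_randomize(edges, n_swaps, seed=0):
    """Degree-preserving randomization: repeatedly swap the target nodes
    between two randomly chosen directed edges, (a,b),(c,d) -> (a,d),(c,b).

    `edges` is a list of (source, target) pairs; returns a randomized copy."""
    rng = random.Random(seed)
    edges = list(edges)
    edge_set = set(edges)
    swaps_done = 0
    while swaps_done < n_swaps:
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if a == d or c == b:                              # would create a self-loop
            continue
        if (a, d) in edge_set or (c, b) in edge_set:      # would duplicate an edge
            continue
        edge_set.difference_update([(a, b), (c, d)])
        edge_set.update([(a, d), (c, b)])
        edges[i], edges[j] = (a, d), (c, b)
        swaps_done += 1
    return edges

# toy usage: randomize a small directed network (in/out degrees are preserved)
toy = [(1, 2), (2, 3), (1, 3), (4, 1), (4, 2), (3, 4)]
print(edge_swap_randomize(toy, n_swaps=10))
```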
| the identification of motifs subgraphs that appear significantly more often in a particular network than in an ensemble of randomized networks has become a ubiquitous method for uncovering potentially important subunits within networks drawn from a wide variety of fields . we find that the most common algorithms used to generate the ensemble from the real network change subgraph counts in a highly correlated manner , so that one subgraph s status as a motif may not be independent from the statuses of the other subgraphs . we demonstrate this effect for the problem of 3- and 4-node motif identification in the transcriptional regulatory networks of _ e. coli _ and _ s. cerevisiae _ in which randomized networks are generated via an edge - swapping algorithm ( milo et al . , _ science _ * 298*:824 , 2002 ) . we show that correlations among 3-node subgraphs are easily interpreted , and we present an information - theoretic tool that may be used to identify correlations among subgraphs of any size . identifying motifs has become a standard way to probe the functional significance of biological , technological , and sociological networks . a motif is commonly defined as a subgraph whose number of appearances in a particular network is significantly greater than its average number of appearances in an ensemble of networks generated under some null model . the typical null model prescribes an algorithm by which many randomized networks can be produced from the original network ( see , e.g. , milo et al . for a review and comparison of several such algorithms ) . while using an ensemble generated from the actual network often preserves features of the network that are desired for fair comparison ( e.g. the degree distribution ) , this method may also induce unintended correlations in subgraph counts that ultimately influence the labeling of subgraphs as motifs . the purpose of this note is to demonstrate and interpret such correlations in a simple case and describe how mutual information may be used to identify such correlations in general . |
with the rapid growth of wireless data traffic , multi - user interference has become a major bottleneck that limits the performance of existing wireless communication services . mathematically , we can model such an interference - limited communication system as a multi - user interference channel in which a number of linearly interfering transmitters simultaneously send private data to their respective receivers . exploiting time / space / frequency / code diversity is an effective approach to mitigate / manage multi - user interference . for instance , when the transmitters and/or receivers are equipped with multiple antennas , a joint optimization of the physical layer transmit - receive beamforming vectors for all users can efficiently mitigate multi - user interference ; when all transmitters and receivers are equipped with a single antenna , one way to control / mitigate multi - user interference is to impose certain frequency restrictions or transmission power limits on the frequency resources used by each transmitter . in particular , orthogonal frequency division multiple access ( ofdma ) is a form of multi - carrier transmission and is suited for frequency selective channels and high data rates . this technique effectively decomposes a frequency - selective wide - band channel into a group of non - selective narrowband subchannels ( subcarriers ) , which makes it robust against large delay spreads by preserving orthogonality in the frequency domain . moreover , the ingenious introduction of cyclic redundancy at the transmitter reduces the complexity to only fft processing and one - tap scalar equalization at the receiver . conventional ofdma schemes _ preassign _ subcarriers to users in a nonoverlapping way , so users ( transmitting on different subcarriers ) cause no interference to each other . although the ofdma scheme is well suited to a high - speed communication context where quality of service is a major concern , it can lead to inefficient bandwidth utilization . this is because the preassignment of subcarriers cannot adapt to traffic load and channel fluctuations in space and time ; i.e. , a subcarrier preassigned to a user cannot be released to other users even if it is unusable when the user 's channel conditions are poor .
to adapt these fluctuations and improve the overall system s throughput , ofdma based subcarrier allocation networks such as worldwide interoperability for microwave access ( wimax) and long term evolution ( lte) should be equipped with _dynamic _ subcarrier and power allocation algorithms .in particular , a dynamic ofdma based subcarrier and power allocation algorithm is well suited for the _ dense _ femtocell downlink system , where a large number of femtocells close to each other are deployed in a macrocell .the joint optimization of subcarrier and power allocations for the multi - user ofdma system is a nonconvex problem , therefore various heuristics approaches have been proposed for this problem .very recently , the authors in proposed a dynamic joint frequency and transmission power allocation scheme called enhanced dynamic frequency planning ( edfp ) .edfp is a hybrid centralized and distributed architecture , where a central broker first dynamically partitions subcarriers among neighboring cells so that the long - term cell - edge inter - cell interference is minimized , and then each cell independently allocates subcarriers and transmission power to its users in a way that its total transmission power is minimized .notice that nonconvex optimization problems are difficult to solve in general .however , not all nonconvex problems are hard to handle since the lack of convexity may be due to an inappropriate formulation , and many nonconvex optimization problems indeed admit a convex reformulation . therefore , nonconvexity of the joint subcarrier and power allocation problem for the multi - user ofdma system does not imply that it is computationally intractable ( strongly np - hard in terms of computational complexity theory ) .the aim of this paper is to characterize the computational complexity of the joint subcarrier and power allocation problem for the multi - user ofdma system ; i.e. , to categorize when the problem is ( strongly ) np - hard and when it is polynomial time solvable .in fact , the dynamic spectrum management problem _ without the ofdma constraint_ ( at most one user is allowed to transmit power on each subcarrier ) has been extensively studied in .it is shown in that the dynamic spectrum management problem is ( strongly ) np - hard when the number of subcarriers is greater than two , or when the number of users is greater than one .however , the analysis of these results is highly dependent on the crosstalk channel gains among users ; i.e. 
, some of the crosstalk channel gains are assumed to be large enough , and some of them are assumed to be zero .we shall see late ( in section [ sec - model ] ) that the magnitude of crosstalk channel gains has no influence on the user s transmission rate if all users are required to transmit power in an orthogonal manner .this makes the two problems ( the dynamic spectrum management problem without the ofdma constraint in and the joint subcarrier and power allocation problem with the ofdma constraint considered in this paper ) sharply different from each other .an interesting result shown in is that the optimal solution of the two - user _ sum - rate _ maximization problem is automatically ofdma if the crosstalk channel gains of the two users on each subcarrier are large enough .in addition , based on the fact that the duality gap of the dynamic spectrum management problem is zero as the number of subcarriers goes to infinity ( regardless of the convexity of the objective function) , dual decomposition algorithms have been proposed in for the nonconvex optimization problem of multi - carrier systems , again without considering the ofdma constraint . in this paper , we focus on the characterization of the computational complexity status of the joint subcarrier and power allocation problem for the _ multi - user ofdma system_. in particular , we consider two formulations of the joint subcarrier and power allocation problem .the first one is the problem of minimizing the total transmission power in the system subject to all users quality of service constraints , all users power budget constraints per subcarrier , and the ofdma constraint .the second one is the problem of maximizing the system utility ( including the sum - rate utility , the proportional fairness utility , the harmonic mean utility , and the min - rate utility ) while the total transmission power constraint of each user , individual power constraints on each subcarrier , and the ofdma constraint are respected .the main contributions of this paper are twofold .first , we show that the aforementioned two formulations of the joint subcarrier and power allocation problem are strongly np - hard . the proof is based on a polynomial time transformation from the -dimensional matching problem .the strong np - hardness results suggest that for a given ofdma system , computing the optimal subcarrier and power allocation strategy is generally intractable .thus , instead of insisting on finding an efficient algorithm that can find the global optimum of the joint subcarrier and power allocation problem , one has to settle with less ambitious goals , such as finding high quality approximate solutions or locally optimal solutions of the problem in polynomial time .second , we also identify several subclasses of the joint allocation problem which can be solved to global optimality or -global optimality in polynomial time .we therefore clearly delineate the set of computationally tractable problems within the general class of np - hard joint subcarrier and power allocation problems .specifically , we show in this paper that , when there is only a single user in the system or when the number of subcarriers and the number of users are equal to each other , the total transmission power minimization problem is polynomial time solvable ; when there is only a single user , the aforementioned four utility maximization problems are all polynomial time solvable .the rest of this paper is organized as follows . 
in section [ sec - model ] , we introduce the system model and give the two formulations of the joint subcarrier and power allocation problem for the multi - user ofdma system . in section [ sec - hard ] , we first give a brief introduction to computational complexity theory and then address the computational complexity of the joint subcarrier and power allocation problem . in particular , we show that the aforementioned two formulations of the joint subcarrier and power allocation problem are generally strongly np - hard . several subclasses of the joint allocation problem which are polynomial time solvable are identified in section [ sec - easy ] . finally , the conclusion is drawn in section [ sec - conclusion ] .

in this section , we introduce the system model and problem formulation . consider a multi - user ofdma system , where there are $K$ users ( transmitter - receiver pairs ) sharing $N$ subcarriers . throughout the paper , we assume that $N\geq K$ ; i.e. , the number of subcarriers is greater than or equal to the number of users . otherwise , the ofdma constraint is infeasible . denote the set of users and the set of subcarriers by ${\cal K}=\{1,2,\ldots,K\}$ and ${\cal N}=\{1,2,\ldots,N\}$ , respectively . for any $k\in{\cal K}$ and $n\in{\cal N}$ , suppose $s_k^n$ is the symbol that transmitter $k$ wishes to send to receiver $k$ on subcarrier $n$ ; then the received signal at receiver $k$ on subcarrier $n$ can be expressed as
\[ y_k^n=\sum_{j\in{\cal K}} h_{j,k}^n\, s_j^n + z_k^n , \]
where $h_{j,k}^n$ is the channel coefficient from transmitter $j$ to receiver $k$ on subcarrier $n$ and $z_k^n$ is the additive white gaussian noise ( awgn ) with distribution ${\cal CN}(0,\eta_k^n)$ . denoting the power of $s_j^n$ by $p_j^n$ , i.e. , $p_j^n=\mathbb{E}\,|s_j^n|^2$ , the received power from transmitter $j$ at receiver $k$ on subcarrier $n$ is given by $\alpha_{j,k}^n p_j^n$ , where $\alpha_{j,k}^n=|h_{j,k}^n|^2$ stands for the channel gain from transmitter $j$ to receiver $k$ on subcarrier $n$ . treating interference as noise , we can write the sinr of receiver $k$ on subcarrier $n$ as
\[ \mathrm{sinr}_k^n=\frac{\alpha_{k,k}^n p_k^n}{\eta_k^n+\sum_{j\neq k}\alpha_{j,k}^n p_j^n} \]
and transmitter $k$ 's achievable data rate ( bits / sec ) as
\[ r_k=\sum_{n\in{\cal N}}\log_2\left(1+\mathrm{sinr}_k^n\right) . \]
in this paper , we consider the joint subcarrier and power allocation problem for the multi - user ofdma system . mathematically , a power allocation vector $\{p_k^n\}$ is said to satisfy the ofdma property if the following equations hold true :
\[ p_k^n p_j^n=0,~\forall~j\neq k,~k,\,j\in{\cal K},~n\in{\cal N} . \]
the above equations basically say that at most one user is allowed to transmit power on each subcarrier . therefore , the joint subcarrier and power allocation problem for the multi - user ofdma system can be formulated as
\[ \mathrm{(p)}\quad\begin{array}{cl} \min & \displaystyle\sum_{k\in{\cal K}}\sum_{n\in{\cal N}} p_k^n \\ \text{s.t.} & r_k\geq\gamma_k,~k\in{\cal K} , \\ & P_k^n\geq p_k^n\geq 0,~k\in{\cal K},~n\in{\cal N} , \\ & p_k^n p_j^n=0,~\forall~j\neq k,~k,\,j\in{\cal K},~n\in{\cal N} , \end{array} \]
where the objective function is the total transmission power of all users on all subcarriers , $\gamma_k$ is the desired transmission rate target of user $k$ , $P_k^n$ is the transmission power budget of user $k$ on subcarrier $n$ , and the last constraint is the ofdma constraint . due to the ofdma constraint , we know that for any $k\in{\cal K}$ and $n\in{\cal N}$
\[ \log_2\left(1+\mathrm{sinr}_k^n\right)=\log_2\left(1+\frac{\alpha_{k,k}^n p_k^n}{\eta_k^n}\right) . \]
in fact , if $p_k^n=0$ , the above equality holds trivially ; while if $p_k^n>0$ , it follows from the ofdma constraint that $p_j^n=0$ for all $j\neq k$ , and thus $\mathrm{sinr}_k^n=\alpha_{k,k}^n p_k^n/\eta_k^n$ , which shows that the above equality holds as well . thus , problem ( p ) is equivalent to
\[ \mathrm{(p1)}\quad\begin{array}{cl} \min & \displaystyle\sum_{k\in{\cal K}}\sum_{n\in{\cal N}} p_k^n \\ \text{s.t.} & \displaystyle\sum_{n\in{\cal N}}\log_2\left(1+\frac{\alpha_{k,k}^n p_k^{n}}{\eta_k^n}\right)\geq\gamma_k,~k\in{\cal K} , \\ & P_k^n\geq p_k^n\geq 0,~k\in{\cal K},~n\in{\cal N} , \\ & p_k^n p_j^n=0,~\forall~j\neq k,~k,\,j\in{\cal K},~n\in{\cal N} . \end{array} \]
by introducing a group of binary variables $\{x_k^n\}$ , problem ( p1 ) can be reformulated as
\[ \mathrm{(p2)}\quad\begin{array}{cl} \min & \displaystyle\sum_{k\in{\cal K}}\sum_{n\in{\cal N}} p_k^n \\ \text{s.t.} & \displaystyle\sum_{n\in{\cal N}}\log_2\left(1+\frac{\alpha_{k,k}^n p_k^{n}}{\eta_k^n}\right)\geq\gamma_k,~k\in{\cal K} , \\ & x_k^n P_k^n\geq p_k^n\geq 0,~k\in{\cal K},~n\in{\cal N} , \\ & x_k^n\in\{0,1\},~k\in{\cal K},~n\in{\cal N} , \\ & \displaystyle\sum_{k\in{\cal K}} x_k^n\leq 1,~n\in{\cal N} , \end{array} \]
where the binary variable $x_k^n=1$ if user $k$ transmits power on subcarrier $n$ , or $x_k^n=0$ otherwise . the last constraint stands for the ofdma constraint . problem ( p2 ) can be dealt with by a two - stage approach . at the first stage , we solve the subcarrier allocation problem , i.e. , determining the binary variables $\{x_k^n\}$ , which is equivalent to partitioning the set of subcarriers ${\cal N}$ into nonoverlapping groups $\{{\cal N}_k\}$ . at the second stage , we solve the power allocation problem , i.e. , solving $K$ decoupled power allocation problems
\[ \begin{array}{cl} \min & \displaystyle\sum_{n\in{\cal N}_k} p_k^n \\ \text{s.t.} & \displaystyle\sum_{n\in{\cal N}_k}\log_2\left(1+\frac{\alpha_{k,k}^n p_k^{n}}{\eta_k^n}\right)\geq\gamma_k , \\ & P_k^n\geq p_k^n\geq 0,~n\in{\cal N}_k . \end{array} \]
the problem at the second stage is convex , and thus is easy to solve . to sum up , the joint subcarrier and power allocation problem can be equivalently formulated as ( p ) , ( p1 ) , or ( p2 ) . formulation ( p2 ) is intuitive and easy to understand , whereas formulation ( p1 ) is compact and easy to analyze . the analysis of this paper is mainly based on ( p1 ) . besides the total transmission power minimization problem , we also consider the utility maximization problem for the multi - user ofdma system , which can be expressed as
\[ \mathrm{(u)}\quad\begin{array}{cl} \max & u\left(r_1,r_2,\ldots,r_K\right) \\ \text{s.t.} & \displaystyle\sum_{n\in{\cal N}} p_k^{n}\leq P_{k},~k\in{\cal K} , \\ & P_k^n\geq p_k^n\geq 0,~k\in{\cal K},~n\in{\cal N} , \\ & p_k^n p_j^n=0,~\forall~j\neq k,~k,\,j\in{\cal K},~n\in{\cal N} , \end{array} \]
where $u(\cdot)$ denotes the system utility function and $P_k$ is the power budget of transmitter $k$ . four popular system utility functions are
* sum - rate utility : $u=\sum_{k\in{\cal K}} r_k$ ;
* proportional fairness utility : $u=\sum_{k\in{\cal K}}\log r_k$ ;
* harmonic mean utility : $u=K\left(\sum_{k\in{\cal K}} r_k^{-1}\right)^{-1}$ ;
* min - rate utility : $u=\min_{k\in{\cal K}} r_k$ .
it is simple to see that $\frac{1}{K}\sum_{k\in{\cal K}} r_k\geq\big(\prod_{k\in{\cal K}} r_k\big)^{1/K}\geq K\big(\sum_{k\in{\cal K}} r_k^{-1}\big)^{-1}\geq\min_{k\in{\cal K}} r_k$ , and the equality holds true if and only if $r_1=r_2=\cdots=r_K$ .

in this section , we show that both the total power minimization problem ( p1 ) and the system utility maximization problem ( u ) are intrinsically intractable ( strongly np - hard in the sense of computational complexity theory ) , provided that the ratio of the number of subcarriers and the number of users , that is , $N/K$ , is equal to any given constant number . to begin with , we briefly introduce complexity theory in subsection [ sub - i ] . then we show problems ( p1 ) and ( u ) are strongly np - hard in subsections [ sub - ii ] and [ sub - iii ] , respectively .

in computational complexity theory , a problem is said to be np - hard if it is at least as hard as any problem in the class np ( problems that are solvable in nondeterministic polynomial time ) . the np class includes well known problems like the 3-colorability problem ( which is to check whether the nodes of a given graph can be colored in three colors so that each pair of adjacent nodes are colored differently ) . np - complete problems are the hardest problems in np in the sense that if any np - complete problem is solvable in polynomial time , then each problem in np is solvable in polynomial time .
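returning briefly to the formulations of the previous section , the sketch below evaluates the interference - free rates of a candidate allocation and checks the constraints of ( p1 ) . python is assumed , and the array layout , tolerance , and toy numbers are ours , not part of the paper .

```python
import numpy as np

def check_ofdma_allocation(p, alpha, eta, gamma, p_cap, tol=1e-9):
    """Check a candidate allocation against the constraints of (p1).

    p      : K x N matrix of transmit powers p_k^n
    alpha  : K x N matrix of direct-link gains alpha_{k,k}^n
    eta    : K x N matrix of noise powers eta_k^n
    gamma  : length-K vector of rate targets
    p_cap  : K x N matrix of per-subcarrier power budgets P_k^n"""
    ofdma_ok = np.all((p > tol).sum(axis=0) <= 1)         # at most one active user per subcarrier
    budget_ok = np.all((p >= -tol) & (p <= p_cap + tol))  # 0 <= p_k^n <= P_k^n
    rates = np.log2(1.0 + alpha * p / eta).sum(axis=1)    # interference-free rates under OFDMA
    rates_ok = np.all(rates >= np.asarray(gamma) - tol)
    return ofdma_ok and budget_ok and rates_ok, rates, p.sum()

# toy instance: 2 users, 3 subcarriers; user 0 gets subcarriers {0,1}, user 1 gets {2}
alpha = np.array([[1.0, 0.5, 0.2], [0.3, 0.4, 2.0]])
eta = np.ones((2, 3))
p_cap = 4.0 * np.ones((2, 3))
p = np.array([[3.0, 2.0, 0.0], [0.0, 0.0, 1.5]])
print(check_ofdma_allocation(p, alpha, eta, gamma=[2.0, 1.0], p_cap=p_cap))
```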
the -colorability problem is np - complete .a problem is strongly np - hard ( strongly np - complete ) if it is np - hard ( np - complete ) and it can not be solved by a pseudo - polynomial time algorithm . an algorithm that solves a problem is called a _pseudo - polynomial _ time algorithm if its time complexity function is bounded above by a polynomial function related to both of the length and the numerical values of the given data of the problem .this is in contrast to the polynomial time algorithm whose time complexity function depends only on the length of the given data of the problem .the -colorability problem is strongly np - complete .however , not all np - hard ( np - complete ) problems are strongly np - hard ( strongly np - complete ) . for instance, the partition problem is np - hard but not strongly np - hard .strongly np - hard or np - hard problems may not be in the class np , but they are at least as hard as any np - complete problem .it is widely believed that there can not exist a polynomial time algorithm to solve any np - complete , np - hard , or strongly np - hard problem ( unless p ) .thus , once an optimization problem is shown to be np - hard , we can no longer insist on having an efficient algorithm that can find its global optimum in polynomial time .instead , we have to settle with less ambitious goals , such as finding high quality approximate solutions or locally optimal solutions of the problem in polynomial time .the standard way to prove an optimization problem is np - hard is to establish the np - hardness of its corresponding feasibility problem or decision problem .the latter is the problem to decide if the global minimum of the optimization problem is below a given threshold or not .the output of a decision problem is either true or false .the feasibility or decision version of an optimization problem is usually in the class np .clearly , the feasibility or decision version of an optimization problem is always easier than the optimization problem itself , since the latter further requires finding the global minimum value and the minimizer .thus , if we show the feasibility or decision version of an optimization problem is np - hard , then the original optimization problem must also be np - hard . in complexity theory , to show a decision problem is np - hard , we usually follow three steps : 1 ) choose a suitable known np - complete decision problem 2 ) construct a _ polynomial time _transformation from any instance of to an instance of 3 ) prove under this transformation that any instance of problem is true if and only if the constructed instance of problem is true .furthermore , if the chosen np - complete problem is strongly np - complete , then problem is strongly np - hard . in the following two subsections ,we show that problems and are strongly np - hard .to analyze the computational complexity of problem , we consider its feasibility problem . if the feasibility problem is strongly np - hard , so is the original optimization problem .* feasibility problem of * . given a set of transmission rate levels individual power budgets per subcarrier , direct - link channel gains and noise powers check whether there exists a subcarrier and power allocation strategy such that & p_k^n\geq p_k^n\geq 0,~k\in{{\cal k}},~n\in{{\cal n}},\\[5pt ] & p_k^np_j^n=0,~\forall~j\neq k,{~k,\,j\in{{\cal k}},}~n\in{{\cal n}}. 
\end{array}\right.\ ] ] to analyze the computational complexity of the feasibility problem , we choose the following strongly np - complete problem ( see ) .* -dimensional matching problem with size .* let and be three different sets with and be a subset of .the -dimensional matching problem is to check whether there exists a match such that the following two conditions are satisfied : * for any two different tripes and we have and * , and next , we first show that any instance of the -dimensional matching problem corresponds to an instance of the transmission rate feasibility problem when .[ basic lemma ] [ thm - feasi]checking the feasibility problem of is strongly np - hard when .thus , problem itself is also strongly np - hard when .assume that .consider any instance of the -dimensional matching problem with and a relationship set we construct a multi - user multi - carrier system where there are users ( which correspond to set ) and subcarriers ( which correspond to set ) .more exactly , and define & { { \cal s}}_2=\left\{(k_x , l_z)\,|\,\left(k_x , j_y , l_z\right)\in { { \cal r}}\right\}.\\ \end{array}\ ] ] for each , the power budgets per subcarrier , the noise powers and the direct - link channel gains are given by 2 , \quad \text{if}~n\in { { { \cal z}}},\\[3pt ] \end{array}\right.\ ] ] 2 , \quad \text{if}~(k , n)\in { { \cal s}}_2;\\[3pt ] 3 , \quad \text{if}~(k , n)\notin { { \cal s}}_1\bigcup { { \cal s}}_2 , \end{array}\right.\ ] ] and 0.25 , \quad \text{if}~(k , n_1,n_2)\notin { { { \cal r } } } , \end{array}\right.\ ] ] respectively . letting the corresponding instance of problem is & 3\geq p_k^n\geq 0,~k\in{{\cal k}},~n\in{{\cal y}},\\[5pt ] & 2\geq p_k^n\geq 0,~k\in{{\cal k}},~n\in{{\cal z}},\\[5pt ] & p_k^np_j^n=0,~\forall~j\neq k,{~k,\,j\in{{\cal k}},}~n\in{{\cal n } } , \end{array}\right.\ ] ] where and are given in and , respectively .we are going to show that the answer to the -dimensional matching problem is yes if and only if the constructed problem is feasible .we first show that if the answer to the -dimensional matching problem is yes , then problem is feasible .in fact , if is a match for the -dimensional matching problem , then a feasible power allocation of problem is given by 2 , \quad \text{if}~n = l_z;\\[3pt ] 0 , \quad \text{if}~n\neq j_y~\text{or}~l_z .\end{array}\right.\ ] ] this is because , since is a match for the -dimensional matching problem , the above power allocation strategy is orthogonal to each other ( i.e. , ) .furthermore , we have for user & \overset{(a)}{=}&\log_2\left(1+{\dfrac}{\alpha_{k_x , k_x}^{j_y}p_{k_x}^{j_y}}{\eta_{k_x}^{j_y}}\right)+\log_2\left(1+{\dfrac}{\alpha_{k_x , k_x}^{l_z}p_{k_x}^{l_z}}{\eta_{k_x}^{l_z}}\right)\\[20pt ] & \overset{(b)}{=}&\log_2\left(1+{\dfrac}{1 * 3}{1}\right)+\log_2\left(1+{\dfrac}{1 * 2}{2}\right)\\[10pt ] & = & 3,\end{array}\ ] ] where is due to and is due to , .so is a feasible power allocation of problem . on the other hand , we show that if all the constraints in are satisfied , the answer to the -dimensional matching problem must be yes .notice that for any user , if the transmission rate , it must transmit on at least two subcarriers , for otherwise if it transmits only on one subcarrier , its maximum transmission rate is since there are users and subcarriers and at most one user is allowed to transmit on each subcarrier ( by the ofdma constraint ) , the feasibility of asks that each user in the network must transmit on _ exactly _ two subcarries . 
by the construction of the parameters of the system, one can verify that the corresponding direct - link channel gains of the user on the two subcarriers must be , for otherwise the transmission rate is at most furthermore , the fact that all users transmission rate requirements are satisfied implies that each user transmits on one subcarrier in with noise power and one in with noise power which makes the transmission rate equal to otherwise , * if one user transmits on two subcarries in the transmission rate is at most * if one user transmits on two subcarries in both with noise power , then at least one user will transmit on two subcarries in either with noise power or and the transmission rate of this user is at most therefore , problem is feasible if and only if that for all users there exist and such that and according to the construction of and , we see that is a match for the -dimensional matching problem .it is simple to check that the above transformation from the -dimensional matching problem to the feasibility problem can be done in polynomial time .since the -dimensional matching problem is strongly np - complete , we conclude that checking the feasibility of problem is strongly np - hard .therefore , the optimization problem is also strongly np - hard . to illustrate the above proof , we take the following -dimensional matching problem as an example , and it is simple to check that is a match for the above given instance of the -dimensional matching problem .based on this -dimensional matching problem , we construct a -user -carrier system with and . according to, we have and the proof of lemma [ thm - feasi ] suggests the following system parameters ( cf . ) : all power budgets per subcarrier are noise powers are except and all direct - link channel gains are except \alpha_{3_x,3_x}^{2_y}=\alpha_{3_x,3_x}^{2_z}=\alpha_{3_x,3_x}^{4_y}=\alpha_{3_x,3_x}^{3_z}=\alpha_{4_x,4_x}^{3_y}=\alpha_{4_x,4_x}^{1_z}=1 .\end{array}\ ] ] in this example , based on the match , we can construct an ofdma solution to the corresponding feasibility check problem , i.e. 
, \left(p_{2_x}^{1_y},p_{2_x}^{2_y},p_{2_x}^{3_y},p_{2_x}^{4_y},p_{2_x}^{1_z},p_{2_x}^{2_z},p_{2_x}^{3_z},p_{2_x}^{4_z}\right)=(3,0,0,0,0,2,0,0),\\[5pt ] \left(p_{3_x}^{1_y},p_{3_x}^{2_y},p_{3_x}^{3_y},p_{3_x}^{4_y},p_{3_x}^{1_z},p_{3_x}^{2_z},p_{3_x}^{3_z},p_{3_x}^{4_z}\right)=(0,0,0,3,0,0,2,0),\\[5pt ] \left(p_{4_x}^{1_y},p_{4_x}^{2_y},p_{4_x}^{3_y},p_{4_x}^{4_y},p_{4_x}^{1_z},p_{4_x}^{2_z},p_{4_x}^{3_z},p_{4_x}^{4_z}\right)=(0,0,3,0,2,0,0,0 ) .\end{array}\ ] ] on the other hand , to look for a feasible solution of problem , we have to make each user transmit on two subcarriers ( one with noise power and the other with noise power ) and the direct - link channel gains of the user on the corresponding two subcarriers be notice that and we can ask user to transmit on subcarriers and user to transmit on subcarriers and user to transmit on subcarriers and and user to transmit on subcarriers and , respectively .consequently , is a match for the given instance of the -dimensional matching problem .lemma [ thm - feasi ] shows that checking the feasibility of problem is strongly np - hard when based on this basic result , we can further prove that it is strongly np - hard to check the feasibility of problem when provided that is a strictly greater than one constant .we summarize this result as theorem [ thm - feasi2 ] , and relegate its proof to appendix [ app - thm - feasi2 ] .[ thm - feasi2 ] given any constant checking the feasibility of problem is strongly np - hard when .thus , problem itself is also strongly np - hard .remark 1 : problem remains strongly np - hard if the per - subcarrier power budget constraints there are replaced by the total power constraints or by setting or and using the same argument as in the proof of lemma [ thm - feasi ] and theorem [ thm - feasi2 ] , the strong np - hardness of the corresponding feasibility problems can be shown .in fact , all strong np - hardness results of problems and in this paper also hold true for problems with either of the above two total power constraints .remark 2 : another extension of problem is the so - called joint subcarrier and bit allocation problem . the goal of the joint subcarrier and bit allocation problem is to allocate subcarriers to users and at the same time allocate transmission bits to each user - subcarrier pair such that the total transmission power is minimized and the ofdma constraints and all users transmission requirements are satisfied .mathematically , the problem can be formulated as \text{s.t .} & \displaystyle \sum_{n\in{{\cal n } } } r_k^n\geq \gamma_k,~k\in{{\cal k}},\\[15pt ] & \displaystyle p_k^n={(2^{r_k^n}-1)\eta_k^n}/{\alpha_{k , k}^n},~k\in{{\cal k}},~n\in{{\cal n } } , \\[5pt ] & r_k^n\in\left\{r_1,r_2, ... ,r_m,0\right\},~k\in{{\cal k}},~n\in{{\cal n}},\\[5pt ] & p_k^np_j^n=0,~\forall~j\neqk,{~k,\,j\in{{\cal k}},}~n\in{{\cal n}}. \end{array}\]]the second constraint in says that ( cf . )is necessary for user transmitting on subcarrier to achieve transmission rate , and the third constraint in enforces to take values in the possible transmission rate set given a total power budget , the decision version of problem is to ask whether there exists a feasible power and bit allocation strategy such that the optimal value of is less than or equal to it was shown in that the decision version of problem is strongly np - hard when .the proof is based on a polynomial time transformation from the scheduling problem . 
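as a small numerical sanity check on the construction in the proof of lemma [ thm - feasi ] , the snippet below recomputes the rates used there . the piecewise parameter definitions are partly garbled in the source , so the values are read off the explicit computation in the proof ( unit gains , power caps 3 and 2 , and noise powers 1 and 2 on the matched ${\cal y}$ and ${\cal z}$ subcarriers , and a degraded gain of 0.25 elsewhere ) ; python is assumed .

```python
import math

def rate(pairs):
    """Rate of a user transmitting power p on subcarriers with gain a and noise e."""
    return sum(math.log2(1.0 + a * p / e) for (a, p, e) in pairs)

# a matched user in the reduction: full power on one Y-subcarrier (cap 3, noise 1)
# and one Z-subcarrier (cap 2, noise 2), with unit gains on both
print(rate([(1.0, 3.0, 1.0), (1.0, 2.0, 2.0)]))   # 2 + 1 = 3, exactly the target gamma_k = 3

# a single subcarrier alone is never enough ...
print(rate([(1.0, 3.0, 1.0)]))                    # 2 < 3
# ... nor is a pair in which one of the gains is only 0.25
print(rate([(1.0, 3.0, 1.0), (0.25, 3.0, 1.0)]))  # 2 + log2(1.75) < 3
```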
in fact , by setting and using the same argument as in lemma 3.1 and theorem 3.1 , we can also show the strong np - hardness of the decision version of problem . in this subsection , we study the computational complexity of the system utility maximization problem for the multi - user ofdma system , where the utility is one of the four system utility functions introduced in section [ sec - model ] .theorem [ thm - uti2 ] establishes the strong np - hardness of the problem for the general case .the proof of theorem [ thm - uti2 ] can be found in appendix [ app - thm - uti2 ] .[ thm - uti2]given any constant the system utility maximization problem with or is strongly np - hard when remark 3 : the result in theorem [ thm - uti2 ] is different from the one in .it was shown in that the sum - rate utility maximization problem ( i.e. , ) is np - hard when the number of users is equal to the proof is based on a polynomial time transformation from the equipartition problem , which is known to be np - complete but not strongly np - complete .theorem [ thm - uti2 ] shows that the sum - rate utility maximization problem is _ strongly _ np - hard .the other three utility functions ( or ) are not considered in . the reference proved the np - hardness of the three utility maximization problems ( or ) in the two - user case by establishing a polynomial time transformation from the equipartition problem .however , the proof of is nonrigorous , since the given equipartition problem has a yes answer does not imply that the transmission rate of the two users ( at the solution of the corresponding utility maximization problem ) is equal to each other , and vice versa .the complexity status of problem with the proportional fairness utility , the harmonic mean utility , and the min - rate utility remains unknown when is fixed . in this section, we have shown that both the total power minimization problem and the system utility maximization problem are strongly np - hard .the basic idea of the proof is establishing a polynomial time transformation from the -dimensional matching problem to the decision version of problems and .the complexity result suggests that there are not polynomial time algorithms which can solve problems and to global optimality ( unless p ) .therefore , one should abandon efforts to find globally optimal subcarrier and power allocation strategy for problems and , and determining an approximately optimal subcarrier and power allocation strategy is more realistic in practice .in this section , we identify some easy cases when problem or problem can be solved in polynomial time . before doing this, we introduce a concept called _ strong _ polynomial time algorithm .a problem is said to admit a strong polynomial time algorithm if there exists an algorithm satisfying the following two conditions : * the complexity of the algorithm ( when applied to solve the problem ) depends only on the dimension of the problem and is a polynomial function of the dimension ; * the algorithm solves the problem to global optimality ( not just -global optimality ) .we remark that so far we do not know whether there exists a strong polynomial time algorithm to solve the general linear programming . 
when the interior - point algorithm is applied to solve the linear programming , it is only guaranteed to return an -optimal solution in polynomial time and the complexity of the interior - point algorithm depends on the factor .the best complexity results of solving the linear programming is still related to the condition number of the constraint matrix . in the following subsections , we identify four ( strong ) polynomial time solvable subclasses of problems and .more specifically , we first show that both problem and problem are ( strongly ) polynomial time solvable when there is only one user in the system ; see subsection [ sub - k=1 ] and subsection [ subsec - k=1 ] , respectively .the ( extended ) `` water - filling '' technique plays a fundamental role in proving the polynomial time complexity .then , we show in subsection [ subsecn = k ] that problem is strongly polynomial time solvable when the number of subcarriers is equal to the number of users . in this case, we can reformulate problem as an assignment problem for a complete bipartite graph , which can be solved in strong polynomial time .finally , we show the polynomial time complexity of problem with sum - rate utility without the total power constraint by transforming it into the polynomial time solvable hitchcock problem in subsection [ subsec - nopower ] .when there is only one user ( i.e. , ) , problem becomes \text{s.t .} & \displaystyle\sum_{n\in{{\cal n}}}\log_2\left(1+{\dfrac}{\alpha^np^{n}}{\eta^n}\right)\geq \gamma , \\[15pt ] & p^n\geq p^n\geq 0,~n\in{{\cal n}}. \end{array}\ ] ] we claim that solving problem is equivalent to finding a minimal such that the optimal value of problem \text{s.t . } & \displaystyle\sum_{n\in{{\cal n}}}p^n\leq p , \\[15pt ] & p^n\geq p^n\geq 0,~n\in{{\cal n}}\end{array}\ ] ] is equal to this is an important observation towards obtaining the closed - form solution of problem .in fact , if the optimal value of problem is , then the optimal value of problem is and vice versa . 
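the single - user problems discussed in this section admit a capped water - filling solution , described in detail in the following paragraphs . as a companion , here is a minimal sketch that finds the smallest total power meeting a rate target by bisecting on the water level ( python assumed ; the bracket , iteration count , and helper names are ours , and the text 's breakpoint - sorting initialization is replaced by a plain bisection ) .

```python
import numpy as np

def min_power_single_user(alpha, eta, p_cap, gamma, iters=100):
    """Capped water-filling for the single-user problem: smallest total power
    meeting the rate target gamma, with per-subcarrier caps p_cap.

    Bisection on the water level tau; p^n(tau) = min(P^n, max(0, tau - eta^n/alpha^n)).
    Returns (powers, total_power), or None if even full power misses the target."""
    alpha, eta, p_cap = map(np.asarray, (alpha, eta, p_cap))
    floor = eta / alpha                          # "ground level" eta^n / alpha^n

    def alloc(tau):
        return np.clip(tau - floor, 0.0, p_cap)

    def rate(p):
        return np.log2(1.0 + alpha * p / eta).sum()

    if rate(p_cap) < gamma:                      # infeasible even at full power
        return None
    lo, hi = floor.min(), (floor + p_cap).max()  # bracket the water level
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if rate(alloc(mid)) < gamma:
            lo = mid
        else:
            hi = mid
    p = alloc(hi)
    return p, p.sum()

# toy run: 4 subcarriers, rate target 5 bits/s/Hz
print(min_power_single_user(alpha=[2.0, 1.0, 0.5, 0.2], eta=[1, 1, 1, 1],
                            p_cap=[4, 4, 4, 4], gamma=5.0))
```

the same routine , with the stopping rule changed from hitting the rate target to exhausting the total power budget , covers the single - user utility maximization case discussed later in this section .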
in more details , for any fixed ,problem is strictly convex with respect to and hence has a unique solution .therefore , the set defined by & \displaystyle\sum_{n\in{{\cal n}}}p^n\leq { p^ * } , \\[15pt ] & p^n\geq p^n\geq 0,~n\in{{\cal n}}\end{array}\right.\ ] ] contains only one point , which must be the solution of problem .hence , problems and share the same solution .the solution to problem or problem is given by the following extended water - filling solution _ is chosen such that the objective value of problem is equal to _ and after obtaining the optimal , the optimal value of problem is given by for completeness , we show in appendix [ app-2 ] that indeed is the solution to problem and is actually the lagrangian multiplier corresponding to the constraint .we point out that the water - filling solution extends the conventional water - filling solution in in the following two respects .first , the conventional water - filling solution solves the power control problem without the power budget constraints per subcarrier , while the power control problem not only involves the total power constraint but also involves the power budget constraints per subcarrier .second , the parameter in the conventional water - filling solution is chosen such that while the parameter in is chosen such that the only left problem now is to find in such that the objective value of problem is equal to a natural way to find the desired is to perform a binary search on since the objective function of is an increasing function with respect to as is known , the efficiency of the binary search depends on the initial search interval of to derive a good lower and upper bound , we first order the sequence and without loss of generality , suppose that notice that it takes operations to order then we calculate the objective values of problem at denoting them by .it follows from the monotonicity of that if there exists an index such that then otherwise , we have for some and hence in this case , we can start the binary search from and it takes at most iterations to obtain an -optimal assume that where is a sufficiently large constant .then , we can conclude that problem is polynomial time solvable and the worst case complexity of solving it is remark 4 : the complexity status of the total power minimization problem remains unknown when is fixed .if the system has only one user , all the four system utility functions coincide and problem becomes \text{s.t .} & \displaystyle \sum_{n\in{{\cal n}}}p^n\leq p , \\[15pt ] & p^n\geq p^n\geq 0,~n\in{{\cal n}}. \end{array}\ ] ] the main difference between problem and problem lies in that , is a given constant in problem , but is an unknown parameter in problem .this feature of in problem enables us to design a strong polynomial time algorithm for the problem . without loss of generality , we assume that the parameters in problem satisfy otherwise , the solution to problem is for all notice that the solution to problem is given by the extended water - filling solution , where is chosen such that the desired here can be found by solving a univariate linear equation .in fact , in a similar fashion as in subsection [ sub - k=1 ] , we can order in and assume that holds .then we calculate the total transmission power at , denoting them by .it follows from the monotonicity of that if there is an index such that , then .otherwise , we have that for some . 
then for each we have = & \left\{\begin{array}{ll } p^n , \quad \text{if}~{\dfrac}{\eta^n}{\alpha^n}+p^n\leq b^{n^*};\\[10pt ] { \tau}-{\dfrac}{\eta^n}{\alpha^n } , \quad \text{if}~{\dfrac}{\eta^n}{\alpha^n } < b^{n^*}<{\dfrac}{\eta^n}{\alpha^n}+p^n;\\[10pt ] 0 , \quad \text{if}~{\dfrac}{\eta^n}{\alpha^n}\geq b^{n^*}. \end{array}\right . \end{aligned}\ ] ] therefore , the problem of finding the desired reduces to solve a univariate linear equation in terms of and the desired is obtained in a closed form . from the above discussion , we see that the complexity of finding the desired is remark 5 : the reference has addressed problem , but without power budget constraints per subcarrier .the reference has also shown that the sum - rate maximization problem is np - hard when when the number of users is equal to the number of subcarriers ( i.e. , ) , we transform problem into an assignment problem for a complete bipartite graph with nodes in polynomial time .specifically , we construct the bipartite graph with * the node set where and correspond to the set of users and the set of subcarriers , respectively ; * the edge set with the weight of edge \displaystyle\sum_{k\in{{\cal k}}}\sum_{n\in{{\cal n}}}p_k^n , \quad \text{otherwise}. \end{array}\right.\ ] ] therefore , problem can be equivalently reformulated as \text{s.t . }& \displaystyle \sum_{k\in{{\cal k } } } x_{k}^n=1,~n\in{{\cal n } } , \\[12pt ] & \displaystyle \sum_{n\in{{\cal n } } } x_{k}^n=1,~k\in{{\cal k}},\\[12pt ] & x_{k}^n\in\left\{0,1\right\},~k\in{{\cal k}},~n\in{{\cal n}}. \end{array}\ ] ] in the above problem , the binary variable is equal to if user transmits power on subcarrier and otherwise .the first constraint stands for the ofdma constraint , which requires that at most one user is allowed to transmit power on each subcarrier .the second constraint requires that each user must transmit on one subcarrier to satisfy its specified transmission rate requirement . from (* ? ? ? * theorem 11.1 ) , we know that the hungarian method solves problem to global optimality in operations .the hungarian method is in essence a primal - dual simplex algorithm for solving the linear program \text{s.t . }& \displaystyle \sum_{k\in{{\cal k } } } x_{k}^n=1,~n\in{{\caln } } , \\[12pt ] & \displaystyle \sum_{n\in{{\cal n } } } x_{k}^n=1,~k\in{{\cal k}},\\[12pt ] & 0\leq x_{k}^n\leq 1,~k\in{{\cal k}},~n\in{{\cal n } } , \end{array}\]]which is a relaxation of problem by replacing with in general , the primal - dual simplex method is not a polynomial time algorithm .when it is used to solve problem , however , it can return the _ global optimal integer solution _ of problem in operations .we remark that if the optimal value of problem satisfies the optimal solution to is otherwise , we will have that indicates that problem is infeasible .remark 6 : in fact , problem is also polynomial time solvable when where is a given constant integer .specifically , we can first partition subcarriers into nonempty subsets denote the number of ways to partition subcarriers into nonempty subsets by we show in appendix [ app - s ] that is upper bounded by after the partition of the subcarriers , the problem reduces to the case the only difference here is that the parameter is set to be \text{s.t . 
} & \displaystyle \sum_{l\in{{\cal n}}_n}\log_2\left(1+{\dfrac}{\alpha_{k , k}^lp_k^{l}}{\eta_k^l}\right)\geq \gamma_{k } , \\[15pt ] & p_k^l\geq p_k^l\geq0,~l\in { \cal n}_n\\ \end{array}\ ] ] if the above problem is feasible ; otherwise we know from subsection [ sub - k=1 ] that problem is polynomial time solvable under a mild assumption .actually , if problem is feasible and only contains a singleton , we have which is the same as the one in .then , for a given partition we can use the hungarian method to solve problem in operations , and assume the optimal value to be therefore , the optimal value of the original problem is if it is strictly less than otherwise the original problem is infeasible .it follows from the above analysis that problem with can be solved in operations .if we drop the total power constraint then problem with becomes & p_k^np_j^n=0,~\forall~j\neq k,{~k,\,j\in{{\cal k}},}~n\in{{\cal n}}. \end{array}\ ] ] problem is always feasible as ( for all and ) is a certificate for the feasibility .further , since problem does not involve the total power constraint for each user , its optimal solution is either or to solve problem in polynomial time , we consider transforming it into the polynomial time solvable hitchcock problem ( also known as transportation problem ) . * hitchcock problem*. suppose there are sources of some commodity , each with a supply of units , and terminals , each of which has a demand of units .suppose that the unit cost of transporting the commodity from source to terminal is the problem is how to satisfy the demands at a minimal cost ?in fact , by setting where we can see that problem is equivalent to the following hitchcock problem \text{s.t . } & \sum_{k=1}^k x_k^n=1,~n=1,2, ... ,n,\\[5pt ] & \sum_{k=1}^k x_k^{n+1}=(k-1)n,\\[5pt ] & \sum_{n=1}^n x_k^n = n,~k=1,2, ... ,k,\\[5pt ] & x_k^n=\left\{0,1\right\},~k=1,2, ...,k,~n=1,2, ... ,n .\end{array}\ ] ] in the above problem , the binary variable if user transmits full power on subcarrier and if user does not transmit any power on subcarrier .the constraint and the constraint implies that at most one user is allowed to transmit power on each subcarrier .however , one user is allowed to transmit on multiple subcarriers .the variables are auxiliary dummy variables , since for all moreover , we know from ( * ? ? ?* theorem 13.3 and its corollary ) that problem is equivalent to the linear program \text{s.t . }& \sum_{k=1}^k x_k^n=1,~n=1,2, ... ,n,\\[5pt ] & \sum_{k=1}^k x_k^{n+1}=(k-1)n,\\[5pt ] & \sum_{n=1}^n x_k^n = n,~k=1,2, ... ,k,\\[5pt ] & 0\leq x_k^n\leq 1,~k=1,2, ... ,k,~n=1,2, ... ,n .\end{array}\]]the equivalence between problems and is because the linear equality constraint in satisfies the so - called _ totally unimodular _property , and thus all the vertices of the feasible set of problem are integer . since the linear program is polynomial timesolvable , the sum - rate maximization problem is polynomial time solvable .further , suppose is the optimal solution to problem , then the optimal solution to problem is it is worthwhile remarking that the so - called -algorithm ( * ? ? ?* section 7.4 ) can efficiently solve problem , although it is not a polynomial time algorithm .remark 7 : in a similar fashion , we can show that the _ weighted _ sum - rate maximization problem \text{s.t . 
} & p_k^n\geq p_k^n\geq 0,~k\in{{\cal k}},~n\in{{\cal n}},\\[5pt ] & p_k^np_j^n=0,~\forall~j\neq k,{~k,\,j\in{{\calk}},}~n\in{{\cal n}}\end{array}\]]is also polynomial time solvable , where are non - negative weights . again , we can transform problem into the hitchcock problem in polynomial time .the corresponding parameters of the hitchcock problem are the same as before , except for and we summarize the main results in this section as the following theorem .the following statements are true .* problem is strongly polynomial time solvable when the number of users is equal to the number of subcarriers .* problem is polynomial time solvable when there is only one user in the system .* problem is strongly polynomial time solvable when there is only one user in the system . *the ( weighted ) sum - rate maximization problem without the total power constraint is polynomial time solvable . in this section ,we have identified four subclasses of problems and which are ( strongly ) polynomial time solvable . by doing so, we successfully pick a subset of computationally tractable problems within the general class of strongly np - hard joint subcarrier and power allocation problems .dynamic allocation of subcarrier and power resources in accordance with channel and traffic load changes can significantly improve the network throughput and spectral efficiency of the multi - user multi - carrier communication system where a number of users share some common discrete subcarriers .a major challenge associated with joint subcarrier and power allocation is to find , for a given channel state , the globally optimal subcarrier and power allocation strategy to minimize the total transmission power or maximize the system utility function .this paper mainly studies the computational challenges of the joint subcarrier and power allocation problem for the multi - user ofdma system .we have shown that the general joint subcarrier and power allocation problem for the multi - user ofdma system is strongly np - hard .the complexity result suggests that we should abandon efforts to find globally optimal subcarrier and power allocation strategy for the general multi - user ofdma system unless for some special cases ( i.e. , the case when there is only one user in the system , or the case when the number of users is equal to the number of subcarriers ) .the problem is shown to be ( strongly ) polynomial time solvable in these special cases . 
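as a concrete illustration of the k = n special case discussed above, the assignment reformulation is easy to experiment with. the sketch below builds a cost matrix whose (k, n) entry is taken to be the minimum power user k needs to meet its rate target on subcarrier n alone, with a large penalty marking infeasible pairs -- this is our reading of the edge-weight definition given earlier, not a verbatim reproduction -- and then solves the resulting linear assignment problem with scipy's linear_sum_assignment (a hungarian-type routine). all channel gains, noise powers, power caps and rate targets are hypothetical.

```python
# sketch of the K = N assignment reformulation: cost (k, n) = minimum power for user k
# to meet its rate target on subcarrier n alone, or a large penalty if infeasible.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
K = N = 6                                    # number of users = number of subcarriers
alpha = rng.uniform(0.2, 2.0, size=(K, N))   # hypothetical direct-link channel gains
eta = rng.uniform(0.5, 1.5, size=(K, N))     # hypothetical noise powers
cap = np.full((K, N), 4.0)                   # hypothetical per-subcarrier power budgets
gamma = rng.uniform(0.5, 1.5, size=K)        # hypothetical rate targets (bits/s/Hz)

# log2(1 + alpha*p/eta) >= gamma  <=>  p >= (2**gamma - 1) * eta / alpha
p_min = (2.0 ** gamma[:, None] - 1.0) * eta / alpha
penalty = cap.sum() + 1.0                    # flags pairs that cannot meet the target
w = np.where(p_min <= cap, p_min, penalty)

row, col = linear_sum_assignment(w)          # minimum-cost perfect matching, O(N^3)
chosen = w[row, col]
if np.any(chosen >= penalty):
    print("infeasible: some user cannot meet its rate target on any single subcarrier")
else:
    for k, n in zip(row, col):
        print(f"user {k} -> subcarrier {n}, power {w[k, n]:.3f}")
    print(f"total transmission power: {chosen.sum():.3f}")
```

if the optimal matching uses a penalized edge, no feasible one-to-one subcarrier assignment exists, mirroring the infeasibility certificate mentioned above.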
in a companion paper, we shall design efficient algorithms to solve the joint subcarrier and power allocation problem .the first author wishes to thank professor zhi - quan ( tom ) luo of university of minnesota for inviting him to visit xidian university , where professor luo organized a summer seminar on optimization and its applications to signal processing and communications from may 21 to august 19 , 2012 .this work was started from then .the first author also would like to thank dr .peng liu at xidian university and dr .qiang li at chinese university of hong kong for many useful discussions .before going into very details , let us first give a high level preview of the proof .the basic idea of proving the strong np - hardness of problem with is to reduce it to the one with more specifically , we first partition all users into two types ( type - i and type - ii ) and also all subcarriers into two types ( type - i and type - ii ) , where the number of type - i subcarriers is required to be twice as large as the number of type - i users .then , we construct `` good '' channel parameters between type - i ( type - ii ) users and type - i ( type - ii ) subcarriers while `` bad '' channel parameters between type - i ( type - ii ) users and type - ii ( type - i ) subcarriers such that the only way for all users to satisfy their transmission rate requirements is that type - i ( type - ii ) users will only transmit power on type - i ( type - ii ) subcarriers .in addition , in our construction , all type - ii users and type - ii subcarriers are dummy ones , since all type - ii users transmission rate requirements can easily be satisfied by transmitting full power on type - ii subcarriers . in this way , the problem of checking the feasibility of problem with reduces to the one of checking whether all type - i users transmission rate targets can be met or not , where the number of type - i subcarriers is twice larger than the number of type - i users ( ) as required in the partition .below is the detailed proof of theorem [ thm - feasi2 ] . by lemma [ thm - feasi ], we only need to consider the following two cases : ( i ) and ( ii ) . in case ( i ) , we partition users into type - i users and type - ii users , and subcarriers into type - i subcarriers and type - ii subcarriers .we construct the channel parameters of type - i users on type - i subcarriers in the same way as in the proof of lemma [ thm - feasi ] where .furthermore , noise powers , direct - link channel gains , and power budgets of all type - ii users on type - ii subcarriers are set to and respectively ; these parameters of all type - i users on type - ii subcarriers and all type - ii users on type - i subcarriers are set to and respectively .all type - ii users desired transmission rate targets are set to be see fig .[ plot ] for the corresponding ofdma system .our construction is such that the channel condition of the type - i ( type - ii ) users on the type - i ( type - ii ) subcarriers is reasonably better than the one of the type - i ( type - ii ) users on the type - ii ( type - i ) subcarriers .one can check that the only possible way for all users to meet their transmission rate targets is that , each type - ii user transmits full power on ( any ) one type - ii subcarrier ( actually type - ii users and subcarriers are dummy users and subcarriers ) and all type - i users appropriately transmit power on type - i subcarriers . 
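the reductions in this appendix all start from the 3-dimensional matching problem. to make the combinatorial core concrete in isolation, the following brute-force check of a small, hypothetical 3dm instance may help; it runs in exponential time and is purely illustrative, not part of the polynomial-time reduction itself.

```python
# brute-force check of a 3-dimensional matching (3DM) instance: do some q triples
# cover every x-, y- and z-element exactly once?  exponential time, for intuition only.
from itertools import combinations

def has_perfect_3d_matching(triples, q):
    for subset in combinations(triples, q):
        xs, ys, zs = zip(*subset)
        if len(set(xs)) == q and len(set(ys)) == q and len(set(zs)) == q:
            return True
    return False

# hypothetical instance with q = 3; elements are labelled 0, 1, 2 in each coordinate
triples = [(0, 0, 0), (0, 1, 2), (1, 1, 1), (1, 2, 0), (2, 2, 2), (2, 0, 1)]
print(has_perfect_3d_matching(triples, 3))   # True: e.g. {(0,0,0), (1,1,1), (2,2,2)}
```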
by lemma [ thm - feasi ] , however , checking whether all type - i users transmission rate requirements can be satisfied is strongly np - hard . where .noise powers , direct - link channel gains , power budgets of type - ii user(s ) on type - ii subcarriers are set to and respectively ; these parameters of type - i users on type - ii subcarriers and type - ii user(s ) on type - i subcarriers are set to and respectively . for case ( i ) , the transmission rate targets of type - ii users are all set to for case ( ii ) , the transmission rate target of the single type - ii user is set to ,width=366 ] in case ( ii ) , we partition users into type - i users and type - ii user , and subcarriers into type - i subcarriers and type - ii subcarriers .we construct the channel parameters of type - i users on type - i subcarriers in the same way as in the proof of lemma [ thm - feasi ] where moreover , noise powers , direct - link channel gains , and power budgets of the single type - ii user on type - ii subcarriers are set to and respectively ; these parameters of all type - i users on type - ii subcarriers and the single type - ii user on type - i subcarriers are set to and respectively .the transmission rate of the single type - ii user is required to be not less than see fig .[ plot ] for the corresponding system . due to special construction of the system, one can check that the only way for all users to meet their transmission rate targets is that , the single type - ii user transmits full power on all type - ii subcarriers and all type - i users appropriately transmit power on type - i subcarriers .again , by lemma [ thm - feasi ] , checking whether all type - i users transmission rate requirements can be satisfied is strongly np - hard .we first prove the strong np - hardness of problem for the special case and then prove their strong np - hardness for the general case * we first consider the case .* for any instance of the -dimensional matching problem , we construct the same system as in the proof of lemma [ thm - feasi ] and set * strong np - hardness of problem with * lemma [ thm - feasi ] directly implies that the following problem \text{s.t . } & \displaystyle \sum_{n\in{{\cal n}}}p_k^{n}\leq p_{k},~k\in{{\cal k } } , \\[12pt ] & p_k^n\geq p_k^n\geq 0,~k\in{{\cal k}},~n\in{{\cal n}},\\[5pt ] & p_k^np_j^n=0,~\forall~j\neq k,{~k,\,j\in{{\cal k}},}~n\in{{\cal n}}\end{array}\ ] ] is strongly np - hard , since the problem of checking whether its optimal value is greater than or equal to is strongly np - hard . * strong np - hardness of problem with * we prove that the sum - rate maximization problem \text{s.t . } & \displaystyle \sum_{n\in{{\cal n}}}p_k^{n}\leq p_{k},~k\in{{\cal k } } , \\[12pt ] & p_k^n\geq p_k^n\geq 0,~k\in{{\cal k}},~n\in{{\cal n}},\\[5pt ] & p_k^np_j^n=0,~\forall~j\neq k,{~k,\,j\in{{\cal k}},}~n\in{{\cal n}}\end{array}\ ] ] is also strongly np - hard . in particular , we show that checking the optimal value of problem is greater than or equal to is strongly np - hard , where is the transmission rate of user at the solution of problem . to this aim , consider the following relaxation of problem , \text{s.t . } & \displaystyle \sum_{k\in{{\cal k}}}\sum_{n\in{{\cal n}}}p_k^{n}\leq 5k , \\[13pt ] & p_k^n\geq 0,~k\in{{\cal k}},{~k,\,j\in{{\cal k}},}~n\in{{\cal n}},\\[5pt ] & p_k^np_j^n=0,~\forall~j\neq k,~n\in{{\cal n } } , \end{array}\ ] ] where due to the ofdma constraint , we see that problem is equivalent to \text{s.t . 
} & \displaystyle \sum_{n\in{{\cal n}}}p^{n}\leq 5k,\\[12pt ] & p^n\geq 0,~n\in{{\cal n}}. \end{array}\ ] ] noticing that problem is convex , we can obtain its optimal solution and its optimal value therefore , the optimal value of the original problem is less than or equal to and the equality holds if and only if the latter holds if and only if the answer to the -dimensional matching problem is yes . therefore , the optimal value of problem is greater than or equal to if and only if the answer to the -dimensional matching problem is yes .* strong np - hardness of problem with and * for the cases and , notice that for all , , , and the equalities hold if and only if therefore , the optimal value of or is greater than or equal to if and only if the answer to the -dimensional matching problem is yes .this implies the strong np - hardness of all the four utility maximization problems with .now we consider the general case we show the strong np - hardness of problem for the general case by constructing some dummy users and subcarriers as in the proof of theorem [ thm - feasi2 ] .take the sum - rate maximization problem as an example .it is simple to check that * in case the sum - rate utility function of the constructed system is greater than or equal to if and only if the given instance of the -dimensional matching problem with size has a positive answer ; * in case the sum - rate utility function of the constructed system is greater than or equal to if and only if the given instance of the -dimensional matching problem with size has a positive answer . hence , the sum - rate maximization problem in the case that is strongly np - hard .similar results also hold true for the other three utility functions .we omit the proof for brevity .to show is the solution to problem , let us first write down the kkt condition of problem .suppose are the lagrangian multipliers corresponding to the constraints and respectively , then the kkt condition of problem is given as follows & \displaystyle \sum_{n\in{{\cal n}}}p^n\leq p,~\lambda\geq 0,~\lambda\left(\sum_{n\in{{\cal n}}}p^n - p\right)=0,\\[15pt ] & p^n\geq p^n,~\xi^n\geq 0,~(p^n - p^n)\xi^n=0,~n\in{{\cal n}},\\[6pt ] & p^n\geq 0,~\nu^n\geq0,~p^n\nu^n=0,~n\in{{\cal n}}. \end{array}\right.\ ] ] since problem is convex for any fixed , it follows that the kkt condition is necessary and sufficient for to solve problem .therefore , to show is the solution to problem , it suffices to show that there exist appropriate lagrangian multipliers such that satisfy the kkt system .next , we construct appropriate such that in together with the constructed satisfy all the conditions in . specifically , we choose such that the objective value of problem is equal to and * if in , then we set and * if in , then we set * if in , then set and it is simple to check constructed in the above satisfies the kkt system . 
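as a numerical aside, the kkt construction above -- and the capped water-filling form of the closed-form solution stated at the beginning of this part -- can be reproduced by bisecting on the water level until the total power budget is exhausted. the sketch below does this for hypothetical channel data, assuming our reading of the single-user problem as sum-rate maximization under a total power budget and per-subcarrier caps.

```python
# capped water-filling sketch: bisect on the water level b so that the clipped powers
# exhaust the total budget P, then report the achieved rates.  all numbers hypothetical.
import numpy as np

rng = np.random.default_rng(1)
N = 8
alpha = rng.uniform(0.3, 2.0, size=N)    # channel gains of the single user
eta = rng.uniform(0.5, 1.5, size=N)      # noise powers
cap = rng.uniform(0.5, 2.0, size=N)      # per-subcarrier power budgets
P = 5.0                                  # total power budget

def powers(b):
    # piecewise rule from the closed form: zero, water level minus eta/alpha, or full cap
    return np.clip(b - eta / alpha, 0.0, cap)

if cap.sum() <= P:                       # total budget not binding: transmit at the caps
    p = cap.copy()
else:                                    # otherwise bisect on the water level b
    lo, hi = 0.0, (eta / alpha + cap).max()
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if powers(mid).sum() < P else (lo, mid)
    p = powers(0.5 * (lo + hi))

rate = np.log2(1.0 + alpha * p / eta)
print("allocated powers:", np.round(p, 3))
print("total power used: %.4f (budget %.1f)" % (p.sum(), P))
print("sum rate: %.4f bits/s/Hz" % rate.sum())
```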
hence , is the solution to problem .in combinatorics , a stirling number of the second kind , denoted by is the number of ways to partition a set of objects into nonempty subsets , where .stirling numbers of the second kind obey the following recursive relation with initial conditions and for any to understand this formula , observe that a partition of objects into nonempty subsets either contains the -th object as a singleton ( which corresponds to the term in ) or contains it with some other elements ( which corresponds to the term in ) .next , we claim by induction that in fact , when we obtain this is because that dividing elements into sets means dividing it into one set of size and sets of size therefore , we only need to pick those two elements .assume that we show that by invoking , we have = & s(k+c-1,k-1)+k{s(k+c-1,k)}\\[3pt ] \leq&s(k+c-1,k-1)+k{\left(k+c-1\right)}^{2(c-1)}\\[3pt ] \leq&s(k+c-1,k-1)+{\left(k+c\right)}^{2c-1}\\[3pt ] \leq&\sum_{k=1}^k \left(k+c\right)^{2c-1}\\[3pt ] \leq&(k+c)^{2c } , \end{array}\]]which shows that holds true .h. dahrouj and w. yu , `` coordinated beamforming for the multicell multi - antenna wireless system , '' _ ieee trans .wireless commun ._ , vol . 9 , no .. 17481759 , may 2010 . y .-f . liu , y .- h .dai , and z .- q .luo , `` coordinated beamforming for miso interference channel : complexity analysis and efficient algorithms , '' _ ieee trans . signal process .3 , pp . 11421157 , mar .2011 .f. bernardo , r. agust , j. cordero , and c. crespo , `` self - optimization of spectrum assignment and transmission power in ofdma femtocells , '' in _ proc .advanced international conference on telecommunications _ , may , 2010 , pp . 404409 .i. c. wong and b. l. evans , `` optimal downlink ofdma resource allocation with linear complexity to maximize ergodic rates , '' _ ieee trans .wireless commun ._ , vol . 7 , no .962971 , mar .2008 . s. hamouda , s. tabbane , and p. godlewski , `` improved reuse partitioning and power control for downlink multi - cell ofdma systems , '' in _ proc . international workshop on broadband wireless access for ubiquitous networking , _ sept .2006 . l. m. c. hoo , b. halder , j. tellado , and j. m. cioffi , `` multiuser transmit optimization for multicarrier broadcast channels : asymptotic fdma capacity region and algorithms , '' _ ieee trans . commun .52 , no . 6 , pp . 922930 , june 2004 .c. y. wong , r. s. cheng , k. b. lataief , and r. d. murch , `` multiuser ofdm with adaptive subcarrier , bit , and power allocation , '' _ ieee j. sel .areas in commun . , _ vol .10 , pp . 17471758 ,1999 .s. v. hanly , l. l. h. andrew , and t. thanabalasingham , `` dynamic allocation of subcarriers and transmit powers in an ofdma cellular network , '' _ ieee trans .theory , _ vol .54455462 , dec . 2009 . c. mohanram and s. bhashyam , `` a sub - optimal joint subcarrier and power allocation algorithm for multiuser ofdm , '' _ ieee commun .lett . , _ vol . 9 , no . 8 , pp . 685687 , aug .k. e. baamrani , a. a. ouahman , v. p. g. jimnez , a. g. armada , and s. allaki , `` subcarrier and power allocation for the downlink of multiuser ofdm transmission , '' _ wireless personal communications , _ vol .457465 , dec . 2006 .l. tang , h. wang , q. chen , and g. m. liu , `` subcarrier and power allocation for ofdm - based cognitive radio networks , '' in _ proc .ieee international conference on communications technology and applications , _ oct .2009 , pp . 457461 .m. s. alam , j. w. mark , and x. 
shen , `` relay selection and resource allocation for multi - user cooperative ofdma networks , '' _ ieee trans .wireless commun . ,5 , pp . 21932205 , may 2013 . m. dong , q. yang , f. fu , and k. s. kwak , `` joint power and subchannel allocation in relay aided multi - cell ofdma networks , '' in _ proc . international symposium on communications and information technologies , _ oct .2011 , pp .. g. huang , j. he , and q. zhang , `` research on adaptive subcarrier - bit - power allocation for ofdma , '' in _ proc .international conference on wireless communications , networking and mobile computing , _ sept .2009 , pp .g. a. s. sidhu , f. gao , and a. nallanathan , `` a joint resource allocation scheme for multi - relay aided uplink multi - user ofdma system , '' in _ proc . international conference on wireless communications and signal processing , _ oct .2010 , pp .j. li , x. chen , c. botella , t. svensson , and t. eriksson , `` resource allocation for ofdma systems with multi - cell joint transmission , '' in _ proc .ieee international workshop on signal processing advances in wireless communications , _ june 2012 , pp .b. danobeitia , g. femenias , and f. riera - palou , `` an optimization framework for scheduling and resource allocation in multi - stream heterogeneous mimo - ofdma wireless networks , '' _ ifip , wireless days , _ nov .2012 , pp . 1 - 3 .f. wu , y. mao , x. huang , and s. leng , `` a joint resource allocation scheme for ofdma - based wireless networks with carrier aggregation , '' in _ ieee wireless communications and networking conference , _ apr .2012 , pp . 12991304 .m. belleschi , p. detti , and a. abrardo , `` complexity analysis and heuristic algorithms for radio resource allocation in ofdma networks , '' in _ proc .international conference on telecommunications _ , may 2011 , pp .. w. jiang , z. zhang , x. sha , and l. sun , `` low complexity hybrid power distribution combined with subcarrier allocation algorithm for ofdma , '' _j. systems engineering and electronics , _ vol .22 , no . 6 , pp .879884 , dec .r. o. afolabi , a. dadlani , and k. kim , `` multicast scheduling and resource allocation algorithms for ofdma - based systems : a survey , '' _ ieee communications surveys & tutorials , _ vol .240254 , first quarter 2013 .k. sumathi and m. l. valarmathi , `` resource allocation in multiuser ofdm systems a survey , '' in _ proc .international conference on computing communication & networking technologies , _july 2012 , pp .liu , m. hong , and y .- h .dai , `` max - nin fairness linear transceiver design problem for a multi - user simo interference channel is polynomial time solvable , '' _ ieee signal process .2730 , jan . 2013 .luo and s. zhang , `` duality gap estimation and polynomial time approximation for optimal spectrum management , '' _ ieee trans .signal process ._ , vol .57 , no .7 , pp . 26752689 , jul . 2009. w. yu and r. lui , `` dual methods for nonconvex spectrum optimization of multicarrier systems , '' _ ieee trans .54 , no . 7 , pp .13101322 , july 2006 .r. cendrillon , m. moonen , j. verliden , and t. bostoen , and w. yu , `` optimal multiuser spectrum management for digital subscriber lines , '' in _ proc .ieee international conference on communications _ ,june 2004 , vol .1 , pp . 15 .t. m. cover and j. a. thomas , _ elements of information theory , _new york , u.s.a . :john wiley & sons , inc .a. ben - tal and a. 
nemirovski , _ lectures on modern convex optimization ._ philadelphia , u.s.a .: siam - mps series on optimization , siam publications , 2001 .s. a. vavasis and y. ye , a primal - dual interior point method whose running time depends only on the constraint matrix , " _ math . prog .74 , pp . 79120 , 1996 .a. mohr and t. d. porter , `` applications of chromatic polynomials involving stirling numbers , '' _ journal of combinatorial mathematics and combinatorial computing , _ vol .5764 , 2009 . | consider a multi - user orthogonal frequency division multiple access ( ofdma ) system where multiple users share multiple discrete subcarriers , but at most one user is allowed to transmit power on each subcarrier . to adapt fast traffic and channel fluctuations and improve the spectrum efficiency , the system should have the ability to dynamically allocate subcarriers and power resources to users . assuming perfect channel knowledge , two formulations for the joint subcarrier and power allocation problem are considered in this paper : the first is to minimize the total transmission power subject to quality of service constraints and the ofdma constraint , and the second is to maximize some system utility function ( including the sum - rate utility , the proportional fairness utility , the harmonic mean utility , and the min - rate utility ) subject to the total transmission power constraint per user and the ofdma constraint . in spite of the existence of various heuristics approaches , little is known about the computational complexity status of the above problem . this paper aims at filling this theoretical gap , i.e. , characterizing the complexity of the joint subcarrier and power allocation problem for the multi - user ofdma system . it is shown in this paper that both formulations of the joint subcarrier and power allocation problem are strongly np - hard . the proof is based on a polynomial time transformation from the so - called -dimensional matching problem . several subclasses of the problem which can be solved to global optimality or -global optimality in polynomial time are also identified . these complexity results suggest that there are not polynomial time algorithms which are able to solve the general joint subcarrier and power allocation problem to global optimality ( unless p ) , and determining an approximately optimal subcarrier and power allocation strategy is more realistic in practice . computational complexity , power control , ofdma system , subcarrier allocation , system utility maximization . |
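(a short numerical footnote to the appendix of the paper above: the partition-counting bound used there -- in our reading, that the stirling number of the second kind satisfies s(k+c, k) <= (k+c)^{2c} -- is easy to spot-check from the recursion.)

```python
# spot check of the recursion S(n, k) = S(n-1, k-1) + k*S(n-1, k) and of the bound
# S(K+c, K) <= (K+c)**(2c) used to count subcarrier partitions in the appendix.
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """number of ways to partition n labelled objects into k nonempty subsets."""
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

for K in range(2, 7):
    for c in range(1, 4):
        s = stirling2(K + c, K)
        assert s <= (K + c) ** (2 * c), (K, c)
        print(f"S({K + c},{K}) = {s:>6d}  <=  ({K + c})^{2 * c} = {(K + c) ** (2 * c)}")
```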
in stochastic analysis for diffusion processes , the bismut formula ( also known as bismut elworthy li formula due to ) and the integration by parts formula are two fundamental tools .let , for instance , be the ( nonexplosive ) diffusion process generated by an elliptic differential operator on a riemannian manifold , and let be the associated markov semigroup . for and ,the bismut formula is of type where is the diffusion process starting at point , is a random variable independent of and is the directional derivative along .when the curvature of the diffusion operator is bounded below , this formula is available with explicitly given by and the curvature operator .there exist a number of applications of this formula , in particular , letting be the density ( or heat kernel ) of w.r.t .a nice reference measure , we have , formally , from ( [ 1.1 ] ) one may also derive gradient - entropy estimates of and thus , the following harnack inequality introduced in ( see ) : \\[-10pt ] \eqntext{t>0 , p>1 , x , y\in m , f\in b_b(m),}\end{aligned}\ ] ] where is determined by moments of and thus , independent of .this type of harnack inequality is a powerful tool in the study of contractivity properties , functional inequalities and heat kernel estimates ; see , for example , and references within . on the other hand , to characterize the derivative of in , which is essentially different from that in when is not symmetric w.r.t . , we need to establish the following integration by parts formula ( see ) : for a smooth vector field and some random variable . combining this formula with ( [ 1.1 ] ) ,we are able to estimate the commutator which is important in the study of flow properties ; see , for example , .similar to ( [ 1.1 ] ) , inequality ( [ 1.3 ] ) can be used to derive a formula for and the shift harnack inequality of type \bigr ) ( x ) \mathrm{e}^{c_p(t , x , y ) } , \nonumber\\[-10pt]\\[-10pt ] \eqntext{t>0 , p>1 , x , y\in m , f\in b_b(m),}\end{aligned}\ ] ] where , is the exponential map on the riemannian manifold . differently from usual harnack inequalities like ( [ 1.2 ] ) , in ( [ 1.4 ] ) the reference function , rather than the initial point , is shifted .this inequality will lead to different heat kernel estimates from known ones implied by ( [ 1.2 ] ) . before moving on ,let us make a brief comment concerning the study of these two formulas .the bismut formula ( [ 1.1 ] ) has been widely studied using both malliavin calculus and coupling argument ; cf . and references within .although ( [ 1.3 ] ) also has strong potential of applications , it is , however , much less known in the literature due to the lack of efficient tools . to see that ( [ 1.3 ] )is harder to derive than ( [ 1.1 ] ) , let us come back to where an explicit version of ( [ 1.3 ] ) is established for the brownian motion on a compact riemannian manifold . unlike the bismut formula which only relies on the ricci curvature , driver s integration by parts formula involves both the ricci curvature and its derivatives .therefore , one can imagine that in general ( [ 1.3 ] ) is more complicated ( and hence harder to derive ) than ( [ 1.1 ] ) . 
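before turning to the new coupling argument, it may help to see what ( [ 1.1 ] ) and ( [ 1.4 ] ) say in their simplest explicit instance, the flat heat semigroup p_t f(x) = e f(x + b_t) on the real line: there the bismut weight is just b_t / t, and a direct holder/gaussian computation (ours, valid for this special case only) gives a shift harnack inequality with constant exp(p e^2 / (2 (p - 1) t)). the monte carlo sketch below checks both; it illustrates the shape of the statements only, not the general constructions or constants of this paper.

```python
# monte carlo sanity checks for the flat heat semigroup P_t f(x) = E f(x + B_t):
#  (i)  bismut-type identity   d/dx P_t f(x) = (1/t) E[ f(x + B_t) B_t ]
#  (ii) shift harnack bound    (P_t f(x))^p <= P_t(f^p)(x + e) * exp(p e^2 / (2 (p-1) t))
# both in the zero-curvature special case; the constant in (ii) comes from our own
# gaussian computation and is used purely for illustration.
import numpy as np

rng = np.random.default_rng(2)
t, x = 0.5, 0.3
B = rng.normal(0.0, np.sqrt(t), size=2_000_000)

# (i) gradient via the bismut weight vs. a central finite difference, f = sin
f = np.sin
bismut = np.mean(f(x + B) * B / t)
h = 1e-3
fin_diff = (np.mean(f(x + h + B)) - np.mean(f(x - h + B))) / (2 * h)
exact = np.cos(x) * np.exp(-t / 2)          # P_t sin(x) = e^{-t/2} sin(x)
print(f"bismut weight: {bismut:.4f}, finite diff: {fin_diff:.4f}, exact: {exact:.4f}")

# (ii) shift harnack inequality for a positive test function and a shift e
g = lambda y: 1.0 + np.sin(y) ** 2
p, e = 2.0, 1.0
lhs = np.mean(g(x + B)) ** p
rhs = np.mean(g(x + e + B) ** p) * np.exp(p * e**2 / (2 * (p - 1) * t))
print(f"shift harnack: lhs = {lhs:.4f} <= rhs = {rhs:.4f} -> {lhs <= rhs}")
```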
to establish the integration by parts formula and the corresponding shift harnack inequality in a general framework , in this paper we propose a new coupling argument .in contrast to usual coupling arguments where two marginal processes start from different points and meet at some time ( called the coupling time ) , for the new - type coupling the marginal processes start from the same point , but their difference reaches a fixed quantity at a given time . in the next section , we will introduce some general results and applications on the integration by parts formula and the shift harnack inequality using the new coupling method . the general result obtained in section [ sec2 ]will be then applied in section [ sec3 ] to a class of degenerate diffusion processes , in section [ sec4 ] to delayed sdes and in section [ sec5 ] to semi - linear spdes .we remark that the model considered in section [ sec3 ] goes back to the stochastic hamiltonian system , for which the bismut formula and the harnack inequalities have been investigated in by using both coupling and malliavin calculus .as will be shown in section [ sec2.1 ] with a simple example of this model , for the study of the integration by parts formula and the shift harnack inequalities , the malliavin calculus can be less efficient than the new coupling argument .in section [ sec2.1 ] we first recall the argument of coupling by change of measure introduced in for the harnack inequality and the bismut formula , and then explain how can we modify the coupling in order to derive the integration by parts formula and the shift harnack inequality , and introduce the malliavin calculus for the study of the integration by parts formula . in the second subsection we present some applications of the integration by parts formula and the shift harnack inequalities to estimates of the heat kernel and its derivatives . for a measurable space ,let be the class of all bounded measurable functions on , and the set of all nonnegative elements in .when is a topology space , we always take to be the borel -field , and let [ resp ., be the set of all bounded ( compactly supported ) continuous functions on .if , moreover , is equipped with a differential structure , for any let be the set of all elements in with bounded continuous derivatives up to order , and let . finally , a contraction linear operator on is called a markov operator if it is positivity - preserving with .let and be two probability measures on a measurable space , and let be two -valued random variables w.r.t . a probability space . if the distribution of is , while under another probability measure on the distribution of is , we call a coupling by change of measure for and with changed probability . if and are distributions of two stochastic processes with path space , a coupling by change of measure for and is also called a coupling by change of measure for these processes . in this case and are called the marginal processes of the coupling . now , for fixed , consider the path space } ] .let be a transition probability such that .for any ] on with . in order to establish the harnack inequality , for any two different points ,one constructs a coupling by change of measure for and with changed probability such that .then this implies a harnack inequality of type ( [ 1.2 ] ) if . 
to establish the bismut formula ,let , for example , be a banach space , and .one constructs a family of couplings by change of measure for and with changed probability such that ] , be a family of couplings by change of measure for and with changed probability such that .\ ] ] if and the proof is similar to that introduced above for the harnack inequality and the bismut formula .note that .we have next , by the young inequality ( see , lemma 2.4 ) , for positive we have noting that , we obtain provided and exists in . from theorem [ t2.1 ] and its proof we see that the machinery of the new coupling argument is very clear .so , in applications the key point of the study lies in the construction of new type couplings .next , we explain how one can establish the integration by parts formula using malliavin calculus .let , for example , be the cylindrical brownian motion on an hilbert space w.r.t . a probability space with natural filtration .let ;h \bigr)\dvtx \|h \|_{h^1}^2:=\int_0^t \bigl|h'(s ) \bigr|^2\,\d s<\infty \biggr\}\ ] ] be the cameron martin space .for a measurable functional of , denoted by , such that and gives rise to a bounded linear operator .then we write and call the malliavin gradient of .it is well known that is a densely defined closed operator on ; see , for example , , section 1.3 .let be its adjoint operator , which is also called the divergence operator .[ ma ] let and be introduced above .let and . if there exists such that , then since , we have finally , as the integration by parts formula ( [ iit ] ) and by the young inequality ( see , lemma 2.4 ) imply the derivative - entropy inequality \biggr\}p_tf,\qquad \delta>0\end{aligned}\ ] ] and the -derivative inequality according to the following result it also implies shift harnack inequalities .[ pis ] let be a markov operator on for some banach space .let .let and .then holds for any positive if and only if \\[-14pt ] & & { } \times\exp \biggl[\int_0 ^ 1 \frac{pr}{1+(p-1)s } \beta_e \biggl(\frac{p-1}{r+r ( p-1)s } , \cdot+sre \biggr ) \,\d s \biggr]\nonumber\end{aligned}\ ] ] holds for any positive and .let be a constant .then is equivalent to the proof of ( 1 ) is similar to that of , proposition 4.1 , while ( 2 ) is comparable to , proposition 1.3 . let ] w.r.t . we prove ( [ w12bb ] ) .next , let be fixed , and assume that ( otherwise , simply use to replace ) .then ( [ w12bb ] ) with implies that -pf(z)\biggr\ } \\ & & \qquad= \delta p(f\log f)(z ) + \beta_e(\delta ) pf(z).\end{aligned}\]]therefore , ( [ w12ba ] ) holds .let . for nonnegative , ( [ l2 g] ) implies that noting that we obtain minimizing the right - hand side in , we prove ( [ sht ] ) . on the other hand ,let .without loss of generality we assume that , otherwise it suffices to replace by . then ( [ sht ] ) implies that therefore , ( [ l2 g ] ) holds . to conclude this section, we would like to compare the new coupling argument with known coupling arguments and the malliavin calculus , from which we see that the study of the integration by parts formula and the shift harnack inequality is , in general , more difficult than that of the bismut formula and the harnack inequality . first , when a strong markov process is concerned , for a usual coupling one may ask that the two marginal processes move together after the coupling time , so that to ensure , one only has to confirm that the coupling time is not larger than the given time . 
but for the new coupling argument , we have to prove that at time , the difference of the marginal processes equals to a fixed quantity , which can not be ensured , even if the difference already reached this quantity at a ( random ) time before . from thiswe see that construction of a new - type coupling is , in general , more difficult than that of a usual coupling .second , it is well known that the malliavin calculus is a very efficient tool to establish bismut - type formulas . to see the difficulty for deriving the integration by parts formula using malliavin calculus , we look at a simple example of the model considered in section [ sec3 ] , that is , is the solution to the following degenerate stochastic equation on : where is the one - dimensional brownian motion and . for this modelthe bismut formula and harnack inequalities can be easily derived from both the coupling method and malliavin calculus ; see .we now explain how can one establish the integration by parts formula using malliavin calculus . for fixed and , for example , , to derive the integration by parts formula for the derivative along using theorem [ ma ], one needs to find such that to search for such an element , we note that ( [ ee ] ) implies and where then , ( [ dh ] ) is equivalent to it is , however , very hard to solve from this equation for general . on the other hand , we will see in section [ sec3 ] that the coupling argument we proposed above is much more convenient for deriving the integration parts formula for this example .we first consider for some , and to estimate the density w.r.t .the lebesgue measure for distributions and markov operators using integration by parts formulas and shift harnack inequalities .[ t4.1 ] let be a random variable on such that for some the distribution of has a density w.r.t .the lebesgue measure , which satisfies consequently , for any and any convex positive function , for any , \(1 ) we first observe that if has density , then for any , this implies ( [ dd0 ] ) . to prove the existence of ,let be the distribution density function of , where is the standard gaussian random variable on independent of and .it follows from ( [ 4.1 ] ) that then so , the sequence is bounded in .thus , up to a subsequence , in for some nonnegative function . on the other hand , we have weakly . therefore , .\(2 ) as for the second assertion , noting that for one has it follows from ( [ 4.1 ] ) that next , we consider applications of a general version of the shift harnack .let be a transition probability on a banach space .let be the associated markov operator .let be a strictly increasing and convex continuous function .consider the shift harnack inequality for some and constant . obviously ,if for some , then this inequality reduces to the shift harnack inequality with power , while when , it becomes the log shift harnack inequality .[ t4.2 ] let be given above and satisfy ( [ ph ] ) for all and some nonnegative measurable function on .then consequently : if , then has a transition density w.r.t .the lebesgue measure such that if for some , then let such that . by ( [ ph ] )we have integrating both sides w.r.t . 
and noting that , we obtain this implies ( [ aa0 ] ) .when , ( [ aa0 ] ) implies that since by the strictly increasing and convex properties we have as .now , for any lebesgue - null set , taking we obtain from that therefore , applying ( [ aa2 ] ) to we obtain which goes to zero as .thus is absolutely continuous w.r.t .the lebesgue measure , so that the density function exists , and ( [ aa1 ] ) follows from ( [ aa0 ] ) by taking .finally , let for some .for fixed , let it is easy to see that .then it follows from ( [ aa0 ] ) with that then ( [ aa ] ) follows by letting . finally , we consider applications of the shift harnack inequality to distribution properties of the underlying transition probability .[ t4.3 ] let be given above for some banach space , and let ( [ ph ] ) hold for some , finite constant and some strictly increasing and convex continuous function . is absolutely continuous w.r.t . . if for some strictly increasing positive continuous function on .then the density satisfies for -null set , let .then ( [ ph ] ) implies that , hence since for .therefore , is absolutely continuous w.r.t . . next ,let . applying ( [ ph ] ) for and noting that we obtain the proof is complete by letting .consider the following degenerate stochastic differential equation on : where and are two matrices of order and , respectively , is measurable with for , are invertible -matrices measurable in such that the operator norm is locally bounded and is the -dimensional brownian motion .when this equation is degenerate , and when we set , so that the first equation disappears and thus , the equation reduces to a nondegenerate equation on . to ensure the existence of the transition density ( or heat kernel ) of the associated semigroup w.r.t . the lebesgue measure on , we make use of the following kalman rank condition ( see ) which implies that the associated diffusion is subelliptic , =m ] with in , define then is invertible ; cf . . forany and , let be the ball centered at with radius .[ t3.1 ] assume and that the solution to ( [ 3.1 ] ) is nonexplosive such that } \mathbf e \bigl\{\sup _ { b(x(t ) , y(t ) ; r ) } \bigl|\nabla z(t,\cdot ) \bigr|^2 \bigr\}<\infty,\qquad r>0.\ ] ] let ) ] let solve the equation then it is easy to see that combining this with and ( [ ps ] ) , we see that and therefore , .\ ] ] next , to see that is a coupling by change of measure for the solution to ( [ 3.1 ] ) , reformulate ( [ ccc0 ] ) as where .}\end{aligned}\ ] ] let and .\ ] ] by lemma [ ll ] below and the girsanov theorem , is a -dimensional brownian motion under the probability measure .therefore , is a coupling by change of measure with changed probability .moreover , combining ( [ sol ] ) with the definition of , we see from ( [ uu ] ) that holds in . then the proof is complete by theorem [ t2.1](2 ) .[ ll ] let the solution to ( [ 3.1 ] ) be nonexplosive such that ( [ uu ] ) holds , and let be in ( [ xi ] ) . then for any ] .let .then as .it suffices to show that ,n\ge1}\mathbf e \bigl\ { r_\varepsilon(t \land \tau _ n)\log r_\varepsilon(t\land\tau_n ) \bigr \ } < \infty.\ ] ] by ( [ sol ] ) , there exists such that ,\varepsilon\in[0,1].\ ] ] let . by the girsanov theorem , } ] , ; ; for , ; for strictly positive , according to remark [ rem3.1](b ) , and the jensen inequality , we only need to prove for ] . 
to fix the other reference function in theorem [ t3.1 ] ,let be such that take .\ ] ] then and for .since , we conclude that .therefore , ( [ ps ] ) holds .it is easy to see that \ ] ] holds for some constant .combining this with ( [ q ] ) , ( [ sol ] ) and the boundedness of and , we obtain \\[-8pt ] \bigl|\theta(t ) \bigr|&\le & c\bigl(t^{-(2k+1)}|e_1|+ |e_2| \bigr ) \nonumber\end{aligned}\ ] ] for some constant . from this and theorem [ t3.1 ], we derive the desired assertions . in the situation of corollary [ c3.2 ] .let be the operator norm from to w.r.t .the lebesgue measure on . then there exists a constant such that \\[-12pt ] \eqntext{p>1 , t>0.}\end{aligned}\ ] ] consequently , the transition density of w.r.t .the lebesgue measure on satisfies by corollary [ c3.2](1 ) , ( [ cc ] ) follows from ( [ aa0 ] ) for , and moreover , ( [ hhh ] ) follows from ( [ aa ] ) .[ exa3.1 ] a simple example for theorem [ c3.2 ] to hold is that and are independent of with , and .in this case we have ; that is , the dimension of the generate part is controlled by that of the nondegenerate part . in general , our results allow to be much larger than .for instance , let for some and then and holds for .therefore , assertions in corollary [ c3.2 ] hold for .the purpose of this section is to establish driver s integration by parts formula and shift harnack inequality for delayed stochastic differential equations . in this casethe associated segment processes are functional - valued , and thus , infinite - dimensional . as continuation to section [ sec3 ] ,it is natural for us to study the generalized stochastic hamiltonian system with delay as in , where the bismut formula and the harnack inequalities are derived using coupling .however , for this model it seems very hard to construct the required new - type couplings .so , we only consider here the nondegenerate setting .let be a fixed number , and let ;\mathcal r^d) ] .consider the following stochastic differential equations on : where is the brownian motion on , is measurable such that is locally bounded in , and is measurable with locally bounded .we remark that the local boundedness assumption of is made only for simplicity and can be weakened by some growth conditions as in .now , for any , let be the solution to ( [ 4.1 ] ) for , and let be the associated segment process .let we aim to establish the integration by parts formula and shift harnack inequality for .it turns out that we are only able to make derivatives or shifts along directions in the cameron martin space [ t3.1 ] let and be fixed .for any ) , \vspace*{2pt } \cr \eta'(t - t ) , & \quad if .}\ ] ] let for ] , let solve the equation then it is easy to see that .\ ] ] in particular , . 
next , let .\end{aligned}\ ] ] by the girsanov theorem , under the changed probability , the process \ ] ] is a -dimensional brownian motion .so , is a coupling by change of measure with changed probability .then the desired integration by parts formula follows from theorem [ t2.1 ] since and due to ( [ th0 ] ) , holds in .taking , we have then , \\ & & \qquad\le\frac1 2 \log\mathbf e\exp \biggl [ \frac{2 k(t)^2}{\delta ^2 } \int _ 0^t \bigl|\gamma ( t)-\nabla_{\theta_t}b(t , \cdot ) ( x_t ) \bigr|^2\,\d t \biggr ] \\ & & \qquad\le\frac{2k(t)^2(1+t^2\kappa(t)^2)}{\delta^2 } \biggl(\|\eta\| _ \mathcal h^2 + \frac{|\eta(-\tau)|^2}{t-\tau } \biggr).\end{aligned}\ ] ] then the second result in ( 1 ) follows from the young inequality .\end{aligned}\ ] ] finally , ( 2 ) and ( 3 ) can be easily derived by applying theorem [ t2.1 ] for the above constructed coupling with , and using ( [ th0 ] ) and ( [ llo ] ). from theorem [ t3.1 ] we may easily derive regularization estimates on , the distribution of . for instance , theorem [ t3.1](1 ) implies estimates on the derivative of along for and measurable ; and due to theorems [ t4.3 ] , [ t3.1](2 ) and [ t3.1](3 ) imply some integral estimates on the density for .moreover , since is dense in , the shift harnack inequality in theorem [ t3.1](2 ) implies that has full support on for any and .the purpose of this section is to establish driver s integration by parts formula and shift harnack inequality for semi - linear stochastic partial differential equations .we note that the bismut formula has been established in for a class of delayed spdes , but for technical reasons we only consider here the case without delay .let be a real separable hilbert space , and a cylindrical wiener process on with respect to a complete probability space with the natural filtration .let and be the spaces of all linear bounded operators and hilbert schmidt operators on , respectively .denote by and the operator norm and the hilbert schmidt norm , respectively .consider the following semi - linear spde : where is a linear operator on generating a contractive , strongly continuous semigroup such that . is measurable , and frchet differentiable in the second variable such that is locally bounded in . is measurable and locally bounded , and is invertible such that is locally bounded in .then the equation ( [ eq1 ] ) has a unique a mild solution ( see ) , which is an adapt process on such that let finally , for any , let [ t5.1 ] let and be fixed .let , for ] , let solve the equation then it is easy to see that .\ ] ] in particular , . next , let .\end{aligned}\ ] ] by the girsanov theorem , under the weighted probability , the process \ ] ] is a -dimensional brownian motion .so , is a coupling by change of measure with changed probability .then the desired integration by parts formula follows from theorem [ t2.1 ] since and due to ( [ th ] ) , holds in .this formula implies the second inequality in ( 1 ) due to the given upper bounds on and and the fact that \\ & & \qquad \le\frac\delta2 \log\mathbf e\exp \biggl[\frac2 { \delta^2 } \int_0^t \bigl|\sigma ( t)^{-1 } \bigl(e- \nabla_{e(t)}b(t,\cdot ) \bigl(x(t ) \bigr ) \bigr ) \bigr|^2\,\d t \biggr]p_tf.\end{aligned}\ ] ] finally , since , ( 2 ) and ( 3 ) can be easily derived by applying theorem [ t2.1 ] for the above constructed coupling with .the author would like to thank the referees for helpful comments and corrections . 
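the integration by parts formulas of the last two sections can also be sampled directly. in our reading, specialized to a one-dimensional additive-noise equation dx_t = b(x_t) dt + dw_t without delay and with the linear interpolation theta_s = (s/t) e, the weight reduces to the stochastic integral of e/t - (s/t) e b'(x_s) against the driving brownian motion; for the ornstein-uhlenbeck drift b(x) = -lambda x the identity e[f'(x_t)] e = e[f(x_t) n] can then be checked by a plain euler-maruyama monte carlo, as sketched below. the specialization, the drift and all numerical parameters are ours and are meant only as a sanity check of the shape of the formula.

```python
# monte carlo check of an integration-by-parts weight of the type produced by the
# coupling construction, specialized (our reading) to the additive-noise 1d equation
#   dX_t = -lam * X_t dt + dW_t,  shift e = 1:
#   E[ f'(X_T) ] = E[ f(X_T) * int_0^T (1/T)(1 + lam*s) dW_s ].
import numpy as np

rng = np.random.default_rng(4)
lam, T, x0 = 1.0, 1.0, 0.5
n_paths, n_steps = 200_000, 200
dt = T / n_steps
f, fprime = np.sin, np.cos

X = np.full(n_paths, x0)
N = np.zeros(n_paths)                       # running stochastic integral (the weight)
for i in range(n_steps):
    s = i * dt
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    N += (1.0 + lam * s) / T * dW           # weight uses the same brownian increments
    X += -lam * X * dt + dW                 # euler-maruyama step for the OU process

print(f"E[f'(X_T)]         = {fprime(X).mean():.4f}")
print(f"E[f(X_T) * weight] = {(f(X) * N).mean():.4f}")
```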
| a new coupling argument is introduced to establish driver s integration by parts formula and shift harnack inequality . unlike known coupling methods where two marginal processes with different starting points are constructed to move together as soon as possible , for the new - type coupling the two marginal processes start from the same point but their difference is aimed to reach a fixed quantity at a given time . besides the integration by parts formula , the new coupling method is also efficient to imply the shift harnack inequality . differently from known harnack inequalities where the values of a reference function at different points are compared , in the shift harnack inequality the reference function , rather than the initial point , is shifted . a number of applications of the integration by parts and shift harnack inequality are presented . the general results are illustrated by some concrete models including the stochastic hamiltonian system where the associated diffusion process can be highly degenerate , delayed sdes and semi - linear spdes . |
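(a concrete footnote to section 3 of the paper above: the kalman-type rank condition rank[b, ab, ..., a^{m-1}b] = m used for the degenerate model is the standard controllability condition and is easy to verify numerically; the chain-structured matrices below are hypothetical stand-ins, not the ones of the paper.)

```python
# numerical check of a kalman rank condition rank[B, AB, ..., A^{m-1}B] = m for a
# hypothetical pair (A, B); by cayley-hamilton, powers beyond m-1 add nothing new.
import numpy as np

m, d = 4, 1                                   # degenerate part of dim m, noise of dim d
A = np.diag(np.ones(m - 1), k=1)              # nilpotent shift: a chain of integrators
B = np.zeros((m, d)); B[-1, 0] = 1.0          # noise enters only the last component

ctrl = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(m)])
print("rank of [B, AB, ..., A^{m-1}B] =", np.linalg.matrix_rank(ctrl), "(need", m, ")")
```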
statistical physics tells us that systems of many interacting dynamical units collectively exhibit a behavior which is determined by only a few basic dynamical features of the individual units and of the embedding dimension but independent of all other details .this feature which is specific to critical phenomena , like in continuous phase transitions , is known as universality .there is enough empirical evidence that a number of social phenomena are characterized by simple emergent behavior out of the interactions of many individuals . in recent years, a growing community of researchers have been analyzing large - scale social dynamics to uncover universal patterns and also trying to propose simple microscopic models to describe them , similar to the minimalistic models used in statistical physics .these studies have revealed interesting patterns and behaviors in social systems , e.g. , in elections , growth in population and economy , income and wealth distributions , financial markets , languages , etc .( see refs . for reviews ) .academic publications ( papers , books etc . )form an unique social system consisting of individual publications as entities , containing bibliographic reference to other older publications , and this is commonly termed as _ citation_. the number of citations is a measure of the importance of a publication , and serve as a proxy for the popularity and quality of a publication .there has already been a plethora of empirical studies on citation data , specifically on citation distributions of articles , time evolution of probability distribution of citation , citations for individuals and even their dynamics , and the modeling efforts on the growth and structure of citation networks have produced a huge body literature in network science concerning scale - free networks , and long - time scientific impact .the bibliometric tool of citation analysis is becoming increasingly popular for evaluating the performance of individuals , research groups , institutions as well as countries , the outcomes of which are becoming important in case of offering grants and awards , academic promotions and ranking , as well as jobs in academia , industry and otherwise .since citations serve as a crucial measure for the importance and impact of a research publication , its precise analysis is extremely important .annual citations and impact factor of journals are of key interest , primarily from the point of view of journals themselves , and secondarily from the perspective of authors who publish their papers in them . wide distributions of both annual citations and impact factors are quite well studied .it is quite usual to find that some publications do better than others due to the inherent heterogeneity in the quality of their content , the gross attention on the field of research , the relevance to future work and so on .thus different publications gather citations in time at different rates and result in a broad distribution of citations . 
in 1957 , shockley claimed that the scientific publication rate is dictated by a lognormal distribution , while a later evidence based on analysis of records for highly cited physicists claim that the citation distribution of individual authors follow a stretched exponential .however , an analysis of data from isi claims that the tail of the citation distribution of individual publications decays as a power law with an exponent close to , while a rigorous analysis of 110 years of data from physical review concluded that most part of the citation distribution fits remarkably well to a lognormal .the present consensus lies with the fact that while most part of the distribution does fit to a lognormal , the extreme tail fits to a power law .it has been shown earlier that the distribution of citations to papers within a discipline has a broad distribution , which is universal across broad scientific disciplines , using a relative indicator , where is the average citation within a discipline .however , it has also been shown later that this universality is not absolutely guaranteed .subsequent work on citations and impact factors has revealed interesting patterns of universality , some alternative methods have been proposed and there are also interesting work on citation biases .some studies also report on the possible lack of universality in the citation distribution at the level of articles . a rigorous and detailed study on the citation distributions of papers published in 2005 - 2008 for 500 institutions reveals that using the analysis ref . , universality condition is not fully satisfied , but the distributions are found to be very similar .there have also been studies at the level of countries in the same direction . in this article , we focus on citations received by individual ( i ) academic institutions and ( ii ) academic journals .we perform the analysis primarily for all articles and reviews , as well as all citable documents . 
while institutions can vary in their quality of scientific output measurable in terms of total number of publications , total citations etc ., here we show for the first time that irrespective of the institution s scientific productivity , ranking and research impact , the probability that the number of citations received by a publication is a broad distribution with an universal functional form .in fact , using a relative indicator , where is the average number of citations to articles published by an institution in a certain year , we show that the effective probability distribution function that an article has citations has the same mathematical form .we present evidence for the fact that this holds roughly across time for most institutions irrespective of the scientific productivity of the institution considered .when we carry out a similar analysis on journals , we find similar results .the scaled distributions fit to a lognormal distribution for most of their range .again , we find that these features roughly hold across time and across journals within the same class .the largest citations for academic institutions as well as the journals seem to fit well to a power law .we also present evidence that each of these sampled groups institutions , and journals are distinct with the absolute measure of inequality as computed from their distribution functions , with high absolute inequality for both these sets , the gini coefficients being around and for institutions and journals respectively .we also find that the top % of the articles fetch about % of the total citations for institutions and the top % of the articles fetch about % of the total citations for journals .we collected data from 42 academic institutions across the world .institutions were selected such that they produce considerable amount of papers ( typically 200 or more ) so that reasonable statistics could be obtained . however , there were exceptions for certain years for particular institutions .all papers published with at least one author with the institution mentioned as affiliation were collected .this was done for 4 years 1980 , 1990 , 2000 , 2010 .we also selected 30 popular academic journals across physics , chemistry , biology and medicine .however , for some journals , only 3 years of data could be collected , since they were launched after 1980 .the citable papers considered in this study are articles and reviews , although we compare the results with the same analysis done on all citable documents .we study the data of number of citations to publications from different years , from isi web of science for several ( i ) academic institutions ( research institutes and universities ) and ( ii ) popular journals .it is to be noted that citations to individual publications arrive from any publication indexed in isi web of science and does not mean only internal citations within the journal in which it is published .we analyzed data of science publications from 42 academic institutions and 30 popular journals .we recorded the data for the number of papers published , the total number of citations to each of the publications , for a few years ( 1980 , 1990 , 2000 , 2010 for most cases ) . 
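the rescaling and fitting steps used throughout the analysis that follows -- divide citations by the average, look at the distribution of c / <c>, fit a lognormal to the bulk by least squares and a power law to the extreme tail by maximum likelihood -- can be written down in a few lines. the sketch below runs this pipeline on synthetic lognormal counts, since the isi records themselves are not reproduced here; the 1% tail cut and all parameter values are illustrative choices, not the ones used for the reported fits.

```python
# minimal sketch of the rescale-and-fit pipeline: rescale by the mean, build a
# log-binned empirical pdf, least-squares fit a lognormal to the bulk and estimate
# a power-law tail exponent by maximum likelihood.  synthetic counts only.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
c = np.ceil(rng.lognormal(mean=1.5, sigma=1.2, size=20_000)).astype(int)  # fake citations

x = c / c.mean()                                   # relative indicator c / <c>
bins = np.logspace(np.log10(x.min()), np.log10(x.max()), 30)
pdf, edges = np.histogram(x, bins=bins, density=True)
mids = np.sqrt(edges[:-1] * edges[1:])             # geometric bin centres
keep = pdf > 0

def lognormal(z, mu, sigma):
    return np.exp(-(np.log(z) - mu) ** 2 / (2 * sigma**2)) / (z * sigma * np.sqrt(2 * np.pi))

(mu, sigma), _ = curve_fit(lognormal, mids[keep], pdf[keep], p0=(0.0, 1.0))

xmin = np.quantile(x, 0.99)                        # treat the largest 1% as the tail
tail = x[x >= xmin]
gamma = 1.0 + len(tail) / np.sum(np.log(tail / xmin))   # hill / mle exponent estimate

print(f"lognormal bulk fit: mu = {mu:.3f}, sigma = {sigma:.3f}")
print(f"power-law tail exponent (top 1%): gamma = {gamma:.2f}")
```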
since citations grow with time , we have studied publications which are at least 4 years old ( from 2010 ) or more ( 1980 , 1990 , 2000 ) to rule out any role of transients .we also collect data from academic institutions and journals which have a comparatively large number of publications , so as to produce good statistics , and minimize the effects of aberration that can result from fluctuations of the quantities measured from small data sets .we collected citation data until date for all articles and reviews from a particular year ( e.g. 1980 , 1990 , 2000 , 2010 ) . for each year , the probability distribution of citations for an academic institution was observed to be broad .for instance , fig .[ fig : inst_all_1990]a shows the plot of vs. for various institutions for publications from 1990 .we rescaled the absolute value of citation for each year by the average number of citations per publication , and plotted this quantity against the adjusted probability ( fig .[ fig : inst_all_1990]b ) ( see similar plots for 1980 and 2000 in fig . s1 of si ) .we remarkably find that the distributions collapse into an universal curve irrespective of the wide variation in the academic output of the different institutions .the scaling collapse is good for more than 3 decades of data and over 5 orders of magnitude .the average number of papers , total citations and the average number of citations per publication are shown in table s3 .the rescaled curves fit well to a lognormal \label{eq : ln}\ ] ] with with , for a considerable range of the distribution .however , if one fits a lognormal distribution to individual sets , the range of parameters are quite narrow , lies in the range to , while lies in the range to .the fitting were performed using a least square fitting routine .for lowest values of the abscissa , seems to follow or slowly growing , as .however , the largest citations deviate from the lognormal fit and are better described according to , with ( see si table s6 for exponents for other years ) .the power law exponent has been estimated using the maximum likelihood estimate method ( mle ) . in order to investigateif the distributions for different institutes vary with time , we plot the same for each institution for several years .the rescaled plots show scaling collapse indicating that although the average citations vary over years , the form of the distribution function remain roughly invariant , when scaled with the average number of citations .[ fig : inst_years ] shows the plot for 1990 . to check if this also holds for time - aggregated data , we collected citations for all papers published during the period 2001 - 2005 for the same set of institutions , and repeated the above analysis ( see si . fig .s2 ) .we collected citation data until date for all articles and reviews in individual journals for several years ( e.g. 1980 , 1990 , 2000 , 2010 etc . ) . for each year , the probability distribution of citations was again observed to be broad . as in the case of institutions , we plotted against the adjusted probability ( fig .[ fig : joun_all ] ) . for a particular journal, it is observed that the curves follow similar distributions over years although the average number of papers , total citations and hence the average number of citations vary ( see table s4 in si for details ) .further , we plot the same quantity for a particular year for different journals ( see fig . 
[fig : jour_class_1990 ] for 1990 ) , and find that the curves roughly collapse into an single curve irrespective of the wide variation in the output of the different journals .the bulk of the rescaled distribution fits well to a lognormal form with and , as was observed in case of institutions , while the largest citations fit better to a power law , with ( see si table s6 for exponents for other years ) . to check if this also holds for time - aggregated data , we collected citations for all papers published during the period 2001 - 2005 for the same set of journals and repeated the analysis ( see si . fig .however , we observed that if we consider all citable documents , two distinct classes of journals emerge according to the shape of the distributions to which the curves collapse .the first group is a _general _ class , for which most of the distribution fits well to a lognormal function even quite well for the lowest values of the abscissa .this is similar to what is observed for all journals if we consider only articles and reviews .the other group , which we call the _ elite _ class ( si fig .s5 ) is also broadly distributed but has a distinct and faster monotonic decay compared to the _ general _ class , where , i.e. , with .this divergence at the lowest values of citations also indicate that the _ elite _ journals have a larger proportion of publications with less number of citations although their average number of citations is larger than those for the general class .however , for both the above classes , the largest citations still follow a power law , with for the _ general _ class and for the _ elite _ class .we reason for such a behavior in the _ elite _ journal class is because of a large fraction of uncited documents .if we consider only articles and reviews , is usually 2 - 10% .considering all citable documents , this fraction does not change appreciably for the _ elite _ class of journals , and can be anything in the range 25 - 80% ( see values of in si table s5 ) , and are primarily in the category of news , correspondence , editorials etc .such documents in the _ general _ class is either absent or are very few .we are able to find at least 7 journals ( see si fig .s5 and table s5 ) in the _ elite _ class while most of others belong to the _ general _ class .the power law tail in all distribution suggests that the mechanism behind the popularity of the very highly cited papers is a ` rich gets richer ' phenomena ( see fig . s3 of si for 1980 and 2000 ) . following ref . , we rank all articles belonging to different institutions according to and .we then compute the percentage of publications of each institution that appear in the top % of the global rank .the percentage for each should be around % with small fluctuations if the ranking is good enough .the same is performed for journals .when ranking is done according to unnormalized citations then the frequency distribution of % of papers is wide .however , if the ranking is done according to normalized citations , then the frequency distribution is much narrow .for example , we show the results for institutions and journals in fig . [fig : top10 ] if % .assuming that articles are uniformly distributed on the rank axis , the expected average bin height must be z% with a standard deviation given by where is the number of entries ( institutions or journals ) and is the number of papers for the -th institution or journal . 
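the empirical side of this ranking test can be prototyped as below: every paper is placed on a single global rank axis, either by its raw citation count or by the relative indicator, and the percentage of each institution's (or journal's) papers landing in the global top z% is recorded before being compared with the theoretical spread discussed next. this is a hedged sketch with illustrative names; it does not reproduce the exact binning or the standard-deviation formula quoted above.

```python
import numpy as np

def top_z_shares(citations_by_group, z=10.0, normalise=True):
    """Percentage of each group's papers falling in the global top-z% of the ranking.

    `citations_by_group` maps a group name (institution or journal) to its list of
    per-paper citation counts.  If `normalise` is True, papers are ranked by the
    relative indicator c / <c>_group, otherwise by raw citation counts c.
    """
    scores, labels = [], []
    for name, cites in citations_by_group.items():
        c = np.asarray(cites, dtype=float)
        s = c / c.mean() if normalise else c
        scores.append(s)
        labels.extend([name] * len(c))
    scores = np.concatenate(scores)
    labels = np.asarray(labels)

    cutoff = np.quantile(scores, 1.0 - z / 100.0)   # global top-z% threshold
    in_top = scores >= cutoff
    return {name: 100.0 * in_top[labels == name].mean()
            for name in citations_by_group}

# an unbiased indicator should give shares clustered tightly around z, i.e. a
# visibly smaller spread for normalise=True than for normalise=False
```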
for institutions , when the ranking is done according to we observe that the theoretically calculated value ( from above equation ) of is compared to as computed directly from the fitting , while if the ranking was done according to , is .similarly for journals , computed from the above equation is compared to as computed from the fitting , while if the ranking was done according to , is .this indicates that is indeed an unbiased indicator , as seen earlier .we calculate absolute measures of inequality like the commonly used gini index as well as the -index which tells us that the top cited fraction of papers have fraction of citations , and we report in si tables .s3 , s4 , s5 . for academic institutions , gini index and , which means around citations come from the top papers . for journals , , which means about citations come from the top papers .we further note that gini and indices fluctuate less around respective mean values and as the number of articles and number of citations become large ( fig .[ fig : gk_conv ] ) . for academic institutions ,the values are for gini and .for journals , the values are and .in this article we analyze whether the citations to science publications from academic institutions ( universities , research institutes etc . ) as well as journals are distributed according to some universal function when rescaled by the average number of citations . for institutions , it seems to fit roughly to a log - normal function .the largest citations , however , deviate from the lognormal fit , and follow a power law decay .this rough universality claim is an interesting feature , since for institutions , the quality of scientific output measurable in terms of the total number of publications , total citations etc .vary widely across the world as well as in time .nevertheless , the way in which the number of papers with a certain number of citations is distributed is quite similar , seems to be quite independent of the quality of production / output of the academic institution .although there has been claims that the form of the distribution of citations for different scientific disciplines are the same , albeit deviations , it is also true that each discipline is characterized by a typical average number of citations . as a matter of fact , that different institutions have a varying strength of publication contribution towards different disciplines makes the issue of obtaining a universal function for the resulting ( effective ) distribution of citations ( for the institution ) quite nontrivial . 
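for completeness, the two absolute inequality measures quoted above can be computed directly from a list of per-paper citation counts, as in the sketch below. the k-index convention follows the description given earlier (the top fraction of papers holding the complementary fraction of the citations, i.e. the crossing point of the lorenz curve with the anti-diagonal), and the function names are illustrative.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative counts (0 = perfect equality, 1 = maximal)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    cum = np.cumsum(v)
    # standard formula derived from the area under the Lorenz curve
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def kolkata_k(values):
    """k-index: the fraction k such that the top (1 - k) of the papers hold a k
    fraction of all citations, found as the crossing of the Lorenz curve L(p)
    with the anti-diagonal 1 - p."""
    v = np.sort(np.asarray(values, dtype=float))
    p = np.arange(1, len(v) + 1) / len(v)        # bottom fraction of papers
    L = np.cumsum(v) / v.sum()                   # fraction of citations they hold
    idx = np.argmin(np.abs(L - (1.0 - p)))       # nearest crossing point
    return p[idx]

# usage: g, k = gini(citations), kolkata_k(citations); then the top (1 - k)
# fraction of the articles hold roughly a k fraction of the total citations
```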
in other words ,different academic institutions have a variety in the strength of their academic output , in terms of variation of representations across different disciplines and the amount of citations gathered .this does not necessarily guarantee that the universality which has been already reported across disciplines will still hold when one looks at data from different institutions , rest aside the counter claims about lack of universal character for citation distribution across distinct disciplines .there are already critical studies on the citation distribution of universities using larger data sets , which raises issues on the nature of universality .we observe similar features for academic journals the bulk of the probability distribution fitting reasonably well to a lognormal while the highest cited papers seem to fit well to a power law decay with a similar exponent ( ) .we note that the exponents are consistently less than , the exponent of the full citation distribution , which is due to the fact that our data are very small subsets , which fall short of catching the correct statistical behavior of all of the highest cited papers .our results indicate that dividing citation counts by their average indeed helps to get closer to universal citation distributions .however , the results also indicate that , even after such a rescaling , are substantially larger than the theoretical values compared to while it is for unscaled data for institutions and compared to while it is for unscaled data for journals .this indicates that the universality is not very strong , and holds only in an approximate sense . shows similar evidence for institutions , claiming the absence of universality but pointing out the similarity between the distributions .another previous study on different fields of science also reported that this universality claim does not hold very well for all fields .we further note that the inequality in the distribution of citations of institutions and journals differ quantitatively .as the number of papers and citations increase , the absolute measures of inequality like gini and indices seem to converge to different values for the above two sets .the values of gini index are and for institutions and journals respectively .the index values suggest that the top % of the articles hold about % of the total citations for institutions and the top % of the articles hold about % of the total citations for journals .the authors thank s. biswas for discussions , j .- i .inoue , s. redner and p. sen for useful comments .a.c . and b.k.c .acknowledge support from b.k.c.s j. c. bose fellowship research grant .inoue ji , ghosh a , chatterjee a , chakrabarti bk ( 2015 ) measuring social inequality with quantitative methodology : analytical estimates and empirical data analysis by gini and indices .physica a 429 : 184 - 204 . | citations measure the importance of a publication , and may serve as a proxy for its popularity and quality of its contents . here we study the distributions of citations to publications from individual academic institutions for a single year . the average number of citations have large variations between different institutions across the world , but the probability distributions of citations for individual institutions can be rescaled to a common form by scaling the citations by the average number of citations for that institution . 
we find that this feature seems to be universal for a broad selection of institutions, irrespective of the average number of citations per article. a similar analysis of citations to publications in a particular journal in a single year reveals similar results. we find high absolute inequality for both these sets, the gini coefficients being around and for institutions and journals respectively. we also find that the top % of the articles hold about % of the total citations for institutions, and the top % of the articles hold about % of the total citations for journals. |
multimedia images are widely used in internet communications , so the need for securely transmitting these images over networks has become an essential part of the field of data security . image cryptography plays a vital role in securing confidential images . the purpose of image cryptography is to hide the content of the images by encrypting them so as to make the images unrecognizable to the intruders .one part of this paper deals with a new image cryptosystem using 2d cellular automata .+ in general , there are two different methods to protect an image ; they are ( i ) image shuffling and ( ii ) image encryption .pixels positions are rearranged in image shuffling whereas in image encryption , pixel values and positions are changed . in both the cases it is very essential to check the security of the method .that means the method should be invulnerable to all attacks .poorly protected images will always provide information about the original image in statistical analysis . if the encrypted image is indistinguishable from the random image , statistical analyses will not have advantage to break . so testing the randomness in the pixels of encrypted image is the state of the art . in the literaturealready there are different tests for checking randomness for 1d data .a number of parametric tests are designed for pixel randomness in shuffled and encrypted images .a non - parametric test is developed in this paper for checking randomness in the image pixels , which is the first of its kind . + john von neumann proposed a new emerging concept called cellular automata ( ca) .ca is a discrete model consisting of regular grid of cells , each in one of the finite number of states .according to some fixed rule , the state of each cell will be changed in terms of the state of the current cell and the states of the cells in its neighborhood .like 1d , higher dimension ca also can be defined .it is already proved that some of the rules of ca will be able to generate complex random patterns . 
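as a minimal illustration of the kind of two-dimensional cellular automaton used later in this paper, the sketch below updates a binary lattice with a linear (xor) rule over the nine-cell moore neighborhood under null boundary conditions. the particular rule mask, lattice size and function names are assumptions made for the illustration; the rule-numbering convention of the proposed encryption scheme is not reproduced here.

```python
import numpy as np

def step_2d_ca(grid, weights):
    """One synchronous update of a binary 2D CA under a linear (XOR) rule.

    `grid` is a 0/1 array; `weights` is a 3x3 0/1 mask selecting which of the nine
    Moore-neighbourhood cells (centre included) enter the XOR.  Null boundary
    conditions: cells outside the lattice are treated as 0.
    """
    padded = np.pad(grid, 1, mode="constant", constant_values=0)
    new = np.zeros_like(grid)
    rows, cols = grid.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if weights[di + 1][dj + 1]:
                shifted = padded[1 + di: 1 + di + rows, 1 + dj: 1 + dj + cols]
                new ^= shifted            # XOR accumulation = addition mod 2
    return new

# example rule: XOR of the cell itself and its four von Neumann neighbours
rule_mask = [[0, 1, 0],
             [1, 1, 1],
             [0, 1, 0]]
rng = np.random.default_rng(1)
state = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
for _ in range(10):
    state = step_2d_ca(state, rule_mask)
```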
in the last two decades, 1d and 2d cas have been used in cryptography, and a lot of research is going on in ca-based image cryptography. the paper is organized as follows: section 2 describes the basics of 2d cellular automata and the 2d ca concepts used for our encryption scheme. section 3 presents our new non-parametric test for pixel randomness. section 4 describes our 2d ca based encryption scheme. section 5 presents the simulation results and performance evaluations. section 6 gives the concluding remarks. the behavior of 1d cellular automata extends to two dimensions, where it is also able to produce complicated patterns. this extension is significant since it can be compared with pattern formation in physical systems. a 2d ca is a regular 2d lattice of cells. each cell takes one of n possible values and is updated at each discrete time step according to a rule f that depends on the values of the sites in some neighborhood around it. there are different types of lattices and neighborhood structures in a 2d ca. figure 1 shows the two familiar neighborhood structures, named the von neumann neighborhood and the moore neighborhood. the value of a cell at a given position and time in a 2d ca with a rule f that depends only on the von neumann neighborhood is evolved from the values of the cells in that neighborhood at the previous time step. [ figure 1: (a) von neumann neighborhood and (b) moore neighborhood. ] 1d and 2d ca have been studied by researchers with the help of polynomial algebra and matrix algebra. p. p. choudhury et al. have already studied and designed a new characterization of 2d ca. these concepts are used for our proposed encryption method. the 2d nine-neighborhood (moore) neighborhood is used here for the evolutions. different rules are defined in terms of dependencies from the following rule convention. * table 2: * scores of test images. we have introduced a new test using a non-parametric method in statistics for measuring the randomness in the image pixels. it scores the images depending upon how far their pixels are i.i.d. the newly designed 2d ca based encryption scheme is first analysed by standard evaluation methods, and then the encrypted images are used to validate our newly proposed non-parametric test. simulation results clearly show that this test can be used for evaluating the quality of any image shuffling and image encryption method. the complexity of the method is , so it can be efficiently implemented. in the proposed non-parametric test we have only used the basic run test to measure the randomness. balasuyambu jeyaram, rama raghavan, krishna shankara narayanan, new ca based key generation for a robust rgb color image encryption scheme, international journal of computer applications (0975-8887), volume 80, no. 7, october 2013, pp. 45-50. balasuyambu jeyaram, rama raghavan, new ca based image encryption-scaling scheme using wavelet transform, journal of systemics, cybernetics and informatics, volume 12, number 3, 2014, issn: 1690-4524, pp. 66-71. p. p.
choudhury , birendra kumar nayak , sudhakar sahoo , sunil pankaj rath , theory and applications of two - dimensional , null - boundary , nine - neighborhood , cellular automata linear rules .corr abs/0804.2346 2008 .p. chattopadhyay , p. p. choudhury , characterisation of a particular hybrid transformation of two - dimensional cellular automata , computers and mathematics with applications , elsevier publication , 38 , 1999 , pp.207 - 216 .k. dihidar , p. p.choudhury , matrix algebraic formulae concerning some special rules of two - dimensional cellular automata , international journal on information sciences , elsevier publication , volume 165 , 2004 , pp.91 - 101 . j. d. gibbons and s. chakraborti , nonparametric statistical inference , new york : marcel dekker , 1992 .li , x. , `` a new measure of image scrambling degree based on grey level difference and information entropy . '' , international conference on computational intelligence and security , ieee,1 , 2008,pp.350 - 354. n. h. packard and s. wolfram , two - dimensional cellular automata , journal of statistical physics , 38 ( 5/6 ) , 1985 , pp.901 - 946 . w. pries , a thanailakis and h. c. card , group properties of cellular automata and vlsi applications , ieee trans on computers c-35 , december 1986 , pp.1013 - 1024 .r , bala suyambu . j , a. arokiaraj , s. saravanan , a study of des algorithm with cellular automata , international journal of innovative management , information and production , volume 3 , number 1 , march 2012 , isme international 2011 , issn 2185 - 5439 , pp.10 - 16 .rukhin , a. , soto , j. , nechvatal , j. , smid , m. , barker , e. , leigh , s. , leveson , m. , banks , d. , heckert , a. , dray , j. , and vo , s. , `` a statistical test suit for random and pseudorandom number generators for cryptographic applications , '' nist special publication , 800 - 22 , 2010 .neumann , the theory of self - reproducing automata , ( edited by a. w. burks ) univ . of illinois press urbana , 1996 .s. wegenkittl , `` entropy estimators and serial tests for ergotic chains . '' , ieee trans .theory , vol .2001 , pp.2480 - 2489 .s.wolfram , statistical mechanics of cellular automata , rev mod phys .55 , july 1983 , pp.601 - 644 .wu , y. , zhou , y. , saveriades , g.,agaian , s. , noonan , j. , and natarajan , p. , `` local shanon entropy measure with statistical tests for image randomness '' , elsevier publication , information sciences 2012 .yue wu , sos agaian and joseph p. noonan , `` a novel method of testing image randomness with applications to image shuffling and encryption '' proc .spie 8755 , mobile multimedia / image processing , security and applications , 2013 . | in this paper we have proposed a new test for pixel randomness using non - parametric method in statistics . in order to validate this new non - parametric test we have designed an encryption scheme based on 2d cellular automata . the strength of the designed encryption scheme is first assessed by standard methods for security analysis and the pixel randomness is then determined by the newly proposed non - parametric method . * keywords : * 2d cellular automata , image encryption , pixel randomness , non - parametric test . |
control problems in multi - agent systems have been attracting attention in diverse contexts . in the consensus problem , for example , the objective is to make all agents converge to some common state by designing proper algorithms - , such as the linear consensus protocol here , is the state of agent and are the components of the laplacian matrix , satisfying for all and .the laplacian is associated with the underlying graph , whose links can be directed and weighted . it can be shown that , if the underlying graph has a spanning tree , then all agents converge to a common number , which depends on the initial values . on the other hand ,if it is desired to steer the system to a prescribed consensus value , auxiliary control strategies are necessary . among these , _ pinning control _is particularly attractive because it is easily realizable by controlling only a few agents , driving them to the desired value through feedback action : where denotes the subset of agents where feedback is applied , with cardinality , is the indicator function ( 1 if and 0 otherwise ) , and is the pinning strength . eq .( [ pin ] ) provides the local strategy that pins a few nodes to stabilize the whole network at a common desired value .the following hypothesis is natural in pinning problems and assumed in this paper . *( h ) * each strongly connected component of without incoming links from the outside has at least one node in .the following result is proved in .[ thm0 ] if ( h ) holds , then system ( [ pin ] ) is asymptotically stable at . in many networked systems ,however , time delays inevitably occur due to limited information transmission speed ; so proposition [ thm0 ] does not apply . in this paperwe consider systems with both transmission and pinning delays , for , where denotes the _ transmission delay _ in the network and is the _ pinning delay _ of the controllers .several recent papers have addressed the stability of consensus systems with various delays .it has been shown that consensus can be achieved under transmission delays if the graph has a spanning tree - .however , if a sufficiently large delay is present also in the self - feedback of the node s own state , then consensus may be destroyed ; similar conclusions also hold in cases of time - varying topologies and heterogeneous delays - .the stability of pinning networks with nonlinear node dynamics have been studied in , .however , the role of pinning delay was considered in only a few papers , where it was argued that stability can be guaranteed if the pinning delays are sufficiently small .precise conditions on the pinning delay for stability , the relation to the network topology , and the selection of pinned nodes have not yet been addressed . in this paper , we study the stability of the model ( [ pintransdelay ] ) under both transmission and pinning delays .first , we derive an estimate of the largest admissible pinning delay .next , we consider several specific scenarios and present numerical algorithms to verify stability by calculating the dominant eigenvalue of the system. 
included among the scenarios are the cases when only a single node is pinned in the absence of transmission delay , or when the transmission and pinning delays are identical .finally , we use a perturbation approach to estimate the dominant eigenvalue for very small and very large pinning strengths .a directed graph consists of a node set and a link set .a ( directed ) _ path _ of length from node to , denoted , is a sequence of distinct vertices with and such that for .the graph is called strongly connected if there is a directed path from any node to any other node , and it is said to have a spanning tree if there is a node such that for any other node there is a path from to .we denote the imaginary unit by and the identity matrix by . for a matrix , denotes its element and its transpose .the laplacian matrix is associated with the graph in the sense that there is a link from to in if and only if .we denote the eigenvalues of by .recall that zero is always an eigenvalue , with the corresponding eigenvector ^\top ] , and with .system ( [ pintransdelay ] ) can be rewritten as considering solutions in the form with and , the characteristic equation of ( [ pintransdelaym ] ) is obtained as =0.\label{chpintransdelay}\]]the asymptotic stability of ( [ pintransdelaym ] ) is equivalent to all characteristic roots of ( [ chpintransdelay ] ) having negative real parts .the root having the largest real part will be termed as the dominant root or the dominant eigenvalue . for the undelayed case ,proposition [ thm0 ] can be equivalently stated as follows .[ cor0 ] if ( h ) holds , then all eigenvalues of have negative real parts .we also state an easy observation for later use : [ lem0 ] for any two column vectors , first show that the system ( [ pintransdelaym ] ) is stable for all values of the pinning delay smaller than a certain value .[ thm1 ] assume condition ( h ) .let \ ] ] and define if , then system ( [ pintransdelaym ] ) is stable for all .first , we take and prove stability for all .assume for contradiction that there exists some characteristic root of ( [ chpintransdelay ] ) such that .applying the gershgorin disc theorem to , we have for some , which implies ^{2}+[{{\mathrm{im}}}(\lambda^{*})]^{2}\le l_{ii}^{2}. \label{disc1}\ ] ] since , it must be the case that ; i.e. , . then , and since , gives .this , however , contradicts corollary [ cor0 ] .therefore , when , all characteristic roots of ( [ chpintransdelay ] ) have negative real parts .we now let .suppose ( [ chpintransdelay ] ) has a purely imaginary root , . by ( [ disc0 ] ), we have , for some index , implying ^{2}+[\omega - cd_{q } \sin(\omega\tau_{p})]^{2}}\le l_{qq}.\ ] ] thus , we claim that must be a pinned node . forif , then must be zero , which implies that zero is a characteristic root of ( [ chpintransdelay ] ) , contradicting corollary [ cor0 ] .therefore . in the notation of ,the inequality can then be written as . by ( [ taup ] ) , however , we have that for all , and . we conclude that ( [ chpintransdelay ] ) does not have purely imaginary roots for .thus , by ( * ? ? 
?* theorem 2.1 ) , all characteristic roots of ( [ chpintransdelay ] ) have strictly negative real parts for .proposition [ thm1 ] provides an estimate for the largest admissible pinning delay for which system ( [ pintransdelaym ] ) is stable .this estimate needs only the knowledge of the set of pinned nodes and their weighted in - degrees .we now consider the possibility of controlling the network using a single node , say , the one .then , where denotes the standard basis vector , whose component is one and other components zero .if is nonsingular , the characteristic equation ( [ chpintransdelay ] ) becomes \nonumber\\ = & \det(\lambda i_{n}+k - a\exp(-\lambda\tau_{r}))\nonumber\\ & \det\left[i_{n}+cu_{q}u_{q}^{\top}(\lambda i_{n}+k - a\exp(-\lambda\tau_{r}))^{-1}\exp(-\lambda\tau_{p})\right]\nonumber\\ = & \det(\lambda i_{n}+k - a\exp(-\lambda\tau_{r}))\nonumber\\ & ( 1+c u_{q}^{\top}(\lambda i_{n}+k - a\exp(-\lambda\tau_{r}))^{-1}u_{q}\exp(-\lambda\tau_{p}))\label{eqnxx1}\end{aligned}\ ] ] using lemma [ lem0 ] .then we have the following result . [ prop3 ]assume ( h ) .if all solutions of the equation satisfy , then system ( [ pintransdelaym ] ) is stable . as in the first part of the proof of proposition [ thm1 ] , the equation =0 ] , , be the left eigenvector of corresponding to the zero eigenvalue .let denote the set of positive solutions of the equation with respect to the variable , where and are given by ( [ ab ] ) .define then system ( [ pintransdelaym ] ) is stable for .( [ eqnxx1 ] ) implies that any purely imaginary solution of ( [ chpintransdelay ] ) should also be a solution of ( [ chsing ] ). then must be a real solution of ( [ eqnx2 ] ) . by the definition of , the solution set of ( [ eqnx2 ] ) with respect to is . by the assumption of irreducibility , for all and . if , then the smallest positive solution of ( [ tt ] ) with respect to is . if , on the other hand , , noting that and , the smallest positive solution of ( [ tt ] ) is again . therefore , given , the smallest nonnegative solution of ( [ tt ] ) with respect to should be in the set . since the mapping is a decreasing function of , the quantity defined in ( [ taupp ] ) is the smallest nonnegative solution of ( [ tt ] ) with respect to , given , for ( [ chsing ] ) does not have any purely imaginary solutions . since for all characteristic roots of ( [ chpintransdelay ] ) have negative real parts , we conclude that all roots have negative real parts for . 
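the stability picture established here can also be probed by direct numerical simulation of the delayed pinned system. the sketch below uses a forward euler discretization with a constant pre-history; the sign convention of the laplacian and pinning terms, the constant history and all parameter values are assumptions of the illustration, not a restatement of the exact model. increasing the pinning delay past its admissible range typically turns the convergent trajectories into growing oscillations.

```python
import numpy as np

def simulate_pinned_consensus(L, pinned, c, tau_r, tau_p, s=1.0,
                              T=50.0, dt=1e-3, x0=None, seed=0):
    """Forward-Euler simulation of the delayed pinned system
        x'(t) = -L x(t - tau_r) - c * D * (x(t - tau_p) - s*1),
    with D diagonal, equal to 1 on the pinned nodes and 0 elsewhere.
    The sign convention and the constant history x(t) = x0 for t <= 0
    are modelling assumptions of this sketch.
    """
    n = L.shape[0]
    D = np.zeros(n)
    D[list(pinned)] = 1.0
    rng = np.random.default_rng(seed)
    x0 = rng.uniform(-1.0, 1.0, n) if x0 is None else np.asarray(x0, dtype=float)

    dr, dp = int(round(tau_r / dt)), int(round(tau_p / dt))
    lag = max(dr, dp)
    steps = int(round(T / dt))
    x = np.empty((lag + steps + 1, n))
    x[: lag + 1] = x0                       # constant pre-history
    for k in range(lag, lag + steps):
        x_r = x[k - dr]                     # x(t - tau_r)
        x_p = x[k - dp]                     # x(t - tau_p)
        x[k + 1] = x[k] + dt * (-L @ x_r - c * D * (x_p - s))
    return x[lag:]                          # trajectory on [0, T]

# if the trajectory settles at s for a given tau_p, that pinning delay is admissible
```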
by derivation ,( [ chsing ] ) is independent of the ordering of the eigenvalues or the eigenvectors in .therefore , the bound for allowable pinning delays given in proposition [ single_pin_thm1 ] does not depend on the choice of the pinned node .proposition [ single_pin_thm1 ] suggests an algorithm to calculate : 1 .find the largest positive solution of the equation 2 .calculate ( [ taupp ] ) .we illustrate this approach in an erds - renyi ( e - r ) random network of nodes with linking probability , where the first node is pinned .the left and right eigenvectors of associated with the zero eigenvalue are given by /\sqrt{n} ] be the left eigenvector of corresponding to the eigenvalue , with .let denote the set of all the branches of the solutions of the equation with respect to the variable .then system ( [ pintransdelaym ] ) is stable whenever the real parts of the numbers are all negative , where is the lambert function .proposition [ single_pin_thm2 ] can be proved by transforming ( [ chsing1 ] ) into ( [ eqnx22 ] ) with and using proposition [ prop3 ] .in this section , we consider the extreme situations when the pinning strength is very small or very large .we will employ the perturbation approach in to approximate the eigenvalues and eigenvectors in terms of .the characteristic roots of ( [ chpintransdelay ] ) are eigenvalues of the matrix .hence , when , the characteristic roots of ( [ chpintransdelay ] ) equal to the eigenvalues of . under the condition ( h ), there is a single eigenvalue .we denote the right and left eigenvectors of by and respectively , with .it can be seen that and ( associated with ) are , respectively , the right and left eigenvectors of associated with the zero laplacian eigenvalue .let denote the characteristic roots of ( [ chpintransdelay ] ) and and denote the right and left eigenvectors of , regarded as functions of , with , and . using a perturbation expansion , where denotes terms that satisfy .thus , \tilde{\phi}^{i}(c)\\ & & = \lambda_{i}(c)\tilde{\phi}^{i}(c).\end{aligned}\ ] ] when is sufficiently small , the dominant eigenvalue is , since is the dominant eigenvalue when .hence , we consider . then . comparing the first - order terms in on both sides , . multiplying both sides with and noting that , hence , we have the following result . [ thm4 ]suppose that the underlying graph is strongly connected and at least one node is pinned .then , for sufficiently small , all characteristic roots of ( [ chpintransdelay ] ) have negative real parts and the dominant root is given by since the graph is strongly connected , has a simple zero eigenvalue . when , the dominant root of ( [ chpintransdelay ] ) is .since the roots of ( [ chpintransdelay ] ) depend analytically on , they are given by for all sufficiently small .substituting ( [ lambda11 ] ) into and noting that completes the proof . in order to understand the meaning of ( [ approxc ] ) , consider the special case of an undirected graph with binary adjacency matrix .then , with ^{\top} ] , we have , which equals the _ average degree _ of the graph . in addition, , which is the _ fraction of pinned agents_. 
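a delay-free numerical check of this first-order picture in the pinning strength is sketched below. delays are omitted so that the dominant eigenvalue becomes an ordinary eigenvalue computation; for an undirected graph the slope at small pinning strength then reduces to minus the fraction of pinned nodes, consistent with the perturbation argument. the ring-graph example and function names are illustrative.

```python
import numpy as np

def dominant_eigenvalue_slope(L, pinned, c_values):
    """Dominant (largest real part) eigenvalue of -(L + c D) for a range of small c.

    Delays are omitted, so this only illustrates the zero-delay limit of the
    perturbation argument: for a symmetric Laplacian the dominant eigenvalue
    behaves like -c * |P| / N to first order in c, |P| being the number of
    pinned nodes and N the network size.
    """
    n = L.shape[0]
    D = np.zeros((n, n))
    D[list(pinned), list(pinned)] = 1.0
    dom = []
    for c in c_values:
        eig = np.linalg.eigvals(-(L + c * D))
        dom.append(eig[np.argmax(eig.real)].real)
    return np.array(dom)

# example on a small undirected ring: the slope as c -> 0 should approach -|P|/N
N = 20
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
cs = np.linspace(1e-3, 5e-2, 10)
slope = dominant_eigenvalue_slope(L, pinned=[0, 1], c_values=cs) / cs
```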
then , ( [ approxc ] ) yields the approximation for small , which uses only the pinning fraction and the mean degree of the graph .since the real part of the dominant characteristic value measures the exponential convergence of the system , proposition [ thm4 ] implies that , for sufficiently small , the convergence rate is improved if the number of pinned nodes is increased , the transmission delay is reduced , or the mean degree is decreased .if the graph is directed , a similar statement can be obtained by taking the components of as weights : . to illustrate this result, we employ a numerical method to calculate the real part of , namely , by simulating the system and expressing its exponential convergence rate in terms of its largest lyapunov exponent . in detail , letting , we partition time into disjoint intervals of length , , and define for ] , ^{\top}$ ] , with , , and corresponding to the pinned subset of dimension .then ( [ large_order_1 ] ) becomes \xi^{i}_{2 } & = \mu^{1}_{i}\xi^{i}_{2}\\ \exp(-\mu^{1}_{i}\tau_{r } ) a_{12}\xi^{i}_{2}-\xi^{i,1}_{1 } & = 0 .\end{cases } \label{large_order_1x}\ ] ] we have the following result . [ thm5 ] suppose that the underlying graph is strongly connected and at least one node is pinned .fix , and suppose as . then the dominant root of ( [ large_tau_ch ] ) has the form where is the dominant eigenvalue of the delay - differential equation furthermore , for all sufficiently large .the condition implies that , when , the dominant root of the characteristic equation ( [ large_tau_ch ] ) is zero and corresponds to the eigenspace .so , for sufficiently small , the dominant root of equation ( [ large_tau_ch ] ) and the corresponding eigenvector have the form ( [ large_perturb ] ) , where satisfies the first equation in ( [ large_order_1x ] ) , i.e. , is an eigenvalue of ( [ large_c_equ ] ) . since , ( [ lambda_large_c ] ) follows .moreover , since is diagonally dominant , one can see that under condition ( h ) .therefore , for sufficiently large , all characteristic values of system ( [ pintransdelay ] ) have negative real parts .we note that depends only on the coupling structure of the uncoupled nodes . to illustrate this result , we consider examples with a similar setup as in sec . [ small_pinning ] .we take an e - r graph with nodes and linking probability , and pin nodes .we set and .the real part of the dominant characteristic root of ( [ chpintransdelay ] ) is numerically calculated via the largest lyapunov exponent , using formula ( [ le ] ) .its theoretical estimation comes from theorem [ thm5 ] : , where the largest real part of is similarly calculated from the largest lyapunov exponent of ( [ large_c_equ ] ) .[ fig5 ] shows that as grows large , the real part of the dominant root of ( [ chpintransdelay ] ) obtained from simulations approach the theoretical result , thus verifying proposition [ thm5 ] .we have shown in this paper that the stability of the multi - agent systems with a local pinning strategy and transmission delay may be destroyed by sufficiently large pinning delays .using theoretical and numerical methods , we have obtained an upper - bound for the delay value such that the system is stable for any pinning delay less than this bound . in this case , the exponential convergence rate of the multi - agent , which equals the smallest nonzero real part of the eigenvalues of the characteristic equation , measures the control performance .99 m. h. degroot , reaching a consensus , j. amer .assoc . , 69 ( 1974 ) , 118121 . c. 
reynolds , flocks , herds , and schools : a distributed behavioral model , comput ., 21:4 ( 1987 ) , 2534 .t. vicsek , a. czirk , e. ben - jacob , i. cohen and o. shochet , novel type of phase transition in a system of self - driven particles , phys .lett . , 75 ( 1995 ) , 12261229 . t. p. chen , x. w. liu , w. l. lu .pinning complex networks by a single controller .ieee trans .circuits syst .i , 54:6 ( 2007 ) , 13171326 .w. l. lu , x. li , z. h. rong .global stabilization of complex networks with digraph topologies via a local pinning algorithm .automatica , 46 ( 2010 ) , 116121 . q. song , f. liu , j. cao , w. yu , m - matrix strategies for pinning - controlled leader - following consensus in multiagent systems with nonlinear dynamics , ieee trans .cybern . , 43:6 ( 2013 ) , 16881697 .r. olfati - saber and r. m. murray , consensus problems in networks of agents with switching topology and time - delays , ieee trans .control , 49 ( 2004 ) , 15201533 .f. m. atay , consensus in networks under transmission delays and the normalized laplacian .a , 371 ( 2013 ) , 20120460 . f. m. atay , on the duality between consensus problems and markov processes , with application to delay systems .markov processes and related fields ( in press ) . p .- a .bliman and g. ferrari - trecate , average consensus problems in networks of agents with delayed communications , automatica , 44 ( 2008 ) , 19851995 .l. moreau , stability of multiagent systems with time - dependent communication links , ieee trans .control , 50 ( 2005 ) , 169182 .f. xiao and l. wang , asynchronous consensus in continuous - time multi - agent systems with switching topology and time - varying delays , ieee trans .control , 53 ( 2008 ) , 18041816 .w. l. lu , f. m. atay , j. jost , consensus and synchronization in discrete - time networks of multi - agents with stochastically switching topologies and time delays , networks and heterogeneous media , 6:2(2011 ) , 329349 .m. ulrich , a. papachristodoulou , f. allgwer , generalized nyquist consensus condition for linear multi - agent systems with heterogeneous delays , proc .ifac workshop on estimation and control of networked systems , 2429 , 2009 .s. ruan , j. wei . on the zeros of transcendental functions with applications to stability of delay differential equations with two delays .. discrete impuls .systems , ser .a 10 ( 2003 ) , 863874 . | we study the stability of networks of multi - agent systems with local pinning strategies and two types of time delays , namely the transmission delay in the network and the pinning delay of the controllers . sufficient conditions for stability are derived under specific scenarios by computing or estimating the dominant eigenvalue of the characteristic equation . in addition , controlling the network by pinning a single node is studied . moreover , perturbation methods are employed to derive conditions in the limit of small and large pinning strengths . numerical algorithms are proposed to verify stability , and simulation examples are presented to confirm the efficiency of analytic results . |
media - based modulation ( mbm ) , a promising modulation scheme for wireless communications in multipath fading environments , is attracting recent research attention - .the key features that make mbm different from conventional modulation are : mbm uses digitally controlled parasitic elements external to the transmit antenna that act as radio frequency ( rf ) mirrors to create different channel fade realizations which are used as the channel modulation alphabet , and it uses indexing of these rf mirrors to convey additional information bits .the basic idea behind mbm can be explained as follows .placing rf mirrors near a transmit antenna is equivalent to placing scatterers in the propagation environment close to the transmitter .the radiation characteristics of each of these scatterers ( i.e. , rf mirrors ) can be changed by an on / off control signal applied to it .an rf mirror reflects back the incident wave originating from the transmit antenna or passes the wave depending on whether it is off or on .the on / off status of the mirrors is called as the ` mirror activation pattern ( map ) ' .the positions of the on mirrors and off mirrors change from one map to the other , i.e. , the propagation environment close to the transmitter changes from one map to the other map .note that in a rich scattering environment , a small perturbation in the propagation environment will be augmented by many random reflections resulting in an independent channel .the rf mirrors create such perturbations by acting as controlled scatterers , which , in turn , create independent fade realizations for different maps . if is the number of rf mirrors used , then maps are possible .if the transmitted signal is received through receive antennas , then the collection of -length complex channel gain vectors form the mbm channel alphabet. this channel alphabet can convey information bits through map indexing .if the antenna transmits a symbol from a conventional modulation alphabet denoted by , then the spectral efficiency of mbm is bits per channel use ( bpcu ) .an implementation of a mbm system consisting of 14 rf mirrors placed in a compact cylindrical structure with a dipole transmit antenna element placed at the center of the cylindrical structure has been reported in .early reporting of the idea of using parasitic elements for index modulation purposes ( in the name ` aerial modulation ' ) can be found in , .mbm has been shown to possess attractive performance attributes , particularly when the number of receive antennas is large - .specifically , mbm with rf mirrors and receive antennas over a multipath channel has been shown to asymptotically ( as ) achieve the capacity of parallel awgn channels .this suggests that mbm can be attractive for use in massive mimo systems which typically employ a large number of receive antennas at the bs .however , the literature on mbm so far has focused mainly on single - user ( point - to - point ) communication settings .our first contribution in this paper is that , we report mbm in multiuser massive mimo settings and demonstrate significant performance advantages of mbm compared to conventional modulation . 
for example , a bit error performance achieved using 500 receive antennas at the bs in a massive mimo system using conventional modulation can be achieved using just 128 antennas with multiuser mbm .even multiuser spatial modulation ( sm ) and generalized spatial modulation ( gsm ) - in the same massive mimo settings require more than 200 antennas to achieve the same bit error performance .this suggests that multiuser mbm can be an attractive scheme for use in the uplink of massive mimo systems .the second contribution relates to exploitation of the inherent sparsity in multiuser mbm signal vectors for low - complexity signal detection at the bs receiver .we resort to compressive sensing ( cs ) based sparse recovery algorithms for this purpose .several efficient sparse recovery algorithms are known in the literature- .we propose a multiuser mbm signal detection scheme that employs greedy sparse recovery algorithms like orthogonal matching pursuit ( omp) , compressive sampling matching pursuit ( cosamp ) , and subspace pursuit ( sp) .simulation results show that the proposed detection scheme using sp achieves very good performance ( e.g. , significantly better performance compared to mmse detection ) at low complexity .this demonstrates that cs based sparse signal recovery approach is a natural and efficient approach for multiuser mbm signal detection in massive mimo systems .the rest of the paper is organized as follows .the multiuser mbm system model is introduced in sec .the performance of multiuser mbm with maximum likelihood detection is presented in sec .[ sec3 ] . the proposed sparsity - exploiting detection scheme for multiuser mbm signal detection and its performance in massive mimo systemsare presented in sec .conclusions are presented in sec .consider a massive mimo system with uplink users and a bs with receive antennas ( see fig .[ mbm_mimo ] ) , where is in the tens ( e.g. , ) and is in the hundreds ( ) .the users employ mbm for signal transmission .each user has a single transmit antenna and rf mirrors placed near it . in a given channel use , each user selects one of the mirror activation patterns ( maps ) using information bits .a mapping is done between the combinations of information bits and the maps .an example mapping between information bits and maps is shown in table [ table1 ] for .the mapping between the possible maps and information bits is made known a priori to both transmitter and receiver for encoding and decoding purposes , respectively ..mapping between information bits and maps for . [ cols="^,^,^",options="header " , ] apart from the bits conveyed through the choice of a map in a given channel use as described above , a symbol from a modulation alphabet ( e.g. , qam , psk ) transmitted by the antenna conveys an additional bits .therefore , the spectral efficiency of a -user mbm system is given by for example , a multiuser mbm system with , , and 4-qam has a system spectral efficiency of 16 bpcu .an important point to note here is that the spectral efficiency per user increases linearly with the number of rf mirrors used at each user . 
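the per-user encoding just described splits each block of information bits into mirror-index bits and modulation-symbol bits. the sketch below assumes a natural binary mapping from bits to the mirror activation pattern index (any fixed look-up table agreed between transmitter and receiver would do, as in the example of table 1) and an illustrative 4-qam constellation.

```python
import numpy as np

def mbm_spectral_efficiency(n_rf, mod_order):
    """Bits per channel use per user: n_rf mirror-index bits + log2(M) symbol bits."""
    return n_rf + int(np.log2(mod_order))

def encode_mbm_user(bits, n_rf, constellation):
    """Split one user's bit block into (mirror activation pattern index, symbol).

    The first n_rf bits select the MAP (natural binary mapping, assumed here for
    illustration); the remaining bits select a symbol from `constellation`.
    """
    assert len(bits) == n_rf + int(np.log2(len(constellation)))
    map_index = int("".join(str(b) for b in bits[:n_rf]), 2)
    sym_index = int("".join(str(b) for b in bits[n_rf:]), 2)
    return map_index, constellation[sym_index]

# example: n_rf = 2 mirrors and 4-QAM give 2 + 2 = 4 bpcu per user
qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
bpcu = mbm_spectral_efficiency(n_rf=2, mod_order=4)
map_idx, symbol = encode_mbm_user([1, 0, 0, 1], n_rf=2, constellation=qam4)
```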
to introduce the multiuser mbm signal set and the corresponding received signal vector at the bs ,let us first formally introduce the single - user mbm signal set .the mbm channel alphabet of a single user is the set of all channel gain vectors corresponding to the various maps of that user .let us define , where is the number of possible maps corresponding to rf mirrors .let denote the channel gain vector corresponding to the map of the user , where ^t ] denote the vector comprising of the transmit mbm signal vectors from all the users .let denote the channel gain matrix given by ] , and is the channel gain vector of the user corresponding to map as defined before .the multiuser received signal vector at the bs is then given by where is the awgn noise vector with .in this section , we analyze the ber performance of multiuser mbm under maximum likelihood ( ml ) detection .we obtain an upper bound on the ber which is tight at moderate to high snrs .we also present a comparison between the ber performance of multiuser mbm and those of other multiuser schemes that employ conventional modulation , spatial modulation , and generalized spatial modulation .the ml detection rule for the multiuser mbm system model in ( [ sys ] ) is given by which can be written as the pairwise error probability ( pep ) that the receiver decides in favor of the signal vector when was transmitted , given the channel matrix can be written as defining , we observe that .therefore , we can write where . the conditional pep expression in can be written as where and are entries of and , respectively , and is the column of .the argument of in has the central -distribution with degrees of freedom .the computation of the unconditional peps requires the expectation of with respect to , which can be obtained as follows : \nonumber \\ & = f(\alpha)^{n_r } \sum_{i=0}^{n_r-1 } \binom{n_r-1+i}{i } ( 1-f(\alpha))^i , \end{aligned}\ ] ] where , , and .now , an upper bound on the bit error probability using union bound can be obtained as where is the hamming distance between the bit mappings corresponding to and .we evaluated the ber performance of multiuser mbm ( mu - mbm ) using the ber upper bound derived above as well as simulations .for the purpose of initial comparisons with other systems , we consider a mu - mbm system with , , , bpsk , and 4 bpcu per user .let and denote the number transmit antennas and transmit rf chains , respectively , at each user .note that in the considered mu - mbm system , each user uses one transmit antenna and one transmit rf chain , i.e. , .we compare the performance of the above mu - mbm system with those of three other multiuser systems which use conventional modulation ( cm ) , spatial modulation ( sm ) , and generalized spatial modulation ( gsm ) .the multiuser system with conventional modulation ( mu - cm ) uses at each user and employs 16-qam to achieve the same spectral efficiency of 4 bpcu per user .the multiuser system with sm ( mu - sm ) uses , , and 8-qam , achieving a spectral efficiency of bpcu per user .the multiuser system with gsm ( mu - gsm ) uses , , and bpsk , achieving a spectral efficiency of bpcu per user .figure [ ml ] shows the ber performance of the mu - mbm , mu - cm , mu - sm , and mu - gsm systems described above .first , it can be observed that the analytical upper bound is very tight at moderate to high snrs .next , in terms of performance comparison between the considered systems , the following inferences can be drawn from fig .[ ml ] . 
*the mu - mbm system achieves the best performance among all the four systems considered .for example , mu - mbm performs better by about 5 db , 4 db , 2.5 db compared to mu - cm , mu - sm , and mu - gsm systems , respectively , at a ber of . *the better performance of mu - mbm can be attributed to more bits being conveyed through mirror indexing , which allows mu - mbm to use lower - order modulation alphabets ( bpsk ) compared to other systems which may need higher - order alphabets ( 8-qam , 16-qam ) to achieve the same spectral efficiency . * mu - mbm performs better than mu - gsm though both use bpsk in this example .this can be attributed to the good distance properties of the mbm signal set . , , 4 bpcu per user , and ml detection .analysis and simulations.,width=340,height=236 ] note that though the results in fig .[ ml ] illustrate the performance superiority of mu - mbm over mu - cm , mu - sm , and mu - gsm , they are presented only for a small system with and .this is because ml detection is prohibitively complex for systems with large and ( ml detection is exponentially complex in ) .however , massive mimo systems are characterized by in the tens and in the hundreds . therefore , low - complexity detection schemes which scale well for such large - scale mu - mbm systems are needed . to address this need , we resort to exploiting the inherent sparse nature of the mbm signal vectors , and devise a compressive sensing based detection algorithm in the following section .it is evident from the example signal set in that the mbm signal vectors are inherently sparse .an mbm signal vector has only one non - zero element out of elements , leading to a sparsity factor of .for example , consider an mbm signal set with and . out of 16 elements in a signal vector ,only one element is non - zero resulting in a sparsity factor of .exploitation of this inherent sparsity to devise detection algorithms can lead to efficient signal detection at low complexities .accordingly , we propose a low - complexity mu - mbm signal detection scheme that employs compressive sensing based sparse reconstruction algorithms like omp , cosamp , and sp . we first model the mu - mbm signal detection problem as a sparse reconstruction problem and then employ greedy algorithms for signal detection .sparse reconstruction is concerned with finding an approximate solution to the following problem : where is called the measurement matrix , is the complex input signal vector , is the noisy observation corresponding to the input signal , and is the complex noise vector . the mu - mbm signal detection problem at the bs in ( [ sys ] ) can be modeled as a sparse recovery problem in , with the measurement matrix being the channel matrix , the noisy observation being the received signal vector , and the input being the mu - mbm transmit signal vector .the noise vector is additive complex gaussian with .greedy algorithms achieve sparse reconstruction in an iterative manner .they decompose the problem of sparse recovery into a two step process ; recover the support of the sparse vector first , and then obtain the non - zero values over this support .for example , omp starts with an initial empty support set , an initial solution , and an initial residue . in each step ,omp updates one coordinate of the vector based on the correlation values between the residue vector and the columns of the matrix . 
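the ml benchmark used in this section can be evaluated exactly for very small systems by enumerating every combination of mirror activation pattern and modulation symbol across the users, which makes its exponential complexity in the number of users explicit. the sketch below is such a brute-force reference detector; the i.i.d. complex gaussian channel model, the noise level and all names are illustrative assumptions rather than the simulation setup of the paper.

```python
import numpy as np
from itertools import product

def ml_detect_mu_mbm(y, H_list, constellation):
    """Brute-force ML detection for a small multiuser MBM system.

    `H_list[k]` is an (n_r x M_rf) matrix whose columns are user k's channel gain
    vectors under each mirror activation pattern; `constellation` holds the
    modulation symbols.  Returns the detected (MAP index, symbol index) per user.
    Complexity grows exponentially in the number of users.
    """
    K = len(H_list)
    M_rf = H_list[0].shape[1]
    per_user = list(product(range(M_rf), range(len(constellation))))
    best_cost, best_choice = np.inf, None
    for choice in product(per_user, repeat=K):
        z = sum(H_list[k][:, m] * constellation[s] for k, (m, s) in enumerate(choice))
        cost = np.linalg.norm(y - z) ** 2
        if cost < best_cost:
            best_cost, best_choice = cost, choice
    return best_choice

# example: K = 2 users, 2 mirrors (4 MAPs), BPSK, n_r = 8 receive antennas
rng = np.random.default_rng(3)
n_r, M_rf, bpsk = 8, 4, np.array([1.0, -1.0])
H_list = [rng.standard_normal((n_r, M_rf)) + 1j * rng.standard_normal((n_r, M_rf))
          for _ in range(2)]
tx = [(2, 0), (1, 1)]                         # true (MAP, symbol) per user
y = sum(H_list[k][:, m] * bpsk[s] for k, (m, s) in enumerate(tx))
y = y + 0.05 * (rng.standard_normal(n_r) + 1j * rng.standard_normal(n_r))
detected = ml_detect_mu_mbm(y, H_list, bpsk)
```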
in the iteration , an element given by added to the support set , where is the column of , and and are the support set and residue after iterations , respectively .the entries of corresponding to the obtained support set are computed using least squares .this process is iterated till the stopping criteria is met .the stopping criteria can be either a specified error threshold or a specified level of sparsity . in the sp algorithm , instead of updating one coordinate of at a time as in omp, coordinates are updated at once .the major difference between omp and sp is the following . in omp ,the support set is generated sequentially .it starts with an empty set and adds one element in every iteration to the existing support set .an element added to the support set can not be removed until the algorithm terminates .in contrast , sp provides flexibility of refining the support set in every iteration .cosamp is similar to sp except that it updates coordinates in each iteration to the support set instead of updating coordinates as in sp .cosamp and sp have superior reconstruction capability comparable to convex relaxation methods , .* algorithm 1 * shows the listing of the pseudo - code of the proposed sparsity - exploiting detection algorithm for mu - mbm signals .inputs : initialize : * repeat * * if * * for * * end for * * break * ; * else * * end if * * until * output : the estimated mu - mbm signal vector ^t\ ] ] [ alg1 ] sr in * algorithm 1 * denotes the sparse recovery algorithm , which can be any one of omp , cosamp , and sp . the signal vector reconstructed by the sparse recovery algorithm is denoted by . detecting the mu - mbm signal vectorinvolves detecting the mbm signal vector transmitted by each user .an mbm signal vector from a user has exactly one non - zero entry out of entries as observed in the example mbm signal set in .hence , sr is expected to reconstruct a mu - mbm signal vector such that the mbm signal sub - vector corresponding to a given user has only one non - zero entry . but this constraint on the expected support set is not built in the general sparse recovery algorithms . in general , a sparse recovery algorithm can output non - zero elements at any of the locations of . to overcome this issue, we define user activity pattern ( uap ) , denoted by , as a -length vector with entry as if there is at least one non - zero entry in the user s recovered mbm signal vector , and otherwise .a valid reconstructed signal vector is one which has all ones in .sr is used multiple times with a range of sparsity estimates starting from ( in the algorithm listing ) till the valid uap is obtained ( i.e. , till the algorithm reconstructs at least one non - zero entry for each user s mbm signal vector ) . in the algorithm listing , denotes the uap at the iteration . on recovering an with valid uap, the mbm signal vector of each user is mapped to the nearest ( in the euclidean sense ) mbm signal vector in .this is shown in the step 8 in the algorithm listing , where denotes the recovered mbm signal vector of the user and denotes the mbm signal vector to which gets mapped to .finally , the mu - mbm signal vector is obtained by concatenating the detected mbm signal vectors of all the users , i.e. 
, , ^t$ ] .the decoding of information bits from the detected mbm signal vector of a given user involves decoding of mirror index bits and qam symbol bits of that user .the mirror index bits are decoded from the map of the detected mbm signal vector and the qam bits are decoded from the detected qam symbol . in this subsection, we present the ber performance of mu - mbm systems in a massive mimo setting ( i.e. , in the tens and in the hundreds ) when the proposed * algorithm 1 * is used for mu - mbm signal detection at the bs . in the same massive mimo setting, we evaluate the performance of other systems that use conventional modulation ( mu - cm ) , spatial modulation ( mu - sm ) , and generalized spatial modulation ( mu - gsm ) , and compare them with the performance achieved by mu - mbm . the proposed * algorithm 1 *is also used for the detection of mu - sm and mu - gsm .it is noted that the mu - sm and mu - gsm signal vectors are also sparse to some extent ; the sparsity factors in mu - sm and mu - gsm are and , respectively .so the use of the proposed algorithm for detection of these signals is also appropriate .ml detection is used to detect mu - cm signals ( this is possible for mu - cm with sphere decoding for , i.e. , 32 real dimensions ) ._ mu - mbm performance using proposed algorithm : _ figure [ fig : sparsealgos ] shows the performance of mu - mbm system using the proposed algorithm with omp , cosamp , and sp . mmse detection performance is also shown for comparison .a massive mimo system with and is considered .each user uses , , , and 4-qam .this results in a spectral efficiency of 8 bpcu per user , and a sparsity factor of . from fig .[ fig : sparsealgos ] , we observe that the proposed algorithm with omp , cosamp , and sp achieve significantly better performance compared to mmse . amongthe the use of omp , cosamp , and sp in the proposed algorithm , use of sp gives the best performance .this illustrates the superior reconstruction / detection advantage of the proposed algorithm with sp .we will use the proposed algorithm with sp in the subsequent performance results figures .it is noted that the complexity of proposed algorithm is also quite favorable ; the complexity of the proposed algorithm with sp and that of mmse are and , respectively . , , , , , 4-qam , 8bpcu per user , using the proposed detection algorithm .mmse detection performance is also shown for comparison.,width=340,height=236 ] _ performance of mu - mbm , mu - sm , mu - gsm : _ figure [ fig : spdetection ] shows a ber performance comparison between mu - mbm , mu - cm , mu - sm , and mu - gsm in a massive mimo setting with and .the proposed algorithm with sp is used for detection in mu - mbm , mu - sm , and mu - gsm .ml detection is used for mu - cm .the spectral efficiency is fixed at 5 bpcu per user for all the four schemes .mu - mbm achieves this spectral efficiency with , , , and 4-qam .mu - cm uses , , and 32-qam to achieve 5 bpcu per user . 
to achieve the same 5 bpcu per user ,mu - sm uses , , and 8-qam , and mu - gsm uses , , and bpsk .the sparsity factors in mu - mbm , mu - sm , and mu - gsm are , , and , respectively .it can be seen that , mu - mbm clearly outperforms mu - cm , mu - sm , and mu - gsm .for example , at a ber of , mu - mbm outperforms mu - cm , mu - gsm , and mu - sm by about 7 db , 5 db , and 4 db , respectively .the performance advantage of mu - mbm can be mainly attributed to its better signal distance properties .mu - mbm is also benefited by its lower sparsity factor as well as the possibility of using lower - order qam size because of additional bits being conveyed through indexing mirrors . , , and 5 bpcu per user.,width=340,height=236 ] [ fig : spdetection ] _ effect of number of bs receive antennas _ :figure [ fig : effectofnr ] shows an interesting result which demonstrates mu - mbm s increasing performance gain compared to mu - cm , mu - sm , and mu - gsm as the number of bs receive antennas is increased .a massive mimo system with and 5 bpcu per user is considered .the parameters of the four schemes are the same as those in fig .[ fig : spdetection ] except that here snr is fixed at 4 db and is varied from 48 to 624 .it is interesting to observe that a performance that could be achieved using 500 antennas at the bs in a massive mimo system that uses conventional modulation ( ber for mu - cm at with ml detection ) can be achieved using just 128 antennas when mu - mbm is used ( ber for mu - mbm at with proposed detection ) .mu - sm and mu - gsm also achieve better performance compared to mu - cm , but they too require more than 200 antennas to achieve the same ber . this increasing performance advantage of mu - mbm for increasing can be mainly attributed to its better signal distance properties particularly when is large .this indicates that multiuser mbm can be a very good scheme for use in the uplink of massive mimo systems . in a massive mimo setting with , 5 bpcu per user , and snr = 4 db.,width=340,height=236 ]we investigated the use of media - based modulation ( mbm ) , a recent and attractive modulation scheme that employs rf mirrors ( parasitic elements ) to convey additional information bits through indexing of these mirrors , in massive mimo systems .our results demonstrated significant performance advantages possible in multiuser mbm compared to multiuser schemes that employ conventional modulation , spatial modulation , and generalized spatial modulation .motivated by the possibility of exploiting the inherent sparsity in multiuser mbm signal vectors , we proposed a detection scheme based on compressive sensing algorithms like omp , cosamp , and subspace pursuit .the proposed detection scheme was shown to achieve very good performance ( e.g. , significantly better performance compared to mmse detection ) at low complexity , making it suited for multiuser mbm signal detection in massive mimo systems .channel estimation , effect of imperfect knowledge of the channel alphabet at the receiver , and effect of spatial correlation are interesting topics for further investigation .o. n. alrabadi , a. kalis , c. b. papadias , and r. prasad , `` a universal encoding scheme for mimo transmission using a single active element for psk modulation schemes , '' _ ieee trans .wireless commun .5133 - 5142 , oct . 2009 . m.di renzo , h. haas , a. ghrayeb , s. sugiura , and l. hanzo , `` spatial modulation for generalized mimo : challenges , opportunities and implementation , '' _ proc . 
of the ieee _ , vol .56 - 103 , jan . 2014 .j. wang , s. jia , and j. song , `` generalised spatial modulation system with multiple active transmit antennas and low complexity detection scheme , '' _ ieee trans .wireless commun_. , vol .1605 - 1615 , apr .2012 .d. donoho , y. tsaig , i. drori , and j .- l .starck , `` sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit , '' _ ieee trans .inform . theory _2 , pp . 1094 - 1121 , feb .2012 .m. s. alouini and g. goldsmith , `` a unified approach for calculating error rates of of linearly modulated signals over generalized fading channels , '' _ ieee trans .9 , pp . 1324 - 1334 , sep . | in this paper , we consider _ media - based modulation ( mbm ) _ , an attractive modulation scheme which is getting increased research attention recently , for the uplink of a massive mimo system . each user is equipped with one transmit antenna with multiple radio frequency ( rf ) mirrors ( parasitic elements ) placed near it . the base station ( bs ) is equipped with tens to hundreds of receive antennas . mbm with rf mirrors and receive antennas over a multipath channel has been shown to asymptotically ( as ) achieve the capacity of parallel awgn channels . this suggests that mbm can be attractive for use in massive mimo systems which typically employ a large number of receive antennas at the bs . in this paper , we investigate the potential performance advantage of multiuser mbm ( mu - mbm ) in a massive mimo setting . our results show that multiuser mbm ( mu - mbm ) can significantly outperform other modulation schemes . for example , a bit error performance achieved using 500 receive antennas at the bs in a massive mimo system using conventional modulation can be achieved using just 128 antennas using mu - mbm . even multiuser spatial modulation , and generalized spatial modulation in the same massive mimo settings require more than 200 antennas to achieve the same bit error performance . also , recognizing that the mu - mbm signal vectors are inherently sparse , we propose an efficient mu - mbm signal detection scheme that uses compressive sensing based reconstruction algorithms like orthogonal matching pursuit ( omp ) , compressive sampling matching pursuit ( cosamp ) , and subspace pursuit ( sp ) . |
what do reported uncertainties actually tell us about the accuracy of scientific measurements and the likelihood that different measurements will disagree ?no scientist expects different research studies to always agree , but the frequent failure of published research to be confirmed has generated much concern about scientific reproducibility .when scientists investigate many quantities in very large amounts of data , interesting but ultimately false results may occur by chance and are often published . in particle physics ,bitter experience with frequent failures to confirm such results eventually led to an ad hoc `` 5-sigma '' discovery criterion , i.e. a `` discovery '' is only taken seriously if the estimated probability for observing the result without new physics is less than the chance of a single sample from a normal distribution being more than five standard deviations ( `` '' ) from the mean . in other fields , arguments that most novel discoveries are false caused increased emphasis on reporting the value and uncertainty of measured quantities , not just whether the value is statistically different from zero .research confirmation is then judged by how well independent studies agree according to their reported uncertainties , so assessing reproducibility requires accurate evaluation and realistic understanding of these uncertainties .this understanding is also required when analysing data , combining studies in meta - analyses , or making scientific , business , or policy judgments based on research .the experience of research fields such as physics , where values and uncertainties have long been regularly reported , may provide some guidance on what reproducibility can reasonably be expected . most recent investigations into reproducibility focus onhow often observed effects disappear in subsequent research , revealing strong selection bias in published results .removing such bias is extremely important , but may not reduce the absolute number of false discoveries since not publishing non - significant results does not make the `` discoveries '' go away . controlling the rate of false discoveries depends on establishing criteria that reflect real measurement uncertainties , especially the likelihood of extreme fluctuations and outliers .outliers are observations that disagree by an abnormal amount with other measurements of the same quantity . despite every scientist knowing that the rate of outliers is always greater than naively expected , there is no widely accepted heuristic for estimating the size or shape of these long tails .these estimates are often assumed to be approximately normal ( gaussian ) , but it is easy to find examples where this is clearly untrue . 
to examine the accuracy of reported uncertainties ,this paper reviews multiple published measurements of many different quantities , looking at the differences between measurements of each quantity normalized by their reported uncertainties .previous similar studies reported on only a few hundred to a few thousand measurements , mostly in subatomic physics .this study reports on how well multiple measurements of the same quantity agree , and hence what are reasonable expectations for the reproducibility of published scientific measurements .of particular interest is the frequency of large disagreements which usually reflect unexpected systematic effects .sources of uncertainty are often categorized as statistical or systematic , and their methods of evaluation classified as type a or b .type a evaluations are based on observed frequency distributions ; type b evaluations use other methods .statistical uncertainties are always evaluated from primary data using type a methods , and can in principle be made arbitrarily small by repeated measurement or large enough sample size .uncertainties due to systematic effects may be evaluated by either type a or b methods , and fall into several overlapping classes .class 1 systematics , which include many calibration and background uncertainties , are evaluated by type a methods using ancillary data .class 2 systematics are almost everything else that might bias a measurement , and are caused by a lack of knowledge or uncertainty in the measurement model , such as the reading error of an instrument or the uncertainties in monte carlo estimates of corrections to the measurement .class 3 systematics are theoretical uncertainties in the interpretation of a measurement . for example , determining the proton radius using the lamb shift of muonic hydrogen requires over 20 theoretical corrections that are potential sources of uncertainty in the proton radius , even if the actual measurement of the lamb shift is perfect .the uncertainties associated with class 2 and 3 systematic effects can not be made arbitrarily small by simply getting more data . when considering the likelihood of extreme fluctuations in measurements , mistakes and `` unknown unknowns '' are particularly important , but they are usually assumed to be statistically intractable and are not often considered in traditional uncertainty analysis .mistakes are `` unknown knowns '' , i.e. something that is thought to be known but is not , and it is believed that good scientists should not make mistakes .`` unknown unknowns '' are factors that affect a measurement but are unknown and unanticipated based on past experience and knowledge . 
for example , during the first 5 years of operation of lep ( the large electron positron collider ) , the effect of local railway traffic on measurements of the boson mass was an `` unknown unknown '' that no - one thought about .then improved monitoring revealed unexpected variations in the accelerator magnetic field , and after much investigation these variations were found to be caused by electric rail line ground leakage currents flowing through the lep vacuum pipe .in general , systematic effects are challenging to , but can be partially constrained by researchers making multiple internal and external consistency checks : is the result compatible with previous data or theoretical expectations ?is the same result obtained for different times , places , assumptions , instruments , or subgroups ?as described by dorsey , scientists `` change every condition that seems by any chance likely to affect the result , and some that do not , in every case pushing the change well beyond any that seems at all likely '' .if an inconsistency is observed and its cause understood , the problem can often be fixed and new data taken , or the effect monitored and corrections made . if the cause can not be identified , however , then the observed dispersion of values must be included in the uncertainty .the existence of unknown systematic effects or mistakes may be revealed by consistency checks , but small unknown systematics and mistakes are unlikely to be noticed if they do not affect the measurement by more than the expected uncertainty .even large problems can be missed by chance ( see sec . [ssec : modelling ] ) or if the conditions changed between consistency checks do not alter the size of the systematic effect .the power of consistency checks is limited by the impossibility of completely changing all apparatus , methods , theory , and researchers between measurements , so one can never be certain that all significant systematic effects have been identified .quantities were only included in this study if they are significant enough to have warranted at least five independent measurements with clearly stated uncertainties .medical and health research data were extracted from some of the many meta - analyses published by the cochrane collaboration ; a total of 5580 measurements of 310 quantities generating 99433 comparison pairs were included . particle physics data ( 8469 measurements , 864 quantities , 53988 pairs ) were retrieved from the review of particle physics . nuclear physics data ( 12380 measurements , 1437 quantities , 66677 pairs )were obtained from the table of radionuclides .most nuclear and particle physics measurements have prior experimental or theoretical expectations which may influence results from nominally independent experiments , and medical research has similar biases , so this study also includes a large sample of interlaboratory studies that do not have precise prior expectations for their results . 
in these studies , multiple independent laboratoriesmeasure the same quantity and compare results .for example , the same mass standard might be measured by national laboratories in different countries , or an unknown archaeological sample might be divided and distributed to many labs , with each lab reporting back its carbon-14 measurement of the sample s age .none of the laboratories knows the expected value for the quantity nor the results from other labs , so there should be no expectation , selection , or publication biases .these interlab studies ( 14097 measurements , 617 quantities , 965416 pairs ) were selected from a wide range of sources in fields such as analytical chemistry , environmental sciences , metrology , and toxicology .the measurements ranged from genetic contamination of food to high precision comparison of fundamental physical standards , and were carried out by a mix of national , university , and commercial laboratories . all quantities analysed are listed in the supplementary materials .data were entered using a variety of semi - automatic scripts , optical - character recognition , and manual methods .no attempt was made to recalculate past results based on current knowledge , or to remove results that were later retracted or amended , since the original paper was the best result at the time it was published .when the review of particle physics noted that earlier data had been dropped , the missing results were retrieved from previous editions . to ensure that measurements were as independent as possible , measurements were excluded if they were obviously not independent of other data already included . because relationships between measurements are often obscure , however , there undoubtedly remain many correlations between the published results used .medical and health data were selected from the 8105 reviews in the cochrane database as of 25 september 2013 .data were analysed from 221 intervention reviews whose abstract mentioned trials with total participants , and which reported at least one analysis with studies with total participants . the average heterogeneity inconsistency index ( ) is about 40% for the analyses reported here .because analyses within a review may be correlated , only a maximum of 3 analyses and 5 comparison groups were included from any one review .about 80% of the cochrane results are the ratio of intervention and control binomial probabilities , e.g. mortality rates for a drug and a placebo .such ratios are not normal , so they were converted to differences that should be normal in the gaussian limit , i.e. when the group size and probability are such that , , and are all , so the binomial distribution converges towards a gaussian distribution .( the median observed values for these data were , . )the 68.3% binomial probability confidence interval was calculated for both the intervention and control groups to determine the uncertainties .measurements with uncertainties are typically reported as , which means that the interval to contains with some defined probability `` the values that could reasonably be attributed to the measurand '' .most frequently , uncertainty intervals are given as , where is the coverage factor and is the `` standard uncertainty '' , i.e. 
the uncertainty of a measurement expressed as the standard deviation of the expected dispersion of values .uncertainties in particle physics and medicine are often instead reported as the bounds of either 68.3% or 95% confidence intervals , which for a normal distribution are equivalent to the and standard uncertainty intervals . for this study , all uncertainties were converted to nominal 68.3% confidence interval uncertainties .the vast majority of measurements reported simple single uncertainties , but if more than a single uncertainty was reported , e.g. `` statistical '' and `` systematic '' , they were added in quadrature . all measurements , , of a given quantity were combined in all possible pairs and the difference between the two measurements of each pair calculated in units of their combined uncertainty : the dispersion of values can be used to judge whether independent measurements of a quantity are `` compatible '' .a feature of as a metric for measurement agreement is that it does not require a reference value for the quantity .( the challenges and effects of using reference values are discussed in section [ ssec : alternate ] . ) the uncertainties in equation [ eq : zdefinition ] are combined in quadrature , as expected for standard uncertainties of independent measurements .( the effects of any lack of independence are discussed in section [ ssec : uncertaintyschemes ] . )uncertainties based on confidence intervals may not be symmetric about the reported value , which is the case for about 13% of particle , 6% of medical , 0.3% of nuclear , and 0.06% of interlab measurements .following common ( albeit imperfect ) practice , if the reported plus and minus uncertainties were asymmetric , was calculated from eq .[ eq : zdefinition ] using the uncertainty for the side towards the other member of the comparison pair .for example , if , , and , then and .the distributions of the differences are histogrammed in fig .[ fig : differential ] , with each pair weighted such that the total weight for a quantity is the number of measurements of that quantity .for example , if a quantity has 10 measurements , there are 45 possible pairs , and each entry has a weight of 10/45 .( other weighting schemes are discussed in section [ ssec : alternate ] . )the final frequency distribution within each research area is then normalized so that its total observed probability adds up to 1 . if the measurement uncertainties are well evaluated and correspond to normally distributed probabilities for , then is expected to be normally distributed with a standard deviation .probability distribution uncertainties ( e.g. the vertical error bars in fig . [fig : differential ] ) were evaluated using a bootstrap monte carlo method where quantities were drawn randomly with replacement from the actual data set until the number of monte carlo quantities equaled the actual number of quantities .the resulting artificial data set was histogrammed , the process repeated 1000 times , and the standard deviations of the monte carlo probabilities calculated for each bin . 
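The pair construction of eq. [eq:zdefinition], the per-quantity weighting, and the bootstrap over quantities just described can be sketched as follows. The data layout (each quantity as a list of (value, standard uncertainty) tuples), the bin grid, and the function names are illustrative assumptions, and asymmetric uncertainties (handled in the text by using the side towards the other measurement) are ignored here.

```python
# Pairwise z values, quantity-based weighting, and bootstrap over quantities.
from itertools import combinations
import numpy as np

def pair_z_values(measurements):
    """All pairwise differences in units of combined standard uncertainty."""
    z = []
    for (x1, u1), (x2, u2) in combinations(measurements, 2):
        z.append((x1 - x2) / np.hypot(u1, u2))   # quadrature combination
    return np.array(z)

def weighted_histogram(quantities, bins):
    """Histogram of |z| with each quantity's pairs weighted so that the
    total weight of a quantity equals its number of measurements."""
    hist = np.zeros(len(bins) - 1)
    for meas in quantities:
        z = np.abs(pair_z_values(meas))
        w = len(meas) / len(z)                   # N measurements over N(N-1)/2 pairs
        hist += w * np.histogram(z, bins=bins)[0]
    return hist / hist.sum()                     # total probability normalized to 1

def bootstrap_uncertainty(quantities, bins, n_boot=1000, seed=1):
    """Bin-by-bin standard deviation from resampling whole quantities."""
    rng = np.random.default_rng(seed)
    boots = []
    for _ in range(n_boot):
        sample = [quantities[i] for i in rng.integers(len(quantities), size=len(quantities))]
        boots.append(weighted_histogram(sample, bins))
    return np.std(boots, axis=0)

# toy example: 3 quantities with 5 measurements each, unit uncertainties
rng = np.random.default_rng(0)
toy = [[(v, 1.0) for v in rng.normal(size=5)] for _ in range(3)]
bins = np.linspace(0.0, 5.0, 11)
print(weighted_histogram(toy, bins))
print(bootstrap_uncertainty(toy, bins, n_boot=200))
```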
random selection of measurements instead of quantities was not chosen for uncertainty evaluation because of the corrections then required to avoid bias and artifacts .for example , if measurements are randomly drawn , a quantity with only 5 measurements will often be missing from the artificial data set for having too few ( ) measurements drawn , or if it does have 5 measurements some of them will be duplicates generating unrealistic values , or if duplicates are excluded then they will always be the same 5 nonrandom measurements . without correcting for such effects , the resulting measurement monte carlo generated uncertainties are too small to be consistent with the observed bin - to - bin fluctuations in fig . [fig : differential ] . correcting for such effectsrequires using characteristics of the actual quantities and would be effectively equivalent to using random quantities .attempts were made to fit the data to a wide variety of functions , but by far the best fits were to non - standardized student - t probability density distributions with degrees of freedom . student - t distribution is essentially a smoothly symmetric normalizable power - law , with for .the fitted parameter defines the core width and overall scale of the distribution and is equal to the standard deviation in the gaussian limit and to the half - width at half maximum in the cauchy ( also known as lorentzian or breit - wigner ) limit .the parameter determines the size of the tails , with small corresponding to large tails .the values and standard uncertainties in and were determined from a non - linear least squares fit to the data that minimizes the nominal : where , and are the bin , contents , and uncertainties of the observed distributions shown in fig .[ fig : differential ] .possible values of are sometimes limited by the allowed range of measurement values , which could suppress heavy tails .for example , many quantities are fractions that must lie between 0 and 1 , and there is less room for two measurements with 10% uncertainty to disagree by than for two 0.01% measurements .the size of this effect was estimated using monte carlo methods to generate simulated data based on the values and uncertainties of the actual data , constrained by any obvious bounds on their allowed values .the simulated data was then fit to see if applying the bounds changed the fitted values for and .the largest effect was for medical data where was reduced by about 0.1 when minimally restrictive bounds were assumed .stronger bounds might exist for some quantities , but determining them would require careful measurement - by - measurement assessment beyond the scope of this study .for example , each measurement of the long - term duration of the effect of a medical drug or treatment would have an upper bound set by the length of that study .since correcting for bounds can only make smaller ( corresponding to even heavier tails ) , and the observed effects were negligible , no corrections were applied to the values of reported here .histograms of the distributions for different data sets are shown in fig . 
[ fig : differential ] . the complementary cumulative distributions of the data are given in table [ tab : cl ] and shown in fig . [ fig : cumulative ] . [ table tab : cl and fig . fig : cumulative : the observed probability of two measurements disagreeing by more than 1 , 2 , 3 , 5 , or 10 standard uncertainties for the interlab ( and key ) , nuclear , particle ( and stable ) , medical , and constants data sets , compared with normal ( gaussian ) , student - t , exponential , and cauchy reference distributions . ] none of the data are close to gaussian , but all can reasonably be described by almost - cauchy student - t distributions with . for comparison , fits to these data with lévy stable distributions have nominal 4 to 30 times worse than the fits to student - t distributions . the number of `` '' ( i.e. ) disagreements observed is as high as , compared to the expected for a normal distribution . the fitted values for and are shown in table [ tab : fits ] . also shown in table [ tab : fits ] are two data subsets expected to be of higher quality , bipm interlaboratory key comparisons ( 372 quantities , 3712 measurements , 20245 pairs ) and stable particle properties ( 335 quantities , 3041 measurements , 16649 pairs ) . the key comparisons should define state - of - the - art accuracy , since they are measurements of important metrological standards carried out by national laboratories . stable particles are often easier to study than other particles , so their properties are expected to be better determined . both `` better '' data subsets do have narrower distributions consistent with higher quality , but they still have heavy tails . more selected data subsets are discussed in section [ ssec : selected subsets ] . the key metrology data subset is for electrical , radioactivity , length , mass , and other similar physical metrology standards . to see if the most experienced national laboratories were more consistent , table [ tab : special ] also lists selected metrology data from only the six national labs that reported the most key metrology measurements . these laboratories were ptb ( physikalisch - technische bundesanstalt , germany ) , nmij ( national metrology institute of japan ) , nist ( national institutes of standards and technology , usa ) , npl ( national physical laboratory , uk ) , nrc ( national research council , canada ) , and lne ( laboratoire national de métrologie et d'essais , france ) . similarly , key analytical chemistry data selected from the same national labs are also shown . these are for measurements such as the amount of mercury in salmon , pcbs in sediment , or chromium in steel . the metrology measurements by the selected national laboratories do have much lighter tails with , but this is not the case for their analytical measurements where . new stable particle data have the lightest tail in table [ tab : fits ] , but it is not clear if this is because the newer results have better determined uncertainties or are just more correlated .
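Returning briefly to the fitting procedure used for table [tab:fits]: a minimal sketch of the weighted least-squares Student-t fit, minimizing the nominal chi-squared over bins as described above, is given below. The "observed" histogram and its errors are synthetic placeholders rather than the paper's data, and the bin grid and starting values are assumptions.

```python
# Weighted least-squares fit of a binned z distribution to a
# non-standardized Student-t density (scipy's location-scale t).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t as student_t

def t_pdf(z, nu, sigma):
    # location fixed at zero; sigma is the core-width scale, nu the tail parameter
    return student_t.pdf(z, df=nu, loc=0.0, scale=sigma)

# synthetic placeholder histogram (bin centres, contents, and uncertainties)
z_centres = np.linspace(-9.75, 9.75, 40)
rng = np.random.default_rng(1)
p_true = t_pdf(z_centres, 2.5, 1.1)
dp_obs = 0.03 * p_true + 1e-4                  # stand-in for bootstrap bin errors
p_obs = p_true + dp_obs * rng.standard_normal(z_centres.size)

popt, pcov = curve_fit(t_pdf, z_centres, p_obs, p0=[3.0, 1.0], sigma=dp_obs,
                       absolute_sigma=True, bounds=([0.5, 0.1], [50.0, 10.0]))
nu_fit, sigma_fit = popt
nu_err, sigma_err = np.sqrt(np.diag(pcov))
print(f"nu = {nu_fit:.2f} +/- {nu_err:.2f}, sigma = {sigma_fit:.2f} +/- {sigma_err:.2f}")
```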
the trend in particle physics is for fewer but larger experiments , and more than a third of the newer stable measurements were made by just two very similar experiments ( belle and babar ) , so the new stable data is split into two groups in table [ tab : special ] .there is no significant difference between the belle / babar and other experiments data .nuclear lifetimes with small and large relative uncertainties were compared .they have similar tails , but the smaller uncertainty measurements appear to underestimate their uncertainty scales . measurements of newton s gravitation constant are notoriously variable , so a data - set without results was examined .the heavy tail is reduced , albeit with large uncertainty .the accuracy of uncertainty evaluations appears to be similar in all fields , but unsurprisingly there are noticeable differences in the relative sizes of the uncertainties . in particular , although individual physics measurements are not typically more reproducible than in medicine , they often have smaller relative uncertainty ( i.e. uncertainty / value ) as shown in fig .[ fig : precision ] .distribution of the relative uncertainty for data from fig .[ fig : differential ] . ]median ratio of the relative uncertainties ( newer / older ) for measurements in each pair as a function of the years between the two measurements : medical ( brown circles ) , particle ( green triangles ) , nuclear ( red squares ) , stable ( green dashed point - down triangles ) , constants ( orange diamonds ) . ]perhaps more importantly for discovery reproducibility , uncertainty improves more rapidly in physics than in medicine , as is shown in fig .[ fig : precisionimprovement ] .this difference in rates of improvement reflects the difference between measurements that depend on steadily evolving technology versus those using stable methods that are limited by sample sizes and heterogeneity .the expectation of reduced uncertainty in physics means that it is feasible to take a wait - and - see attitude towards new discoveries , since better measurements will quickly confirm or refute the new result .measurement uncertainty in nuclear and particle physics typically improves by about a factor of 2 every 15 years .constants data improve twice as fast , which is unsurprising since more effort is expected for more important quantities .physicists also tend not to make new measurements unless they are expected to be more accurate than previous measurements . 
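The improvement-rate estimate behind fig. [fig:precisionimprovement] can be sketched as follows: for each pair of measurements of a quantity, the ratio of relative uncertainties (newer/older) is collected against the year gap between them, and the median per gap is reported. The (value, uncertainty, year) record layout and the toy input are assumptions made only for illustration.

```python
# Median newer/older relative-uncertainty ratio as a function of year gap.
from collections import defaultdict
from itertools import combinations
import numpy as np

def median_improvement(quantities):
    ratios = defaultdict(list)                     # year gap -> list of ratios
    for meas in quantities:                        # meas: list of (value, unc, year)
        for pair in combinations(meas, 2):
            older, newer = sorted(pair, key=lambda m: m[2])
            gap = newer[2] - older[2]
            if gap <= 0 or 0 in (older[0], newer[0], older[1]):
                continue
            rel_new = abs(newer[1] / newer[0])
            rel_old = abs(older[1] / older[0])
            ratios[gap].append(rel_new / rel_old)
    return {gap: float(np.median(r)) for gap, r in sorted(ratios.items())}

toy = [[(10.0, 1.0, 2000), (10.2, 0.5, 2008), (10.1, 0.25, 2015)]]
print(median_improvement(toy))                     # ratios below 1 mean improvement
```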
in the data sets reported here , the median improvement in uncertainty of nuclear measurements compared to the best previous measurement of the same quantity is , and the improvement factors for constants , particle , and stable measurements are , , and .in contrast , medical measurements typically have greater uncertainties than the best previous measurements , with median .this is an understandable consequence of different uncertainty to cost relationships in physics and medicine .study population size is a major cost driver in medical research , so reducing the uncertainty by a factor of two can cost almost four times as much , which is rarely the case in physics .prior expectations exist for most measurements reported here except for the interlab data .such expectations may suppress heavy tails by discouraging publication of the anomalous results that populate the tails , since before publishing a result dramatically different from prior results or theoretical expectations , researchers are likely to make great efforts to ensure that they have not made a mistake .journal editors , referees and other readers also ask tough questions of such results , either preventing publication or inducing further investigation . for example , initial claims of evidence for faster - than - light neutrinos and cosmic inflation did not survive to actual publication .median value as a function of time difference between the two measurements in each pair : medical ( brown circles ) , particle ( green point - up triangles ) , nuclear ( red squares ) , constants ( orange diamonds ) , and stable ( green dashed point - down triangles ) . ] fig .[ fig : correlation ] shows that physics ( particle , nuclear , constants ) measurements are more likely to agree if the difference in their publication dates is small .such `` bandwagon effects '' are not observed in the medical data , and they are irrelevant for interlab quantities which are usually measured almost simultaneously .these correlations imply that measurements are biased either by expectations or common methodologies .such correlations might explain the small ( ) values of for nuclear , particle , and constants data , or it could be that researchers in these fields simply tend to overestimate the scale of their uncertainties .removing expectation biases from the physics data would likely make their tails heavier . although interlab data are not supposed to have any expectation biases , they are subject to methodological correlations due to common measurement models , procedures , and types of instrumentation , so even their tails would likely increase if all measurements could be made truly independent .in a famous dispute with cauchy in 1853 , eminent statistician irne - jules bienaym ridiculed the idea that any sensible instrument had cauchy uncertainties .a century later , however , harold jeffreys noted that systematic errors may have a significant cauchy component , and that the scale of the uncertainty contributed by systematic effects depends on the size of the random errors .the results of this study agree with earlier research that also observed student - t tails , but only looked at a handful of subatomic or astrophysics quantities up to .unsurprisingly , the tails reported here are mostly heavier than those reported for repeated measurements made with the same instrument ( ) , which should be closer to normal since they are not independent and share most systematic effects . 
instead of student - t tails , exponential tails have been reported for several nuclear and particle physics data sets , but in all cases some measurements were excluded . for example , the largest of these studies looked at particle data ( 315 quantities , 53322 pairs ) using essentially the same method as this paper , but rejected the 20% of the data that gave the largest contributions to the for each quantity , suppressing the heaviest tails . despite this data selection , all these studies have supra - exponential heavy tails for , and so are qualitatively consistent with the results of this paper .it is possible that averaging different quantities with exponential tails might produce apparent power - laws , but this would require wild variations in the accuracy of the uncertainty estimates . instead of looking directly at the shapes of the measurement consistency distributions , hedges compared particle physics and psychology results and found them to have similar compatibility , with typically almost half of the quantities in both fields having statistically significant disagreements .thompson and ellison reported substantial amounts of `` dark uncertainty '' in chemical analysis interlaboratory comparisons .uncertainty is `` dark '' if it does not appear as part of the known contributions to the uncertainty of individual measurements , but is inferred to exist because the dispersion of measured values is greater than expected based on the reported uncertainties .for example , six ( 21% ) of 28 bipm key comparisons studied had ratios ( ) of expected to observed standard deviations less than 0.2 .this agrees with the key analytical results in table [ tab : special ] ( which include some of the same key comparisons ) . for sample sizes matching the 28 comparisons , 20% of samples drawn from a student - t distributionwould be expected to have .the open science collaboration ( osc ) recently replicated 100 studies in psychology , providing some of the most direct evidence yet for poor scientific reproducibility . using the osc study s supplementary information, can be calculated for 87 of the reported original / replication measurement pairs , and 27 ( 31% ) disagree by more than , and 2 ( 2.3% ) by more than .this rate of disagreements is inconsistent with selection bias acting on a normal distribution unless the data are excluded , but can be explained by selection biased student - t data with , consistent with the medical data reported in table [ tab : fits ] . when a measurement turns out to be wrong , the reasons for this failure are often unknown , or at least unpublished , so it is interesting to look at examples where the causes were later understood or can be inferred. 
for medical research , heterogeneity in methods or populations is a major source of variance .the largest inconsistency in the medical dataset is in a comparison of fever rates after acellular versus whole - cell pertussis vaccines .the large variance can likely be explained by significant differences among the study populations and especially in how minor adverse events were defined and reported .the biggest values in the particle data come from complicated multi - channel partial wave analyses of strong scattering processes , where many dozens of quantities ( particle masses , widths , helicities , ) are simultaneously determined .significant correlations often exist between the fitted values of the parameters but are not always clearly reported , and evaluations may not always include the often large uncertainties from choices in data and parameterization .the largest disagreement in the interlab data appears to be an obvious mistake . in a comparison of radioactivity in water ,one lab reported an activity of bq / kg when the true value was about 31 . even without knowing the expected activity , the unreasonably small fractional uncertainty should probably have flagged this result .such gross errors can produce almost - cauchy deviations .for example , if the numerical result of a measurement is simply considered as an infinite bit string , then any `` typographical '' glitch that randomly flips any bit with equal probability will produce deviations with a distribution .one can hope that the best research will not be sloppy , but not even the most careful scientists can avoid all unpleasant surprises . in 1996 a team from ptb ( the national metrological institute of germany ) reported a measurement of that differed by from the accepted value ; it took 8 years to track down the cause a plausible but erroneous assumption about their electrostatic torque transmitter unit .a difference between the codata2006 and codata2010 fine structure constant values was due to a mistake in the calculation of some eighth - order terms in the theoretical value of the electron anomalous magnetic moment . a 1999 determination of avogadro s number by a team from japan s national research laboratory of metrology using the newer x - ray crystal density method was off by due to subtle silicon inhomogeneities . in an interlaboratory comparison measuring pcb contamination in sediments , the initial measurement by bam ( the german federal institute for materials research and testing ) disagreed by many standard uncertainties , butthis was later traced to cross - contamination in sample preparation .several nuclear half - lives measured by the us national institute for standards and technology were known for some years to be inconsistent with other measurements ; it was finally discovered that a nist sample positioning ring had been slowly slipping over 35 years of use .often discrepancies are never understood and are simply replaced by newer results .for example , despite bringing in a whole new research team to go over every component and system , the reason for a discordant nist measurement of planck s constant was never found , but newer measurements by the same group were not anomalous .heavy tails have many potential causes , including bias , overconfident uncertainty underestimates , and uncertainty in the uncertainties , but it is not immediately obvious how these would produce the observed t distributions with so few degrees of freedom . 
even when the uncertainty is evaluated from the standard deviation of multiple measurements from a normal distribution so that a student - t distribution would be expected , there are typically so many measurements that should be much larger than what is observed . exceptions to this are when calibration uncertainties dominate , since often only a few independent calibration points are available , or when uncertainties from systematic effects are evaluated by making a few variations to the measurements , but these can not explain most of the data .any reasonable publication bias applied to measurements with gaussian uncertainties can not create very heavy tails , just a distorted distribution with gaussian tails to produce one false published result would require bias strong enough to reject millions of studies . underestimating does not produce a heavy tail , only a broader normal distribution . mixing multiple normal distributions does not naturally produce almost - cauchy distributions , except in special cases such as the ratio of two zero - mean gaussians .the heavy tails are not caused by poor older results .the heaviest - tailed data in fig .[ fig : differential ] are actually the newest 93% of the interlaboratory data are less than 16 years old and eliminating older results taken prior to the year 2000 does not reduce the tails for most data as shown in table [ tab : fits ] .intentionally making up results , i.e. fraud , could certainly produce outliers , but this is unlikely to be a significant problem here .since most of the data were extracted from secondary meta - analyses ( e.g. review of particle properties , table of radionuclides , and cochrane systematic reviews ) , results withdrawn for misconduct prior to the time of the review would likely be excluded .one meta - analysis in the medical dataset does include studies that were later shown to be fraudulent , but the fraudulent results actually contribute slightly less than average to the overall variance among the results for that meta - analysis .modelling the heavy tails may help us understand the observed distributions .one way is to assume that the measurement values are normally distributed with standard deviation that is unknown but which has a probability distribution .the measured value is then expected to have a probability distribution this is essentially a bayesian estimate with prior and a normal likelihood with unknown variance .if the uncertainties are accurately evaluated and normal with variance , will be a narrow peak at . assuming that is a broad normal distribution leads to exponential tails for large .in order to generate student - t distributions , must be a scaled inverse chi - squared ( or gamma ) distribution in .this works mathematically , but why would variations in for independent measurements have such a distribution ?heavy tails can only be generated by effects that can produce a wide range of variance , so we must model how consistency testing is used by researchers to constrain such effects .consistency is typically tested using a metric such as the calculated chi - squared statistic for the agreement of measurements where is the weighted mean and are the standard uncertainties reported by the researchers . 
for accurate standard uncertainties, will have a chi - squared probability distribution with .if , however , the reported uncertainties are incorrect and the true standard uncertainties are , then it will be that is chi - squared distributed .researchers will likely search for problems if different consistency measurements have a poor , which typically means .the larger an unknown systematic error is , the more likely it is to be detected and either corrected or included in the reported uncertainty , so published results typically have .since is expected to have a chi - squared distribution , a natural prior for is indeed the scaled inverse chi - squared distribution needed to generate student - t distributions from equation [ eq : shlyakhter ] .more mechanistically , it could be assumed that a normally distributed systematic error will be missed by independent measurements if their is less than some threshold .if the distribution of all possible systematic effects is , then the probability distribution for the unfound errors will be where is the cumulative distribution . is unknown , but a common bayesian scale - invariant choice is , with . using this model with the reported uncertainty as the lower integration bound , the curve generated from equations [ eq : shlyakhter ] and [ eq : model ] is very close to a student - t distribution .the observed small values for mean that both and must be small .making truly independent consistency tests is difficult , so it is not surprising that the effective number of checks ( ) is usually small .this model is plausible , but why are systematic effects consistent with a power - law size distribution ?scientific measurements are made by complex systems of people and procedures , hardware and software , so one would expect the distribution of scientific errors to be similar to those produced by other comparable systems .power - law behaviour is ubiquitous in complex systems , with the cumulative distributions of observed sizes ( ) for many effects falling as , and these heavy tails exist even when the system has been designed and refined for optimal results .a student - t distribution has cumulative tail exponent , and the values for reported here are consistent with power - law tails observed in other designed complex systems .the frequency of software errors typically has a cumulative power - law tail corresponding to small , and in scientific computing these errors can lead to quantitative discrepancies orders of magnitude greater than expected .the size distribution of electrical power grid failures has , and the frequency of spacecraft failures has .even when designers and operators really , really want to avoid mistakes , they still occur : the severity of nuclear accidents falls off only as , similar to the power - laws observed for the sizes of industrial accidents and oil spills .some complex medical interventions have power - law distributed outcomes with . 
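The mixture mechanism described in sec. [ssec:modelling], normal measurement errors whose unknown true variance follows a scaled inverse chi-squared distribution, is easy to check numerically. The sketch below draws such a mixture and counts tail fractions; the values of nu, the scale, and the sample size are chosen purely for illustration.

```python
# Normal errors with a scaled-inverse-chi-squared variance give heavy
# Student-t tails rather than Gaussian ones.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
nu, s2, n = 2.5, 1.0, 1_000_000                # few effective degrees of freedom

# scaled inverse chi-squared draw for the true variance: sigma^2 = nu*s2/chi2_nu
sigma2 = nu * s2 / rng.chisquare(nu, size=n)
z = rng.normal(0.0, np.sqrt(sigma2))           # normal error, uncertain variance

for cut in (1, 2, 3, 5, 10):
    print(f"P(|z| > {cut:2d}) = {np.mean(np.abs(z) > cut):.2e}")
print(f"gaussian P(|z| > 5) = {2 * norm.sf(5):.1e}")   # about 6e-7, for comparison
```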
combining the observed power - law responses of complex systems with the power - law constraints of consistency checking for systematic effects discussed in section [ ssec : modelling ], leads naturally to the observed consistency distributions with heavy power - law tails .there are also several theoretical arguments that such distributions should be expected .a systematic error or mistake is an example of a risk analysis incident , and power - law distributions are the maximal entropy solutions for such incidents when there are multiple nonlinear interdependent causes , which is often the case when things go wrong in research .scientists want to make the best measurements possible with the limited resources they have available , so scientific research endeavours are good examples of highly structured complex systems designed to optimize outcomes in the presence of constraints .such systems are expected to exhibit `` highly optimized tolerance '' , being very robust against designed - for uncertainties , but also hypersensitive to unanticipated effects , resulting in power - law distributed responses .simple continuous models for highly optimized tolerant systems are consistent with the heavy tails observed in this study .these models predict that , where is the effective dimensionality of the system , but larger values of arise when some of the resources are used to avoid large deviations , e.g. spending time doing consistency checks .if one believes that mistakes can be eliminated and all systematic errors found if we just work hard enough and apply the most rigorous methodological and statistical techniques , then results from the best scientists should not have heavy tails .such a belief , however , is not consistent with the experienced challenges of experimental science , which are usually hidden in most papers reporting scientific measurements .as beveridge famously noted , often everyone else believes an experiment more than the experimenters themselves .researchers always fear that there are unknown problems with their work , and traditional error analysis can not `` include what was not thought of '' .it is not easy to make accurate _ a priori _ identifications of those measurements that are so well done that they avoid having almost - cauchy tails .expert judgement is subject to well - known biases , and obvious criteria to identify better measurements may not work .for example , the open science collaboration found that researchers experience or expertise did not significantly correlate with the reproducibility of their results the best predictive factor was simply the statistical significance of the original result .the best researchers may be better at identifying problems and not making mistakes , but they also tend to choose the most difficult challenges that provide the most opportunities to go wrong . reducing heavy tailsis challenging because complex systems exhibit scale invariant behaviour such that reducing the size of failures does not significantly change the shape of their distribution . improving sensitivity makes previously unknown small systematic issues visible so they can be corrected or included in the total uncertainty .this improvement reduces , but even smaller systematic effects now become significant and tails may even become heavier and smaller . 
comparing figures [ fig : differential ] and [ fig : precision ], it appears that data with higher average relative uncertainty tend to have heavier tails .this relationship between relative uncertainty and measurement dispersion is reminiscent of the empirical horwitz power - law in analytical chemistry , where the relative spread of interlaboratory measurements increases as the required sensitivity gets smaller , and of taylor s law in ecology where the variance grows with sample size so that the uncertainty on the mean does not shrink as . in principle, statistical errors can be made arbitrarily small by taking enough data , and can be made arbitrarily large by making enough independent consistency checks , but researchers have only finite time and resources so choices must be made .taking more consistency check data limits the statistical uncertainty , since it is risky to treat data taken under different conditions as a single data set .consistency checks are never completely independent since it is impossible for different measurements of the same quantity not to share any people , methods , apparatus , theory or biases , so researchers must decide what tests are reasonable .the observed similar small values for may reflect similar spontaneous and often unconscious cost - benefit analyses made by researchers . the data showing the lightest tail reported here ( in table [ tab : special ] )may provide some guidance and caution .the high quality of the selected metrology standards measurements by leading national laboratories shows that heavy tails can be reduced by collaboratively taking great care to ensure consistency by sharing methodology and making regular comparisons .there are , however , limits to what can be achieved , as illustrated by the much heavier tail of the analytical standards measured by the same leading labs .second , consistency is easier than accuracy .interlaboratory comparisons typically take place over relatively short periods of time , with participating institutions using the best standard methods available at that time .biases in the standard methods may only be later discovered when new methods are introduced .for example , work towards a redefinition of the kilogram and the associated development of new silicon atom counting technology revealed inconsistencies with earlier watt - balance measurements , and this has driven improvements in both methods . finally , selection bias that hides anomalous results is hard to eliminate . for one metrology key comparison ,results from one quantity were not published because some laboratories reported `` incorrect results '' . reducing tails is particularly challenging for measurements where the primary goal is improved sensitivity that may lead to new scientific understanding . by definition ,a better measurement is not an identical measurement , and every difference provides room for new systematic errors , and every improvement that reduces the uncertainty makes smaller systematic effects more significant .frontier measurements are always likely to have heavier tails .published scientific measurements typically have non - gaussian almost - cauchy error distributions , with up to of results in disagreement by .these heavy tails occur in even the most careful modern research , and do not appear to be caused by selection bias , old inaccurate data , or sloppy measurements of uninteresting quantities . 
for even the best scientists working on well understood measurements using similar methodology , it appears difficult to achieve consistency better than , with about of results expected to be outliers , a rate a thousand times higher than for a normal distribution . these may , however , be underestimates . because of selection / confirmation biases and methodological correlations , historical consistency can only set lower bounds on heavy tails multiple measurements may all agree but all be ( somewhat ) wrong .the effects of unknown systematic problems are not completely unpredictable .scientific measurement is a complex process and the observed distributions are consistent with unknown systematics following the low - exponent power - laws that are theoretically expected and experimentally observed for fluctuations and failures in almost all complex systems .researchers do determine the scale of their uncertainties with fair accuracy , with the scale of medical uncertainties ( ) slightly more consistent with the expected value ( ) than in physics ( ) .medical and physics research have comparable reproducibility in terms of how well different studies agree within their uncertainties , consistent with a previous comparison of particle physics with social sciences .medical research may have slightly lighter tails , while physics results typically have better relative uncertainty and greater statistical significance .understanding that error distributions are often almost - cauchy should encourage use of t - based , median , and other robust statistical methods , and supports choosing student - t or cauchy priors in bayesian analysis .outlier - tolerant methods are already common in modern meta - analysis , so there should be little effect on accepted values of quantities with multiple published measurements , but this better understanding of the uncertainty may help improve methods and encourage consistency .false discoveries are more likely if researchers apply normal conventions to almost - cauchy data . although much abused , the historically common use of as a discovery criterion suggests that many scientists would like to be wrong less than 5% of the time . if so , the results reported here support the nominal 5-sigma discovery rule in particle physics , and may help discussion of more rigorous significance criteria in other fields . this study should help researchers better understand the uncertainties in their measurements , and may help decision makers and the public better interpret the implications of scientific research . if nothing else , it should also remind everyone to never use normal / gaussian statistics when discussing the likelihood of extreme results .i thank the students of the university of toronto advanced undergraduate physics lab for inspiring consideration of realistic experimental expectations , and the university of auckland physics department for their hospitality during very early stages of this work .i am grateful to d. pitman for patient and extensive feedback , to r. bailey for his constructive criticism , to r. cousins , d. harrison , j. rosenthal , and p. sinervo for useful suggestions and discussion , and to m. cox for many helpful comments on the manuscript .the sources for all data analysed are listed in the associated ancillary file : uncertaintydatadescription.xls .rosenfeld ah .1968 are there any far - out mesons or baryons ? in _meson spectroscopy _ ( eds baltay c , rosenfeld ah ) .new york , usa : w. a. benjamin .. 
455483 .https://archive.org/details/mesonspectroscopy [ ] nakagawa s , cuthill ic .2007 effect size , confidence interval and statistical significance : a practical guide for biologists .rev . _ * 82 * , 591605 . http://dx.doi.org/10.1111/j.1469-185x.2007.00027.x [ ] sinervo pk .2003 definition and treatment of systematic uncertainties in high energy physics and astrophysics . in _phystat 2003 , stanford , usa _( eds lyons l , mount rp , reitmeyer r ) .http://stanford.io/2fdpkta [ ] bravin e , brun g , dehning b , drees a , galbraith p , geitz m , henrichsen k , koratzinos m , mugnai g , tonutti m. 1998 the influence of train leakage currents on the lep dipole field . _ nucl .instrum . meth . a _ * 417 * , 915 .http://dx.doi.org/10.1016/s0168-9002(98)00020-5 [ ] attivissimo f , cataldo a , fabbiano l , giaquinto n. 2011 systematic errors and measurement uncertainty : an experimental approach ._ measurement _ * 44 * , 17811789 .http://dx.doi.org/10.1016/j.measurement.2011.07.011 [ ]willink r. 2004 approximating the difference of two t - variables for all degrees of freedom using truncated variables .n. z. j. stat . _* 46 * , 495504 .[ ] faller je .2014 precision measurement , scientific personalities and error budgets : the sine quibus non for big g determinations .r. soc . a _ * 372 * , 20140023 .http://dx.doi.org/10.1098/rsta.2014.0023 [ ]bienaym m. 1853 considrations a lappui de la dcourverte de laplace sur la loi de probabilit dans la mthode des moindres carrs . _ c. r. acad .sci . _ * xxxvii * , 309324 .https://books.google.com/books?id=qhjfaaaacaaj&pg=pa322 [ ]zhang l , prietsch so , axelsson i , halperin sa .2012 acellular vaccines for preventing whooping cough in children . _cochrane database of systematic reviews _http://dx.doi.org/10.1002/14651858.cd001478.pub5 [ ] .2010 almera proficiency test determination of naturally occurring radionuclides in phosphogypsum and water ( iaea - cu-2008 - 04 ) .austria : international atomic energy agency iaea / aq/15 . http://www-pub.iaea.org/mtcd/publications/pdf/iaea-aq-15_web.pdf [ ] fujii k , tanaka m , nezu y , nakayama k , fujimoto h , de bievre p , valkiers s. 1999 determination of the avogadro constant by accurate measurement of the molar volume of a silicon crystal ._ metrologia _ * 36 * , 455464 . http://dx.doi.org/10.1088/0026-1394/36/5/7 [ ] carlisle j , pace n, cracknell j , mller a , pedersen t , zacharias m. 2013 ( 5 ) what should the cochrane collaboration do about research that is , or might be , fraudulent ? _ cochrane database of systematic reviews_. http://dx.doi.org/10.1002/14651858.ed000060 [ ] dose v , von der linden w. 1999 outlier tolerant parameter estimation . in _maximum entropy and bayesian methods : garching , germany 1998 _ vol .105 of _ fundamental theories of physics_. netherlands : springer .. 4756 .http://dx.doi.org/10.1007/978-94-011-4710-1_4 [ ]dobson i , carreras ba , lynch ve , newman de .2007 complex systems analysis of series of blackouts : cascading failure , critical points , and self - organization ._ chaos _ * 17 * , 026103 .http://dx.doi.org/10.1063/1.2737822 [ ] karimova lm , kruglun oa , makarenko ng , romanova nv .2011 power law distribution in statistics of failures in operation of spacecraft onboard equipment .res . _ * 49 * , 458463 .http://dx.doi.org/10.1134/s0010952511040058 [ ] sornette d , maillart t , kroeger w. 2013 exploring the limits of safety analysis in complex technological systems .. j. disaster risk reduct . _ * 6 * , 5966 . 
| judging the significance and reproducibility of quantitative research requires a good understanding of relevant uncertainties , but it is often unclear how well these have been evaluated and what they imply . reported scientific uncertainties were studied by analysing 41000 measurements of 3200 quantities from medicine , nuclear and particle physics , and interlaboratory comparisons ranging from chemistry to toxicology . outliers are common , with disagreements up to five orders of magnitude more frequent than naively expected . uncertainty - normalized differences between multiple measurements of the same quantity are consistent with heavy - tailed distributions that are often almost cauchy , far from a gaussian normal bell curve . medical research uncertainties are generally as well evaluated as those in physics , but physics uncertainty improves more rapidly , making feasible simple significance criteria such as the discovery convention in particle physics . contributions to measurement uncertainty from mistakes and unknown problems are not completely unpredictable . such errors appear to have power - law distributions consistent with how designed complex systems fail , and how unknown systematic errors are constrained by researchers . this better understanding may help improve analysis and meta - analysis of data , and help scientists and the public have more realistic expectations of what scientific results imply . |
hantaviruses are rodent - borne zoonotic agents that may cause diseases in humans such as hemorrhagic fever with renal syndrome and hantavirus pulmonary syndrome .hantaviruses have been identified at an increasing rate in recent years , and as of now about thirty different ones have been discovered throughout the world .one of these , the sin nombre virus , was not isolated until 1993 after an outbreak in the four corners region in the usa .the host of this particularly dangerous virus is the deer mouse , _ peromyscus maniculatus _ , the most numerous mammal in north america .the virus produces a chronic infection in the mouse population , but it is not lethal to them .it is believed that transmission in the rodent population is horizontal and due to fights , and that the subsequent infection of humans , where the mortality rate can be as high as fifty percent , is produced by their contact with the excreta of infected mice .moreover , so far there is no vaccine or effective drug to prevent or treat the hantavirus pulmonary syndrome .therefore , a major effort has been made to understand the population dynamics of deer mice colonies in order to design effective prevention policies .it has been noted that environmental conditions are directly connected to outbreaks of hanta .for instance , the 1993 and 1998 outbreaks occurring in the four corners region have been associated with the so - called southern oscillation .related to this and other such observations , the effects of seasonality in ecological systems have been a subject of recent interest .multi - year oscillations of mammal populations , prey - predator seasonal dynamics , and persistence of parasites in plants between seasons are examples that illustrate the importance of seasonality in population dynamics .recently , abramson and kenkre have proposed a phenomenological model for mice population that successfully reproduces some features of hantavirus propagation . in particular, that model explores the relation between resources in the medium , carrying capacity , and the spread of the infection in the rodent colony .herein we study the effects of seasonality in that model .our motivation is not only to provide more realism to the model , but also to investigate the counterintuitive effects that dynamic alternation may cause in a biological system .brownian motors and switching - induced morphogenesis are examples that show that alternation in time of `` uninteresting '' dynamics may produce `` interesting '' outcomes . along these lines, we will show that alternation of seasons , neither of which by itself fulfills the environmental requirements on the carrying capacity for spreading of the infection , may produce an outbreak of the disease .the mechanism driving this behavior is the interruption of the relaxation process that equilibrates the mouse population from season to season : if the duration of seasons becomes shorter than the relaxation time of the population , the disease spreads .the paper is organized as follows . in sec .[ sec2 ] we briefly review the model for mouse populations introduced in . in sec .[ sec3 ] we explain how seasonality is introduced in that model and analyze the conditions for hanta outbreaks to take place due to the alternation of seasons . the exact solution of the model and a particular example that illustrates the phenomenology is given in sec .[ sec4 ] . 
finally , in sec.[sec5 ] , we summarize the main conclusions and propose some directions for future work .the model introduced in for the temporal evolution of a population of mice subjected to the hantavirus infection reads : where and are the population densities of susceptible and infected mice respectively , is the total population of mice , and , , and are positive constants .the terms on the right hand sides of eqs .( [ model1 ] ) and ( [ model2 ] ) take into account the following processes : births with rate constant , depletion by death with rate constant , competition for the resources in the medium characterized by the carrying capacity , and transmission of the infection with rate coefficient .it is worth noting that infected pregnant mice produce hanta antibodies that keep their fetus free from the infection ; that is , _ all _ mice are born susceptible , as indicated by the absence of a birth term in eq .( [ model2 ] ) .note also the absence of a recovery term in the model since , as mentioned earlier , mice become chronically infected by the virus . the system of equations ( [ model1 ] ) and ( [ model2 ] ) has four equilibrium points .two of them are irrelevant for the analysis : the null state , which is always unstable if ( a condition that we will assume throughout this paper ) , and a meaningless state with for any value of the parameters .the other two equilibria are the stability of the equilibrium points ( [ equilibrio1 ] ) and ( [ equilibrio2 ] ) depends on the value of the carrying capacity . if then ( [ equilibrio1 ] ) is stable and ( [ equilibrio2 ] ) unstable . if it is the other way around .that is , when the available resources , , are below the critical value , , the infection does not propagate in the colony , the whole population of mice grows healthy , and its size increases proportionally with those resources .as soon as surpasses the virus spreads in the colony , the susceptible mouse population saturates , and the fraction of infected mice becomes larger as increases ( see fig .[ fig1 ] ) .the four corners region , where an important number of cases of hantavirus pulmonary syndrome have occurred , has a desert climate .the largest climate variations within this region come from the alternation between dry and rainy seasons .we will therefore assume alternation in time of these two seasons .it is important to remark that a two - season assumption is not crucial , and that the analysis with four seasons is also straightforward within the formalism introduced herein . during each of the two seasons under considerationwe assume there to be no climate variations , so that each season can be characterized by a set of time - independent parameters where .we implement square - periodic season alternation where the duration of each season is . again, this particular alternation pattern is not essential for the mechanism .other schemes of season alternation , e.g. 
different duration of the seasons or random switching between seasons , do not qualitatively change the phenomenology .any quantity alternating in the way described above can be written as : where and is a periodic square wave let us now suppose the following conditions for the sets according to seasonality : where 1 stands for the rainy season and 2 for the dry one .the biological motivation for these conditions is the following .the dry season provides less resources for the colony than the rainy season , and as a consequence the death rate is higher , and the transmission rate is also larger since fights for the available resources are expected to increase .however , notice that we consider the birth rate to be larger during the dry season .there are two reasons for this .first is the assumption that the colony makes an attempt to maintain its population .second is an implementation of the maturation process in the model : baby mice do not contribute to the propagation of the disease , and , therefore , even if births were actually more numerous during the rainy season , the contribution of this fact to the propagation of the infection will only be important in the next ( dry ) season , when mice have matured and are ready to fight .it will be shown later that these assumptions lead to a situation with high population during the rainy season and low population during the dry one , in agreement with the available data .moreover , we assume that , i.e. , the resources are _ all times _ ( during both seasons ) _ below _ the minimum critical threshold that triggers the propagation of the disease .we will show that nevertheless it is possible for the infection to spread .the equilibrium populations of the susceptible and infected mice are determined by the set of values .when switching from season to season , the populations evolve trying to reach a new equilibrium .therefore , the dynamics is driven by the competition between two characteristic times .on the one hand there is an _ external _ time scale determined by the seasonal forcing , .on the other hand , the relaxation toward equilibrium after a switching of seasons involves a relaxation time . the latter measures the time required for the mouse colony to relax to the equilibrium state associated with after having been driven during the previous season by the conditions , that is , where .the _ internal _ time scale is defined as the fastest relaxation process , i.e. , $ ] .`` fast '' or `` slow '' seasonal alternation then refers to the comparison between these two time scales . if the mouse population has enough time to accommodate to the new conditions from season to season and relax to equilibrium .moreover , since we have imposed the condition that the resources at any time of the year are below the critical thresholds , there will be no infected mice . 
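Because the displayed equations of secs. [sec2]-[sec3] were lost in extraction, the sketch below reconstructs them in the standard Abramson-Kenkre form that matches the verbal description (births at rate b, deaths at rate c, competition through M/K, transmission at rate a, all mice born susceptible, no recovery) and integrates the square-wave switched system numerically. All parameter values, function names and the choice of integrator are illustrative assumptions, not taken from the paper; the sketch is only meant to reproduce the qualitative slow-versus-fast behaviour described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative seasonal parameter sets (b, c, a, K).  Values are chosen here only to
# satisfy the conditions stated in the text (b2 > b1, c2 > c1, a2 > a1, K1 > K2, and
# K_i < K_ci = b_i / (a_i (b_i - c_i)) in *both* seasons); they are not the values
# quoted in eqs. ([parameters1])-([parameters2]).
P1 = dict(b=1.0, c=0.5, a=0.1, K=18.0)   # "rainy" season, K_c1 = 20
P2 = dict(b=2.0, c=1.9, a=0.9, K=9.0)    # "dry" season,   K_c2 ~ 22.2

def rhs(t, y, T_half):
    """Switched Abramson-Kenkre-type dynamics: square-wave alternation of seasons."""
    p = P1 if (t % (2.0 * T_half)) < T_half else P2
    S, I = y                      # susceptible and infected densities
    M = S + I                     # total population
    dS = p["b"] * M - p["c"] * S - S * M / p["K"] - p["a"] * S * I
    dI = -p["c"] * I - I * M / p["K"] + p["a"] * S * I
    return [dS, dI]

def final_state(T_half, t_end, y0=(5.0, 1.0)):
    sol = solve_ivp(rhs, (0.0, t_end), y0, args=(T_half,),
                    max_step=T_half / 4.0, rtol=1e-8)
    return sol.y[:, -1]

for T_half, t_end in [(50.0, 4000.0), (0.05, 200.0)]:   # slow vs. fast alternation
    S, I = final_state(T_half, t_end)
    print(f"season duration {T_half:6.2f}:  S = {S:6.2f}, I = {I:6.3f}")
# Expected behaviour: the infected population dies out for slow alternation but
# persists for fast alternation, even though K_i < K_ci holds in both seasons.
```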
in the other limit , , seasonal changes occur too fast , the relaxation process is interrupted , and no equilibrium can be reached from season to season .then note that an adiabatic elimination can be implemented , and in eq .( [ alternation ] ) can be replaced by its average value , .therefore , in the limit of fast season alternation the system is driven by the set of averaged values , and the critical carrying capacity is given by .as a consequence , it is possible to find regions of parameters where is smaller than the _ effective _ value of the carrying capacity associated with the averaged values : ^{-1}=\left[\frac{1}{2}\left(\frac{1}{k_{1}}+\frac{1}{k_{2}}\right)\right]^{-1}=\frac{2k_{1}k_{2}}{k_{1}+k_{2}},\ ] ] and the infection propagates .general conditions leading to this behavior can be posed , but the expressions are rather cumbersome .we prefer , for the sake of simplicity , to show a particular typical case .we use the following values for the parameters : that lead to and respectively .the dynamics are completely determined once the value of the carrying capacity during each season is specified . according to the previous discussion, these parameters can be chosen such that the following conditions hold : ^{-1}>k_{c+ } , \quad k_{1}>k_{2 } , \quad k_1<\min(k_{c1},k_{c2}).\ ] ] these conditions lead to the points that fulfill the seasonal requirements given by eq .( [ conditions ] ) , so that slow alternation of seasons leads to infection - free states while fast alternation leads to hanta outbreaks .this region is plotted in fig .notice in particular that the point lies inside the region and that . in the next sectionwe illustrate the seasonality - induced propagation of the disease for this particular point .so far we have determined that outbreaks of hanta induced entirely by seasonal changes can occur if the duration of the seasons are short enough .now we establish the meaning of `` short enough '' quantitatively .since is strictly smaller than the effective value of the carrying capacity , there should be a finite value of such that for any the population of infected mice is greater than zero , but for periods above this critical period the infected population goes to zero . in order to obtain the value of the critical period we need to solve the system of equations ( [ model1 ] ) and ( [ model2 ] ) . in spite of its nonlinearitiesthe system can be solved analytically by means of a reciprocal transformation and the following exact solution is obtained : where and are the initial conditions for and respectively , and the following definitions have been introduced , because the external forcing due to the alternation of seasons is periodic , floquet theory ensures the existence of a periodic solution .the values of and compatible with the non - equilibrium periodic solution can be obtained by evolving the system during the first half of a period under dynamics and the second half under dynamics , and forcing periodity on the solutions after a whole period of evolution , that is , in order to close the system in the non - equilibrium steady state and , the values of and that solve that system of equations ( [ algebra1 ] ) and ( [ algebra2 ] ) must be then re - introduced in eqs .( [ solution1 ] ) and ( [ solution2 ] ) .the critical period is then the largest value of satisfying the condition we illustrate the procedure with the example mentioned above where the parameters are given by eqs .( [ parameters1 ] ) and ( [ parameters2 ] ) , and with and . 
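For reference, the fast-alternation criterion and the definition of the critical period discussed in this passage can be written out explicitly. The displays below are a reconstruction (the originals were garbled in extraction); the expression for the seasonal critical capacities K_{c,i} is inferred from the standard model of sec. [sec2], and m_I denotes the infected population density (notation introduced here), so both should be checked against the source.

```latex
% Adiabatic (fast-switching) limit: the system is driven by the time-averaged
% parameters; since the carrying capacity enters through 1/K, square-wave averaging
% yields a harmonic mean,
\langle x\rangle = \tfrac{1}{2}\,(x_1 + x_2)\quad (x = a, b, c),
\qquad
K_{\mathrm{eff}} = \left[\tfrac{1}{2}\!\left(\tfrac{1}{K_1}+\tfrac{1}{K_2}\right)\right]^{-1}
                 = \frac{2K_1K_2}{K_1+K_2}.

% Seasonality-induced outbreak: with K_{c,i} = b_i/[a_i(b_i-c_i)] the critical capacity
% of season i, the conditions used in the example read
K_1 < \min(K_{c1},K_{c2}), \qquad K_1 > K_2, \qquad
K_{\mathrm{eff}} > K_{c+} \equiv
\frac{\langle b\rangle}{\langle a\rangle\,(\langle b\rangle-\langle c\rangle)},

% so the infection dies out in either season alone but spreads under fast alternation.
% The critical period is then the largest T for which the periodic (Floquet) steady
% state retains a non-vanishing infected population:
T_c = \sup\{\,T : m_I(t;T) > 0 \ \text{on the periodic solution}\,\}.
```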
the results are shown in figs .[ fig3 ] and [ fig4 ] .the values of and as a function of the period of the seasons are depicted in fig .[ fig3 ] , where the populations of the susceptible and infected mice at the end of each season are given . as seen in that figure ,the value of the critical period is .note that if the alternation is slow , , all mice grow healthy . on the other hand , if the alternation is faster than the relaxation time required by the colony to accommodate its population from season to season , , the virus spreads and .notice that in the limit the dynamics is driven by and the populations of susceptible and infected mice are given by eq .( [ equilibrio2 ] ) with , , , and .let us stress again that the carrying capacity is below its critical threshold at any time . in fig .[ fig4 ] we plot , for different period lengths , the exact solutions and as a function of time through one period of evolution .the first semi - period corresponds to the rainy season and the second to the dry season . when seasons last long ( left panel ) , there are no infected mice and the susceptible population simply oscillates between the two equilibrium states given by eq .( [ equilibrio1 ] ) . for sufficiently short seasons ( right panel ) , there is propagation of the disease and the values of and fluctuate around the equilibrium points determined by eq .( [ equilibrio2 ] ) and the set of parameters . finally ,when the period of the seasons is near , but below , the critical period ( central panel ) , the infected population is small and the population oscillates in a more pronounced manner .by introducing seasonality in a paradigmatic model for hantavirus propagation in mice colonies , we have shown that the alternation of seasons may cause outbreaks of the disease .the striking feature of that behavior lays in the fact that neither season satisfies the conditions for the infection to spread in terms of the availability of resources .the mechanism responsible for the phenomenon is the competition between two time scales : an external one , the duration of a season , and an internal one , the relaxation time for the mouse colony to equilibrate its population from season to season .we have shown that if the duration of the seasons is longer than the relaxation time , no propagation of hantavirus occurs .on the other hand , if the relaxation process is interrupted by a fast seasonal alternation , the disease spreads .we have analyzed the general conditions for which the phenomenon occurrs .moreover , we have illustrated the mechanism with a particular example that can be solved exactly .this work may help to clarify the reported relation between climate and propagation of hanta in mice populations . however , to elucidate whether the proposed phenomenon actually takes place in nature we depend on data that unfortunately are not available in the literature .one can envision further modifications of the model that may improve its features , such as , for example , the inclusion of spatial dependence or of noisy contributions to the dynamics .finally , we stress that the general idea underlying the mechanism is model - insensitive and can therefore be extended to other systems where seasonality plays a relevant role .work along these directions is in progress .the authors gratefully acknowledge fruitful comments from and discussions with v. m. kenkre during the elaboration of this work . c. 
escudero is grateful to the department of chemistry and biochemistry of the university of california , san diego for its hospitality . this work has been partially supported by the national science foundation under grant phy-9970699 , by the ministerio de educacin y cultura ( spain ) through grants no .ap2001 - 2598 and ex2001 - 02880680 , and by the ministerio de ciencia y tecnologa ( spain ) , projectbfm2001 - 0291 .99 j. n. mills , t. l. yales , t. g. ksiazek , c. j. peters and j. e. childs , emerg .dis . * 5 * , 95 ( 1999 ) .d. m. engelthaler , d. g. mosley , j. e. cheek , c. e.levy , k. k. komatsu , p. ettestad , t. davis , d. t. tanda , l. miller , j. w. frampton , r. porter and r. t. bryan , ibid . * 5 * , 87 ( 1999 ) .j. n. mills , t. g. ksiazek , c. j. peters and j. e. childs , ibid . * 5 * , 135 ( 1999 ) .a. j. kuenzi , m. l. morrison , d. e. swann , p. c. hardy and g. t. downard , ibid . * 5 * , 113 ( 1999 ) .k. d. abbott , t. g. ksiazek and j. n. mills , ibid .* 5 * , 102 ( 1999 ) . c. h. calisher , w. sweeney , j. n. mills and b. j. beaty , ibid .* 5 * , 126 ( 1999 ) . v. w. kenkre , _ memory formalism , nonlinear techniques , and kinetic equation approaches _ , in _ modern challenges in statistical mechanics : patterns , noise , and the interplay of nonlinearity and complexity _ , aip conference proceedings * 658 * , 2003 . | using a model for rodent population dynamics , we study outbreaks of hantavirus infection induced by the alternation of seasons . neither season by itself satisfies the environmental requirements for propagation of the disease . this result can be explained in terms of the seasonal interruption of the relaxation process of the mouse population toward equilibrium , and may shed light on the reported connection between climate variations and outbreaks of the disease . |
wireless body area networks ( wbans ) represent the next generation of personal area networks .the elementary components of a typical wban are sensors and a gateway device , which is alternatively known as a hub .a traditional wban has a centralized topology , which is coordinated by the central hub .it has a round - robin direct data exchange behavior between each sensor and gateway node .this kind of system is widely used for patients monitoring and athletes training .there were approximately 11 million active units around the word in 2009 , and this number is predicted to reach 420 million by 2014 .the pervasive use of wban thus increases the need for good coexistence between multiple wbans .imagine patients wearing such a system in a medical centre , the number of wbans closely located is large in some periods of the day and they can move rapidly with respect to each other .due to wbans nature of high mobility and potential large density , it is generally not feasible to have a global coordinator in such a circumstance .therefore , this leads to a challenging issue - interference between multiple closely located wbans .interference among wbans is a major critical issue that can cause performance degradation and hence is a threat to reliable operation of any wban .a proposed method that will be investigated in this paper is to use two - hop cooperative communication with an opportunistic relaying scheme .recently , two - hop cooperative communications , which is included as an option in the ieee 802.15.6 standard , has been proved to overcome typical significant path loss experienced in single - link star topology wban communication .several such cooperative communication schemes have been investigated for wbans using either narrow - band or ultra - wideband .its effectiveness in interference mitigation has also been studied using realistic on - body channel data and a simulated model of inter - wban channel data .it has been shown in that the use of cooperative communication in any wban - of - interest is able to significantly mitigate interference by providing an up - to 12 db improvement in sinr outage probability . in this paper , the work is extended from , some similarities are : 1 .intra- and inter - wban access scheme : time division multiple access(tdma ) across multiple wbans is employed as well as within the intra - wban access scheme since it provides better interference mitigation with respect to power consumption and channel quality ; 2 .three - branch cooperative communication : a wban system in this paper uses two relay nodes to provide extra diversity gain at the receiver .three branches are used with one direct link from sensor to gateway node and two additional links via two relays ; 3 .intra - wban channel model : extensive on - body channel gain measurements are adopted as the inter - wban channel model for the analysis by simulation .there are also some major differences in our simulation setup : 1 .diversity combining scheme : instead of a three - branch selection combining ( sc ) scheme at the gateway node , an opportunistic relaying ( or ) method is simulated . 
or reduces complexity when compared with sc by adopting the concept that only a single relay with the best network path towards the destination forwards a packet per hop ; 2 .inter - wban channel model : a major contribution in this paper is that performance of cooperative wbans using opportunistic relaying with no cooperation between wbans is investigated with real inter - body channel gain measurements . in a multi - wbans co -existence scenario , a wban s performance is more interference limited .therefore , the effectiveness of the proposed scheme is analysed based on signal - to - interference - and - noise ratio ( sinr ) .since the duration of outage is more important than the probability of outage in analyzing the performances of a communication system with multiple co - channel interferers , then here our analysis shows both the first order statistics of outage probability and second order statistics of level crossing rate ( lcr ) for sinr .statistics are compared with respect to a traditional star topology wban with the same channel data employed .the model of a wban system used in the simulation consists of one hub ( gateway node ) , two relay nodes and three sensor nodes , which are organized in a star topology . in order to cooperatively use measured channel data , the hub and two relaysare placed separately at one of three locations : chest , left and right hips .for example , fig .[ fig : wban configuration ] shows the situation where hub is at chest and relays are at left and right hips respectively . in terms of sensor devices , they are located at any of the rx places listed in table [ node location ] . within each wban ,sensor nodes are coordinated by the hub using a time division multiple access ( tdma ) scheme .therefore , as soon as the hub sends out a beacon signal to all connected sensors , they respond by transmitting required information back to hub in a pre - defined sequence during their allocated time slots . during the transmission period ,one of the relays assists transmission by providing another copy of the signal to the hub .the choice of relay is based on an opportunistic relaying scheme that will be explained in a later section . in this paper, it is assumed that only one information packet is sent from each sensor during its allocated time slot .after completion of transmission from all sensor nodes , the system becomes idle until the next beacon period . for best co - channel interference mitigation and for reducing power consumption , tdma is adopted as the co - channel access scheme across wbans , as well as being the access scheme within any wban .assume there are total wbans co - located ( or in close proximity ) and the number is fixed during the period of simulation , then the shared channel can be evenly divided into time slots with a length of .therefore , each wban goes into idle status for a length of , which is also denoted as , after it completes transmission .however , due to lack of a global coordinator among multiple wbans , the execution of the tdma scheme used here is slightly different from the traditional implementation of tdma . here , in our tdma schemethe coordinator in each wban chooses the transmission time of every superframe randomly , following a uniform random distribution over $ ] . 
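The sketch below illustrates the uncoordinated inter-WBAN TDMA behaviour just described: each network is active for one slot of length T_s per superframe of length N*T_s and idle for (N-1)*T_s, but with no global coordinator each hub re-draws its transmission start every superframe. The support of that random draw was stripped in extraction, so drawing uniformly over [0, (N-1)*T_s] is an assumption made purely for illustration, and the numerical values are placeholders rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 2               # co-located WBANs (two subjects, as in the paper's simulations)
T_s = 5e-3          # active slot duration [s]; illustrative value
n_frames = 100_000

def overlap(a0, b0, length):
    """Overlap duration of two equal-length intervals starting at a0 and b0."""
    return max(0.0, length - abs(a0 - b0))

hit, frac = 0, 0.0
for _ in range(n_frames):
    # each coordinator independently draws its start time (assumed distribution);
    # superframe wrap-around is ignored in this toy calculation
    starts = rng.uniform(0.0, (N - 1) * T_s, size=N)
    worst = max(overlap(starts[0], s, T_s) for s in starts[1:])
    hit += worst > 0.0
    frac += worst / T_s

print(f"P(some inter-WBAN overlap)      : {hit / n_frames:.3f}")
print(f"mean overlapped share of a slot : {frac / n_frames:.3f}")
# With only two uncoordinated networks some overlap is almost unavoidable under this
# assumption (P ~ 1, roughly 2/3 of the active slot on average), which is exactly why
# the paper relies on two-hop opportunistic relaying rather than scheduling to
# mitigate the residual interference.
```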
in this paper , two wbans employing this same configurationare used in analysis .the extensive on - body wban channel data was measured with small wearable channel sounders operating at 2.36 ghz over several hours of normal everyday activity of an adult subject .this process was repeated several times with devices on different experiment subjects .the measuring system on a single subject consisted of three transceivers and 7 receivers .their locations are listed in table [ node location ] . according to the experiment setupas shown in table [ node location ] , samples were taken over a period of two hours . during this process , three transceivers broadcasted in turn at 0 dbm in a round - robin fashion , with each one occupying the channel for 5 ms .hence , each transceiver transmits every 15 ms .while one was transmitting , the remaining channel sounders recorded the received signal strength indictor ( rssi ) if a packet was successfully detected . .tx/ rx radio locations , x indicates a channel measurement .lh - left hip , rh - right hip , c - chest , hd - head , rw - right wrist , lw - left wrist , lar - upper left arm , la - left ankle , ra - right ankle , b - back [ cols="^,^,^,^,^,^",options="header " , ] [ simulation combination ] outage probabilities at given sinr thresholds are defined as the probability of sinr value being smaller than a given threshold the performance of single link communication and two - hop cooperative communication schemes are compared with respect to sinr threshold at 1% and 10% outage probability . here, 10% outage probability corresponds to a guideline for 10% maximum packet error rates in the ieee 802.15.6 ban standard .[ fig : outage probability ] shows the outage probability for situations when different subjects are treated as the person - of - interest .note that in fig .[ fig : subjectone_outage ] , the relevance of choosing a starting sample index is explained in the next paragraph . for subject one, the cooperative communication scheme provides about db improvement over traditional single link communications at 10% outage probability . at 1% outage probability ,the improvement is even more significant , there is , in fact , more than 15 db improvement .in contrast , simulation on subject two shows similar results , with 4 db and 6 db improvement at 10% and 1% outage probabilities respectively . however , while running simulations using a different starting channel sample index , it is found that channel stability has a significant impact on the performance of the cooperative communication scheme .[ fig : channel gain plot ] is a typical on - body channel gain plot for subject one .it is clear that there is a significant change in operating environment at the point where the red line is placed in fig .[ fig : channel gain plot ] .subject one s on - body channels become more unstable after that point , i.e. channel coherence time decreases rapidly .hence , simulation is also performed with channel sample taken from , and result is plotted in fig .[ fig : outage probability of subject one first half ] . compared to fig .[ fig : subjectone_outage ] where channel sample is taken from , cooperative communication scheme provides little sinr improvements at both 10% and 1% outage probabilities in this situation . 
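As a concrete illustration of the two statistics used in the evaluation - the outage probability defined below and the level crossing rate defined in the following passage - the sketch estimates both from a time series of per-packet SINR values. The synthetic `sinr_db` trace, the 15 ms sampling assumption and the choice of counting downward crossings are illustrative assumptions: the paper's wording does not fix the crossing direction, and real measured channel gains would replace the placeholder data.

```python
import numpy as np

# `sinr_db` is a synthetic stand-in for the per-packet SINR of the WBAN-of-interest
# (the paper derives it from measured on-body and inter-body channel gains); one
# sample per 15 ms mirrors the round-robin measurement setup described above.
rng = np.random.default_rng(0)
fs = 1.0 / 15e-3                                     # samples per second
sinr_db = 10.0 + 8.0 * rng.standard_normal(20_000)   # placeholder data, not real channels

def outage_probability(sinr_db, threshold_db):
    """Empirical P(SINR < threshold): fraction of packets below the given threshold."""
    return float(np.mean(np.asarray(sinr_db) < threshold_db))

def level_crossing_rate(sinr_db, threshold_db, fs):
    """Threshold crossings per second (counted here as downward crossings),
    i.e. the number of crossings divided by the total observation time."""
    x = np.asarray(sinr_db)
    down = (x[:-1] >= threshold_db) & (x[1:] < threshold_db)
    return float(np.count_nonzero(down)) * fs / len(x)

for thr in (0.0, 5.0, 10.0):
    print(f"threshold {thr:4.1f} dB:  P_out = {outage_probability(sinr_db, thr):.3f}, "
          f"LCR = {level_crossing_rate(sinr_db, thr, fs):.2f} Hz")
```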
as a result, two - hop cooperative communication schemes can provide even more improvement of the system performance when the environment in which the wban / s are operating changes rapidly .level crossing rate(lcr ) of time - varying sinr is defined as the average frequency of a received packet s sinr value going below a given threshold in the positive direction .assume there are a total of crossings at threshold , then the corresponding lcr value is calculated as where is the time between the and crossing .[ fig : lcr of subject one second half ] and [ fig : lcr of subject two ] show that cooperative two - hop communications are able to reduce level crossing rate at low sinr threshold values significantly . at lcr of 1 hz, the sinr threshold value for subject one rises by an average of 6 db as shown in fig .[ fig : lcr of subject one second half ] with the use of cooperative communications . for subject two , the improvement , as shown in fig .[ fig : lcr of subject two ] , is about 4 db .system performance remains similar at high sinr threshold values .in addition , it can similarly be observed from fig .[ fig : lcr of subject one first half ] in terms of outage probability , when coherence time of the on - body channels in the wban - of - interest is large ( i.e. , channels are more stable ) . in the case of fig .[ fig : lcr of subject one first half ] , the lcr curves overlap most of the time for single - hop communications and cooperative communications schemes . therefore , in such a case , there is no real performance advantage in terms of level crossing rate to use our proposed cooperative communication scheme .in this paper , a three - branch opportunistic relaying scheme was investigated in a wban - of - interest under the circumstance where multiple wbans co - exist non - cooperatively .tdma was employed , as a suitable inter - network , as well as intra - network , multiple access scheme .empirical inter - wban and intra - wban channel gain measurements were adopted to enable simulation of a practical working environment of a typical wban .performance was evaluated based on outage probability and level crossing rate of received packets sinr values . it has been found that opportunistic relaying can provide an average of 5 db improvement at an sinr outage probability of 10% .it also reduces the level crossing rate significantly at low sinr threshold values .however , the performance of opportunistic relaying relies on the quality of the on - body channel greatly .opportunistic relaying is particularly more useful , than single - link star topology communications , in a rapidly changing environment .d. smith and d. miniutti , `` cooperative body - area - communications : first and second - order statistics with decode - and - forward , '' in _ wireless communications and networking conference ( wcnc ) , 2012 ieee _ , paris , france , apr .r. derrico , r. rosini , and m. maman , `` a performance evaluation of cooperative schemes for on - body area networks based on measured time - variant channels , '' in _ communications ( icc ) , 2011 ieee international conference on _ , june 2011 , pp .p. ferrand , m. maman , c. goursaud , j .-m . gorce , and l. ouvry , `` performance evaluation of direct and cooperative transmissions in body area networks , '' _ annals of telecommunications _ , vol .66 , pp . 213228 , 2011 .y. chen _et al . 
_ ,`` cooperative communications in ultra - wideband wireless body area networks : channel modeling and system diversity analysis , '' _ selected areas in communications , ieee journal on _ , vol .27 , no . 1 ,5 16 , jan . 2009 .j. dong and d. smith , `` cooperative body - area - communications : enhancing coexistence without coordination between networks , '' in _ personal , indoor and mobile radio communication ( pimrc ) , 2012 ieee _ , sydney , australia , september .a. zhang , d. smith , d. miniutti , l. hanlen , d. rodda , and b. gilbert , `` performance of piconet co - existence schemes in wireless body area networks , '' in _ wireless communications and networking conference ( wcnc ) , 2010 ieee _ , sydney , australia , apr .2010 , pp . 1 6 .r. annavajjala and j. zhang , `` level crossing rates and average outage durations of sinr with multiple co - channel interferers , '' in _ military communications conference , 2010 - milcom 2010 _ , 31 2010-nov . 3 2010 , pp .1233 1238 .d. smith , l. hanlen , and d. miniutti , `` transmit power control for wireless body area networks using novel channel prediction , '' in _ ieee wireless communications and networking conference ( wcnc _ , paris / france , april 2012 . | in this paper , a cooperative two - hop communication scheme , together with opportunistic relaying ( or ) , is applied within a mobile wireless body area network ( wban ) . its effectiveness in interference mitigation is investigated in a scenario where there are multiple closely - located networks . due to a typical wban s nature , no coordination is used among different wbans . a suitable time - division - multiple - access ( tdma ) is adopted as both an intra - network and also an inter - network access scheme . extensive on - body and off - body channel gain measurements are employed to gauge performance , which are overlaid to simulate a realistic wban working environment . it is found that opportunistic relaying is able to improve the signal - to - interference - and - noise ratio ( sinr ) threshold value at outage probability of 10% by an average of 5 db , and it is also shown that it can reduce level crossing rate ( lcr ) significantly at a low sinr threshold value . furthermore , this scheme is more efficient when on - body channels fade less slowly . |
providing an explicit definition of time is an extremely difficult endeavor , although it does seem to be intimately related to change , an idea reflected in aristotle s famous metaphor : _ time is the moving image of eternity _ . in fact , one may encounter many reflections and philosophical considerations on time over the ages , culminating in newton s notion of absolute time .newton stated that time flowed at the same rate for all observers in the universe .but , in 1905 , einstein changed altogether our notion of time .time flowed at different rates for different observers , and minkowski , three years later , formally united the parameters of time and space , giving rise to the notion of a four - dimensional entity , spacetime . adopting a pragmatic point of view , this assumption seems reasonable , as to measure time a changing configuration of matter is needed , i.e. , a swinging pendulum , etc .change seems to be imperative to have an emergent notion of time .therefore , time is empirically related to change . but change can be considered as a variation or sequence of occurrences .thus , intuitively , a sequence of successive occurrences provides us with a notion of something that flows , i.e. , it provides us with the notion of _ time_. time flows and everything relentlessly moves along this stream . in relativity , we can substitute the above empirical notion of a sequence of occurrences by a sequence of _we idealize the concept of an event to become a point in space and an instant in time . following this reasoning of thought , a sequence of events has a determined _ temporal order_. we experimentally verify that specific events occur before others and not vice - versa .certain events ( effects ) are triggered off by others ( causes ) , providing us with the notion of _causality_. thus , the conceptual definition and understanding of time , both quantitatively and qualitatively is of the utmost difficulty and importance .special relativity provides us with important quantitative elucidations of the fundamental processes related to time dilation effects .the general theory of relativity ( gtr ) provides a deep analysis to effects of time flow in the presence of strong and weak gravitational fields . as timeis incorporated into the proper structure of the fabric of spacetime , it is interesting to note that gtr is contaminated with non - trivial geometries which generate _ closed timelike curves _a closed timelike curve ( ctc ) allows time travel , in the sense that an observer which travels on a trajectory in spacetime along this curve , returns to an event which coincides with the departure .the arrow of time leads forward , as measured locally by the observer , but globally he / she may return to an event in the past .this fact apparently violates causality , opening pandora s box and producing time travel paradoxes , throwing a veil over our understanding of the fundamental nature of time .the notion of causality is fundamental in the construction of physical theories , therefore time travel and it s associated paradoxes have to be treated with great caution .the paradoxes fall into two broad groups , namely the _ consistency paradoxes _ and the _ causal loops_. the consistency paradoxes include the classical grandfather paradox .imagine traveling into the past and meeting one s grandfather . 
nurturing homicidal tendencies , the time traveler murders his grandfather , impeding the birth of his father , therefore making his own birth impossible .in fact , there are many versions of the grandfather paradox , limited only by one s imagination .the consistency paradoxes occur whenever possibilities of changing events in the past arise .the paradoxes associated to causal loops are related to self - existing information or objects , trapped in spacetime .imagine a time traveler going back to his past , handing his younger self a manual for the construction of a time machine .the younger version then constructs the time machine over the years , and eventually goes back to the past to give the manual to his younger self .the time machine exists in the future because it was constructed in the past by the younger version of the time traveler .the construction of the time machine was possible because the manual was received from the future .both parts considered by themselves are consistent , and the paradox appears when considered as a whole .one is liable to ask , what is the origin of the manual , for it apparently surges out of nowhere .there is a manual never created , nevertheless existing in spacetime , although there are no causality violations .an interesting variety of these causal loops was explored by gott and li , where they analyzed the idea of whether there is anything in the laws of physics that would prevent the universe from creating itself .thus , tracing backwards in time through the original inflationary state a region of ctcs may be encountered , giving _ no _ first - cause .a great variety of solutions to the einstein field equations ( efes ) containing ctcs exist , but , two particularly notorious features seem to stand out .solutions with a tipping over of the light cones due to a rotation about a cylindrically symmetric axis ; and solutions that violate the energy conditions of gtr , which are fundamental in the singularity theorems and theorems of classical black hole thermodynamics .a great deal of attention has also been paid to the quantum aspects of closed timelike curves . throughout this paper , we use the notation .it is interesting to note that the tipping over of light cones seems to be a generic feature of some solutions with a rotating cylindrical symmetry .the general metric for a stationary , axisymmetric solution with rotation is given by where is the distance along the axis of rotation ; is the angular coordinate ; is the radial coordinate ; and is the temporal coordinate .the metric components are only functions of the radial coordinate .note that the determinant , , is lorentzian provided that . due to the periodic nature of the angular coordinate , , an azimuthal curve with is a closed curve of invariant length .if is negative then the integral curve with fixed is a ctc . if , then the azimuthal curve is a closed null curve .now , consider a null azimuthal curve , not necessarily a geodesic nor closed , in the plane with fixed .the null condition , , implies with . solving the quadratic , we have due to the lorentzian signature constraint , , the roots are real .if then the light cones are tipped over sufficiently far to permit a trip to the past . 
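Since the displayed metric and the subsequent light-cone algebra were lost in extraction, one consistent reconstruction is given below; the function names F, H, L, G and the sign conventions are choices made here, following a common textbook presentation, rather than recovered from the original.

```latex
% Stationary, axisymmetric metric with rotation, all components functions of r only:
ds^{2} = -F(r)\,dt^{2} + 2H(r)\,dt\,d\varphi + L(r)\,d\varphi^{2}
         + G(r)\left(dr^{2} + dz^{2}\right),
\qquad F L + H^{2} > 0 \ \ \text{(Lorentzian signature)} .

% An azimuthal curve (t, r, z fixed) is closed, with invariant length s^2 = 4\pi^2 L(r);
% it is a closed timelike curve wherever L(r) < 0 and a closed null curve where L(r) = 0.

% For a null azimuthal curve at fixed r and z, ds^2 = 0 gives
-F\left(\frac{dt}{d\varphi}\right)^{2} + 2H\,\frac{dt}{d\varphi} + L = 0
\quad\Longrightarrow\quad
\left.\frac{dt}{d\varphi}\right|_{\pm} = \frac{H \pm \sqrt{H^{2} + F L}}{F},

% with both roots real precisely because of the signature condition F L + H^2 > 0.
```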
by going once around the azimuthal direction ,the total backward time - jump for a null curve is if for even a single value of , the chronology - violation region covers the entire spacetime .thus , the tilting of light cones are generic features of spacetimes which contain ctcs , as depicted in fig . [ tipcones ] .the present section is far from making an exhaustive search of all the efe solutions generating ctcs with these features , but the best known spacetimes will be briefly analyzed , namely , the van stockum spacetime , the gdel universe , the spinning cosmic strings and the gott two - string time machine , which is a variation on the theme of the spinning cosmic string . the earliest solution to the efes containing ctcs , is probably that of the van stockum spacetime , which describes a stationary , cylindrically symmetric solution of a rapidly rotating infinite cylinder of dust , surrounded by vacuum .the centrifugal forces of the dust are balanced by the gravitational attraction .the metric , assuming the respective symmetries , takes the form of eq .( [ stationarymetric ] ) , and is required to be timelike at .the coordinates have the following domain the metric for the interior solution , where is the surface of the cylinder , is given by where is the angular velocity of the cylinder .it is immediate to verify that ctcs arise if , i.e. , for the azimuthal curves with fixed are ctcs .the condition is imposed .the causality violation region could be eliminated by requiring that boundary of the cylinder to be at .the interior solution would then be joined to an exterior solution , which would be causally well - behaved .the resulting upper bound to the `` velocity '' would be , although the orbits of the particles creating the field are timelike for all . applying the efe ,the energy density and -velocity of the dust are given by respectively .the coordinate system co - rotates with the dust .the source is simply positive density dust , implying that all of the energy condition are satisfied .van stockum developed a procedure which generates an exterior solution for all .consider the following range : 1 . .+ the exterior solution is given by the following functions with 2 . .,\\ & & m(r)=(r/2 ) \left [ 1+\ln \left(r / r \right ) \right ] , % \qquad % f(r)=(r / r ) \left [ 1-\ln \left(r / r \right ) \right ] \,.\end{aligned}\ ] ] 3 . . with as in the interior solution , , so that the metric signature is lorentzian for .one may show that the causality violation is avoided for , but in the region , ctcs appear .the causality violations arise from the sinusoidal factors of the metric components .the first zero of occurs at \,.\ ] ] thus , causality violation occur in the matter - free space surrounding a rapidly rotating infinite cylinder , as shown in figure [ fig : stockum ] .the van stockum spacetime is not asymptotically flat .but , the gravitational potential of the cylinder s newtonian analog also diverges at radial infinity . shrinking the cylinder down to a `` ring '' singularity ,one ends up with the kerr solution , which also has ctcs ( the causal structure of the kerr spacetime has been extensively analyzed by de felice and collaborators ) . 
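For concreteness, the stripped interior van Stockum displays can be filled in with the standard textbook form of that solution, written in the (F, H, L, G) convention of the previous sketch. This is a reconstruction, and the quoted dust density in particular should be checked against the original source.

```latex
% Interior (0 <= r <= R) van Stockum solution in comoving coordinates:
ds^{2} = -dt^{2} + 2\,\omega r^{2}\,dt\,d\varphi
         + r^{2}\left(1 - \omega^{2} r^{2}\right) d\varphi^{2}
         + e^{-\omega^{2} r^{2}}\left(dr^{2} + dz^{2}\right),

% so that F = 1, H = \omega r^{2}, L = r^{2}(1 - \omega^{2} r^{2}) and
% F L + H^{2} = r^{2} > 0 everywhere, while
L(r) < 0 \quad\Longleftrightarrow\quad \omega r > 1 ,

% i.e. the azimuthal circles are closed timelike curves for r > 1/\omega; the interior
% is causally well behaved only if the boundary satisfies \omega R \le 1 (the exterior
% analysis above further restricts \omega R, as described in the text).
% The source is co-rotating dust, u^{\mu} = \delta^{\mu}_{\ t}, with density as usually
% quoted for this solution (geometrized units):
8\pi\rho = 4\,\omega^{2}\, e^{\omega^{2} r^{2}} .
```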
in summary ,the van stockum solution contains ctc provided .the chronology - violating region covers the entire spacetime .reactions to the van stockum solution is that it is unphysical , as it applies to an infinitely long cylinder and it is not asymptotically flat .kurt gdel in discovered an exact solution to the efes of a uniformly rotating universe containing dust and a nonzero cosmological constant .the total energy - momentum is given by however , the latter may be expressed in terms of a perfect fluid , with rotation , energy density and pressure , in a universe with a zero cosmological constant , i.e. , with the following definitions the manifold is and the metric of the gdel solution is provided by the four - velocity and the vorticity of the fluid are , and , respectively .the einstein field equations provide the following stress - energy scenario : thus , the null , weak and dominant energy conditions are satisfied , while the dominant energy condition is in the imminence of being violated .note that the metric ( [ godelmetric ] ) is the direct sum of the metrics and .the metric is given by with the manifold defined by the coordinates .the metric is given by , with the manifold , defined by the coordinate . to analyze the causal properties of the solution, it is sufficient to consider .consider a set of alternative coordinates in , in which the rotational symmetry of the solution , around the axis , is manifest and suppressing the irrelevant coordinate , defined by &= & e^{-2r}\,\tan(\phi/2 ) \nonumber \,,\end{aligned}\ ] ] so that the metric ( [ godelmetric1 ] ) takes the form \,.\ ] ] moving away from the axis , the light cones open out and tilt in the -direction .the azimuthal curves with are ctcs if the condition is satisfied .it is interesting to note that in the gdel spacetime , closed timelike curves are not geodesics .however , novello and rebouas discovered a new generalized solution of the gdel metric , of a shear - free nonexpanding rotating fluid , in which successive concentric causal and noncausal regions exist , with closed timelike curves which are geodesics .a complete study of geodesic motion in gdel s universe , using the method of the effective potential was further explored by novello _et al _ .much interest has been aroused in time travel in the gdel spacetime , from which we may mention the analysis of the geodesical and non - geodesical motions considered by pfarr and malament .the string spacetime is assumed to be static and cylindrically symmetric , with the string lying along the axis of symmetry .the most general static , cylindrically symmetric metric has the form where , and are functions of . and are identified .suppose that the string has a uniform density , out to some cylindrical radius .the end results will prove to be independent of , so that the string s transverse dimensions may be reduced to zero , yielding an unambiguous exact exterior metric for the string .the stress - energy tensor of the string , in an orthonormal frame , is given by and all the other components are equal to zero , for . 
the resulting efes are given by : = 8\pi \,\epsilon \ , , \label{stringefett } \\ & & e^{-2\lambda}\,\left(\lambda'\,\phi'+\nu'\,\lambda'+\lambda'\,\phi ' \right ) = 0 \ , , \label{stringeferr } \\ & & e^{-2\lambda}\,\left[\lambda''+\nu''+(\nu')^2 \right ] = 0 \ , , \label{stringefethetatheta } \\ & & e^{-2\lambda}\,\left[-\lambda'\,\phi'-\nu'\,\lambda'+\phi '' + ( \phi')^2+\nu''+(\nu')^2+\nu'\,\phi ' \right ] = -8\pi\,\epsilon \ , , \label{stringefezz}\end{aligned}\ ] ] where the prime denotes a derivative with respect to .these are non - linear equations for the metric functions , and are easily solved in the case of the uniform density string .conservation of the stress - energy , , yields this implies that through eq .( [ stringefethetatheta ] ) , and are constant , and may be set to zero by an appropriate rescaling of the coordinates , and .equation ( [ stringeferr ] ) is then satisfied automatically and eqs .( [ stringefett ] ) and ( [ stringefezz ] ) become identical , i.e. , substituting , i.e. , , yields where .the metric on the axis will be flat , i.e. , no cone singularity , if and .thus , the interior metric of a uniform - density string is then given by the exterior metric for the string spacetime must be a static , cylindrically symmetric , vacuum solution of the efes .the most general solution , discovered by levi - civita is given \label{extstring}\,,\ ] ] where and are freely chosen constants .the string is lorentz invariant in the -direction .requiring that the metric ( [ extstring ] ) be lorentz invariant in the -direction restricts the values of , namely , and .one may now join the interior and exterior metrics together along the surface of the string at and .the darmois - israel junction conditions require that the intrinsic metrics induced on the junction surface by the interior and exterior metrics be identical , and that the discontinuity in the extrinsic curvature of the surface be related to the surface stress - energy .consider the flat exterior case .the intrinsic metric can then be matched by requiring , and .the latter condition provides calculating the extrinsic curvature tensors and equating them to each other , so as to have no surface stress - energy present , one obtains the following relation combining this with the intrinsic metric constraint , eq .( [ stringintrinsic ] ) , to eliminate , yields the exterior metric of the string is then given by eq .( [ extstring ] ) with , and given by eq .( [ def : a ] ) .the concept of a mass per unit length for a cylindrically symmetric source in general relativity is not unambiguously defined , unlike the case of spherical symmetry . for a static ,cylindrically symmetric spacetime , a useful simple definition is to integrate the energy - density , over the proper volume of the source , i.e. 
, the string .the mass per unit length , or linear energy - density , is given by \,,\ ] ] which , taking into account , reduces to thus , the exact exterior metric is given by which will be used below in the gott cosmic string spacetime .an extremely elegant model of a time - machine was constructed by gott .the gott time - machine is an exact solution of the efe for the general case of two moving straight cosmic strings that do not intersect .this solution produces ctcs even though they do not violate the wec , have no singularities and event horizons , and are not topologically multiply - connected as the wormhole solution ( see below ) .the appearance of ctcs relies solely on the gravitational lens effect and the relativity of simultaneity .we follow the analysis of ref . closely throughout this section .the exterior metric of a straight cosmic string is given by eq .( [ exactextstring ] ) .the geometry of a , section of this solution is that of a cone with an angle deficit in the exterior ( vacuum ) region . applying a new coordinate ,the exterior metric becomes where .the above metric is the metric for minkowski space in cylindrical coordinates where a wedge of angle deficit is missing , and points with coordinates and are identified .now , the static solution for two parallel cosmic strings separated by a distance is constructed in the following manner .consider the metric ( [ exactextstring ] ) , by replacing the angular and radial coordinates , and , respectively by the cartesian coordinates , and .this reduces the metric to , with the following restrictions : and the points with are identified .the -surface , has the metric with zero intrinsic and extrinsic curvature , as it is part of a -dimensional minkowski spacetime .it is thus possible to produce a mirror - image copy of the region , including the interior solution , by joining it along the three - surface .this second solution lies in the region .the two copies obey all the matching conditions along the surface , because the latter is a -dimensional minkowskian spacetime with zero intrinsic and extrinsic curvature .see figure [ gotttm ] for details .plane , width=326 ] consider now two observers and at rest with respect to the cosmic strings with world lines given by respectively .it is possible to prove that observer sees three images of the observer .the central image is from a geodesic passing through the origin , the two outer images , which are displaced from the central image by an angle on each side , represent geodesics that pass through events and .note that and are identified , as are and .considering the following trigonometric relationship it is simple to verify that the value of to minimize is .thus , we have if , and the light beam going through with a gravitational lensing time delay between the two images of .note that if a light beam traversing through can beat a light beam traveling through , then so can a spaceship traveling at a high enough velocity , , relative to the string .the spaceship connects two events in the -dimensional minkowski spacetime with a spacelike separation .let the spaceship begin at and end at , given by the following events respectively .the time for the spaceship to traverse from , through to is .the separation of and is spacelike providing that , which can always be verified for high enough , for .the following step is to give the solution a boost with velocity in the -direction via a lorentz transformation such that and become simultaneous in the laboratory frame .the velocity 
for the simultaneity to occur is .analogously , we give the solution a boost with velocity in the -direction .the two solutions and may still be matched together because the lorentz transformations do not alter the fact that the boundary surface in each solution is still a -dimensional minkowskian spacetime with zero intrinsic and extrinsic curvature .the spaceship goes from through and arrives at , which is simultaneous in the laboratory frame . by symmetry, the spaceship travels in the opposite direction past the oppositely moving string through and arrives back at event , which is also simultaneous with in the laboratory frame .the spaceship has completed a ctc , as it encircles the two parallel cosmic strings as they pass each other in a sense opposite to that of the strings relative motion . in principle , it is also possible to find a reference frame in which the spaceship arrives at before it s departure .the events in the laboratory frame have the following coordinates : and with and since , we have or considering the following approximations , , we have or simply for expected for grand unified cosmic strings , we have in order to produce ctcs . in the laboratory frameit is clear how the ctc is created .the and identifications allow the particle to effectively travel backward in time twice in the laboratory frame .the identifications of and is equivalent to having a complete minkowski spacetime without the missing wedges where instantaneous , tachyonic , travel in the string rest frames between and , and , is possible .it is also interesting to verify whether the ctcs in the gott solution appear at some particular moment , i.e. , when the strings approach each other s neighborhood , or if they already pre - exist , i.e. , they intersect any spacelike hypersurface .these questions are particularly important in view of hawking s chronology protection conjecture .this conjecture states that the laws of physics prevent the creation of ctcs .if correct , then the solutions of the efe which admit ctcs are either unrealistic or are solutions in which the ctcs are pre - existing , so that the time -machine is not created by dynamical processes .amos ori proved that in gott s spacetime , ctcs intersect every hypersurface , so that it is not a counter - example to the chronology protection conjecture .the global structure of the gott spacetime was further explored by cutler , and it was shown that the closed timelike curves are confined to a certain region of the spacetime , and that the spacetime contains complete spacelike and achronal hypersurfaces from which the causality violating regions evolve .grant also examined the global structure of the two - string spacetime and found that away from the strings , the space is identical to a generalized misner space .the vacuum expectation value of the energy - momentum tensor for a conformally coupled scalar field was then calculated on the respective generalized misner space , which was found to diverge weakly on the chronology horizon , but diverge strongly on the polarized hypersurfaces .thus , the back reaction due to the divergent behavior around the polarized hypersurfaces are expected to radically alter the structure of spacetime , before quantum gravitational effects become important , suggesting that hawking s chronology protection conjecture holds for spaces with a noncompactly generated chronology horizon .soon after , laurence showed that the region containing ctcs in gott s two - string spacetime is identical to the regions of the 
generalized misner space found by grant , and constructed a family of isometries between both gott s and grant s regions .this result was used to argue that the slowly diverging vacuum polarization at the chronology horizon of the grant space carries over without change to the gott space .furthermore , it was shown that the gott time machine is unphysical in nature , for such an acausal behavior can not be realized by physical and timelike sources .consider an infinitely long straight string that lies and spins around the -axis .the symmetries are analogous to the van stockum spacetime , but the asymptotic behavior is different .we restrict the analysis to an infinitely long straight string , with a delta - function source confined to the -axis .it is characterized by a mass per unit length , ; a tension , , and an angular momentum per unit length , . for cosmic strings ,the mass per unit length is equal to the tension , . in cylindrical coordinates the metric takes the following form ^ 2+dr^2+(1 - 4\mu)^2\,r^2\;d\varphi^2+dz^2 \,,\ ] ] with the following coordinate range adopting a new set of coordinates the metric may be rewritten as with a new coordinate range subject to the following identifications \,.\ ] ] outside the core , the metric is locally flat , i.e. , the riemann tensor is zero .the geometry is that of flat minkowski spacetime subject to a somewhat peculiar set of identifications . on traveling once around the the string, one sees that the spatial slices are `` missing '' a wedge of angle , which defines the deficit angle . on traveling once around the string ,one undergoes a backward time - jump of consider an azimuthal curve , i.e. , an integral curve of .closed timelike curves appear whenever these ctcs can be deformed to cover the entire spacetime , consequently , the chronology - violating region covers the entire manifold .the traditional manner of solving the efes , , consists in considering a plausible stress - energy tensor , , and finding the geometrical structure , .but one can run the efe in the reverse direction by imposing an exotic metric , and eventually finding the matter source for the respective geometry . in this fashion ,solutions violating the energy conditions have been obtained . adopting the reverse philosophy , solutions such as traversable wormholes , the warp drive , the krasnikov tube and the ori - soen spacetimehave been obtained .these solutions violate the energy conditions and with simple manipulations generate ctcs .much interest has been aroused in traversable wormholes since the classical article by morris and thorne .a wormhole is a hypothetical tunnel which connects different regions in spacetime .these solutions are multiply - connected and probably involve a topology change , which by itself is a problematic issue .consider the following spherically symmetric and static wormhole solution where and are arbitrary functions of the radial coordinate . 
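The stripped display here is, in the usual Morris-Thorne notation, the static spherically symmetric wormhole line element; the reconstruction below also collects, as explicit formulas, the throat and flaring-out conditions that the following paragraph states in words. The symbols Phi(r) and b(r) are the two arbitrary functions referred to in the text.

```latex
% Static, spherically symmetric (Morris-Thorne) wormhole line element:
ds^{2} = -e^{2\Phi(r)}\,dt^{2} + \frac{dr^{2}}{1 - b(r)/r}
         + r^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\varphi^{2}\right),

% with \Phi(r) the redshift function and b(r) the shape function.  The conditions
% described in the next paragraph then read, explicitly,
b(r_{0}) = r_{0}, \qquad
\frac{b(r) - b'(r)\,r}{b^{2}(r)} > 0 \ \ \text{(flaring out)}, \qquad
b'(r_{0}) < 1, \qquad 1 - \frac{b(r)}{r} > 0 \ \ (r > r_{0}),

% and traversability requires \Phi(r) to remain finite everywhere (no horizons).
```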
is denoted the redshift function , for it is related to the gravitational redshift , and is denoted the shape function , because as can be shown by embedding diagrams , it determines the shape of the wormhole .the coordinate is non - monotonic in that it decreases from to a minimum value , representing the location of the throat of the wormhole , where , and then it increases from to .a fundamental property of a wormhole is that a flaring out condition of the throat , given by , is imposed , and at the throat , the condition is imposed to have wormhole solutions .it is precisely these restrictions that impose the nec violation in classical general relativity .another condition that needs to be satisfied is . for the wormhole to be traversable, one must demand that there are no horizons present , which are identified as the surfaces with , so that must be finite everywhere .several candidates have been proposed in the literature , amongst which we refer to solutions in higher dimensions , for instance in einstein - gauss - bonnet theory , wormholes on the brane ; solutions in brans - dicke theory ; wormholes constructed in gravity ; wormhole solutions in semi - classical gravity ( see ref . and references therein ) ; exact wormhole solutions using a more systematic geometric approach were found ; wormhole solutions and thin shells ; geometries supported by equations of state responsible for the cosmic acceleration ; spherical wormholes were also formulated as an initial value problem with the throat serving as an initial value surface ; solutions in conformal weyl gravity were found , and thin accretion disk observational signatures were also explored , etc ( see refs . for more details and for a recent review ) .one of the most fascinating aspects of wormholes is their apparent ease in generating ctcs .there are several ways to generate a time machine using multiple wormholes , but a manipulation of a single wormhole seems to be the simplest way .the basic idea is to create a time shift between both mouths .this is done invoking the time dilation effects in special relativity or in general relativity , i.e. , one may consider the analogue of the twin paradox , in which the mouths are moving one with respect to the other , or simply the case in which one of the mouths is placed in a strong gravitational field . 
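as a rough illustration of these two mechanisms ( the following are textbook time - dilation estimates , not expressions taken from the original text ) : if one mouth makes a round trip at essentially constant speed v lasting a coordinate time T , the clocks at the two mouths end up out of step by
\[
\delta t \simeq T\left(1-\sqrt{1-v^2/c^2}\right) \,,
\]
ignoring the acceleration phases , while a mouth held at a weak gravitational potential \Phi_g<0 lags a distant mouth by roughly \delta t \simeq T\,|\Phi_g|/c^2 after a coordinate time T .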
to create a time shift using the twin paradox analogue ,consider that the mouths of the wormhole may be moving one with respect to the other in external space , without significant changes of the internal geometry of the handle .for simplicity , consider that one of the mouths is at rest in an inertial frame , whilst the other mouth , initially at rest practically close by to , starts to move out with a high velocity , then returns to its starting point .due to the lorentz time contraction , the time interval between these two events , , measured by a clock comoving with can be made to be significantly shorter than the time interval between the same two events , , as measured by a clock resting at .thus , the clock that has moved has been slowed by relative to the standard inertial clock .note that the tunnel ( handle ) , between and remains practically unchanged , so that an observer comparing the time of the clocks through the handle will measure an identical time , as the mouths are at rest with respect to one another .however , by comparing the time of the clocks in external space , he will verify that their time shift is precisely , as both mouths are in different reference frames , frames that moved with high velocities with respect to one another .now , consider an observer starting off from at an instant , measured by the clock stationed at .he makes his way to in external space and enters the tunnel from .consider , for simplicity , that the trip through the wormhole tunnel is instantaneous .he then exits from the wormhole mouth into external space at the instant as measured by a clock positioned at .his arrival at precedes his departure , and the wormhole has been converted into a time machine .see figure [ fig : wh - time - machine ]. for concreteness , following the morris _et al _ analysis , consider the metric of the accelerating wormhole given by where the proper radial distance , , is used . is a form function that vanishes at the wormhole mouth , at , rising smoothly from 0 to 1 , as one moves to mouth ; is the acceleration of mouth as measured in its own asymptotic rest frame .consider that the external metric to the respective wormhole mouths is .thus , the transformation from the wormhole mouth coordinates to the external lorentz coordinates is given by for mouth , where is the time - independent location of the wormhole mouth , and for the accelerating wormhole mouth .the world line of the center of mouth is given by and with ; is the velocity of mouth and the respective lorentz factor ; the acceleration appearing in the wormhole metric is given .novikov considered other variants of inducing a time shift through the time dilation effects in special relativity , by using a modified form of the metric ( [ accerelatedwh ] ) , and by considering a circular motion of one of the mouths with respect to the other .another interesting manner to induce a time shift between both mouths is simply to place one of the mouths in a strong external gravitational field , so that times slows down in the respective mouth .the time shift will be given by .at the wormhole throat is marked off , and note that identical values are the same event as seen through the wormhole handle . in figure , mouth remains at rest , while mouth accelerates from at a high velocity , then returns to its starting point at rest .a time shift is induced between both mouths , due to the time dilation effects of special relativity .the light cone - like hypersurface shown is a cauchy horizon . 
through every event to the future of thereexist ctcs , and on the other hand there are no ctcs to the past of . in figure ,a time shift between both mouths is induced by placing mouth in strong gravitational field .see text for details.,title="fig:",width=249 ] at the wormhole throat is marked off , and note that identical values are the same event as seen through the wormhole handle . in figure , mouth remains at rest , while mouth accelerates from at a high velocity , then returns to its starting point at rest .a time shift is induced between both mouths , due to the time dilation effects of special relativity .the light cone - like hypersurface shown is a cauchy horizon . through every event to the future of thereexist ctcs , and on the other hand there are no ctcs to the past of . in figure ,a time shift between both mouths is induced by placing mouth in strong gravitational field .see text for details.,title="fig:",width=230 ] a time - machine model was also proposed by amos ori and yoav soen which significantly ameliorates the conditions of the efe s solutions which generate ctcs .the ori - soen model presents some notable features .it was verified that ctcs evolve , within a bounded region of space , from a well - defined initial slice , a partial cauchy surface , which does not display causality violation .the partial cauchy surface and spacetime are asymptotically flat , contrary to the gott spacetime , and topologically trivial , contrary to the wormhole solutions .the causality violation region is constrained within a bounded region of space , and not in infinity as in the gott solution .the wec is satisfied up until and beyond a time slice , on which the ctcs appear .more recently , ori presented a class of curved - spacetime vacuum solutions which develop closed timelike curves at some particular moment .these vacuum solutions were then used to construct a time - machine model .the causality violation occurs inside an empty torus , which constitutes the time - machine core .the matter field surrounding this empty torus satisfies the weak , dominant , and strong energy conditions .the model is regular , asymptotically flat , and topologically trivial , although stability still remains the main open question . within the framework of general relativity ,it is possible to warp spacetime in a small _ bubblelike _ region , in such a way that the bubble may attain arbitrarily large velocities , . inspired in the inflationary phase of the early universe , the enormous speed of separation arises from the expansion of spacetime itself .the model for hyperfast travel is to create a local distortion of spacetime , producing an expansion behind the bubble , and an opposite contraction ahead of it ( see also ) . in the alcubierre warpdrive the spacetime metric is ^ 2 \label{cartesianwarpmetric}\,.\ ] ] the form function possesses the general features of having the value in the exterior and in the interior of the bubble .the general class of form functions , , chosen by alcubierre was spherically symmetric : with .then ^ 2+x^2+y^2\right\}^{1/2}.\ ] ] consider the following form -\tanh\left[\sigma(r - r)\right]}{2\tanh(\sigma r)}\ , , \label{e : form}\ ] ] in which and are two arbitrary parameters . is the `` radius '' of the warp - bubble , and can be interpreted as being inversely proportional to the bubble wall thickness .if is large , the form function rapidly approaches a _ top hat _ function , i.e. 
, ,\\ 0 , & { \rm if}\ ; r\in(r,\infty ) .\end{array } \right.\ ] ] it can be shown that observers with the four velocity move along geodesics , as their -acceleration is zero , _i.e. _ , . the spaceship , which in the original formulation is treated as a test particle which moves along the curve , can easily be seen to always move along a timelike curve , regardless of the value of .one can also verify that the proper time along this curve equals the coordinate time , by simply substituting in eq .( [ cartesianwarpmetric ] ) .this reduces to , taking into account and .consider a spaceship placed within the alcubierre warp bubble .the expansion of the volume elements , , is given by .taking into account eq .( [ e : form ] ) , we have ( for alcubierre s version of the warp bubble ) the center of the perturbation corresponds to the spaceship s position .the volume elements are expanding behind the spaceship , and contracting in front of it , as shown in figure [ alcubierre - expansion ] .one may consider a hypothetical spaceship immersed within the bubble , moving along a timelike curve , regardless of the value of . due to the arbitrary value of the warp bubble velocity , the metric of the warp drive permits superluminal travel , which raises the possibility of the existence of ctcs .although the solution deduced by alcubierre by itself does not possess ctcs , everett demonstrated that these are created by a simple modification of the alcubierre metric , by applying a similar analysis as in tachyons .the modified metric takes the form with ^{1/2 } \,.\end{aligned}\ ] ] the spacetime is flat in the exterior of a warp bubble with radius , but now in the modified version is centered in .the bubble moves with a velocity , on a trajectory parallel with the -axis .one may for simplicity consider the form function given by eq .( [ e : form ] ) . we shall also impose that , so that the form function is negligible , i.e. , .now , consider two stars , and , at rest in the coordinate system of the metric ( [ modwarpmetric ] ) , and located on the -axis at and , respectively . the metric along the -axis is minkowskian as .therefore , a light beam emitted at , at , moving along the -axis with , arrives at at .suppose that the spaceship initially starts off from , with , moving off to a distance along the and neglecting the time it needs to cover to .at , it is then subject to a uniform acceleration , , along the the for , and for .the spaceship will arrive at the spacetime event with coordinates and .once again , the time required to travel from to is negligible . the separation between the two events , departure and arrival is and will be spatial if the following condition is verified in this case , the spaceship will arrive at before the light beam , if the latter s trajectory is a straight line , and both departures are simultaneous from .inertial observers situated in the exterior of the spaceship , at and , will consider the spaceship s movement as superluminal , since the distance is covered in an interval .however , the spaceship s wordline is contained within it s light cone .the worldline of the spaceship is given by , while it s future light cone is given by .the latter relation can easily be inferred from the null condition , .since the quadri - vector with components is spatial , the temporal order of the events , departure and arrival , is not well - defined . 
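to make this statement explicit ( a generic special - relativistic remark , with the departure placed at the origin ; it is not meant to reproduce the detailed expressions of the original ) : if the departure and arrival events have coordinates ( t , x ) = ( 0 , 0 ) and ( T , D ) with D > cT , a boost with velocity u along the x - axis gives the arrival time
\[
T' = \gamma\left(T - \frac{uD}{c^2}\right) \,, \qquad \gamma = \left(1-u^2/c^2\right)^{-1/2} \,,
\]
so the two events are simultaneous for u = c^2 T / D , and the arrival precedes the departure ( T'<0 ) for any u > c^2 T / D , which is an admissible boost ( u<c ) precisely because the separation is spacelike .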
introducing new coordinates , , obtained by a lorentz transformation , with a boost along the -axis .the arrival at in the coordinates correspond to with .the events , departure and arrival , will be simultaneous if . the arrival will occur before the departure if , i.e. , the fact that the spaceship arrives at with , does not by itself generate ctcs .consider the metric , eq .( [ modwarpmetric ] ) , substituting and by and , respectively ; by ; by ; and by .this new metric describes a spacetime in which an alcubierre bubble is created at , which moves along and , from to with a velocity , and subject to an acceleration . for observers at rest relatively to the coordinates , situated in the exterior of the second bubble , it is identical to the bubble defined by metric , eq . ( [ modwarpmetric ] ) , as it is seen by inertial observers at rest at and .the only differences reside in a change of the origin , direction of movement and possibly of the value of acceleration .the stars , and , are st rest in the coordinate system of the metric , eq .( [ modwarpmetric ] ) , and in movement along the negative direction of the -axis with velocity , relatively to the coordinates .the two coordinate systems are equivalent due to the lorentz invariance , so if the first is physically realizable , then so is the second . in the new metric , by analogy with eq .( [ modwarpmetric ] ) , we have , i.e. , the proper time of the observer , on board of the spaceship , traveling in the center of the second bubble , is equal to the time coordinate , .the spaceship will arrive at in the temporal and spatial intervals given by and , respectively . as in the analysis of the first bubble, the separation between the departure , at , and the arrival , will be spatial if the analogous relationship of eq.([spatialcharacter ] ) is verified .therefore , the temporal order between arrival and departure is also not well - defined .as will be verified below , when and decrease and increases , will decrease and a spaceship will arrive at at . in fact , one may prove that it may arrive at . since the objective is to verify the appearance of ctcs , in principle ,one may proceed with some approximations . for simplicity , consider that and , and consequently and are enormous , so that and . in this limit, we obtain the approximation , i.e. , the journey of the first bubble from to is approximately instantaneous. consequently , taking into account the lorentz transformation , we have and . to determine , which corresponds to the second bubble at , consider the following considerations :since the acceleration is enormous , we have and , therefore and , from which one concludes that krasnikov discovered an interesting feature of the warp drive , in which an observer in the center of the bubble is causally separated from the front edge of the bubble .therefore he / she can not control the alcubierre bubble on demand .krasnikov proposed a two - dimensional metric , which was later extended to a four - dimensional model .one krasnikov tube in two dimensions does not generate ctcs .but the situation is quite different in the 4-dimensional generalization , which we present for self - consistency and self - completeness .soon after the krasnikov two - dimensional solution , everett and roman generalized the analysis to four dimensions , denoting the solution as the _ krasnikov tube_. 
consider that the 4-dimensional modification of the metric begins along the path of the spaceship , which is moving along the -axis , occurring at position at time , the time of passage of the spaceship .also assume that the disturbance in the metric propagates radially outward from the -axis , so that causality guarantees that at time the region in which the metric has been modified can not extend beyond , where .the modification in the metric should also not extend beyond some maximum radial distance from the -axis .thus , the metric in the 4-dimensional spacetime , written in cylindrical coordinates , is given by with \ , .\label{4d : form}\ ] ] for one has a tube of radius centered on the -axis , within which the metric has been modified .this structure is denoted by the _krasnikov tube_. in contrast with the alcubierre spacetime metric , the metric of the krasnikov tube is static once it has been created .the stress - energy tensor element given by \,,\ ] ] can be shown to be the energy density measured by a static observer , and violates the wec in a certain range of , i.e. , . to verify the violation of the wec , consider the energy density in the middle of the tube and at a time long after it s formation , i.e. , and , respectively . in this regionwe have , and . with this simplificationthe form function , eq . ( [ 4d : form ] ) , reduces to consider the following specific form for given by + 1 \right \ } \,,\ ] ] so that the form function of eq .( [ 4d : midtube - form ] ) is provided by + 1 \right \ } \,.\ ] ] choosing the following values for the parameters : , and , it can be shown that the negative character of the energy density is manifest in the immediate inner vicinity of the tube wall .now , using two such tubes it is a simple matter , in principle , to generate ctcs .the analysis is similar to that of the warp drive , so that it will be treated in summary .imagine a spaceship traveling along the -axis , departing from a star , , at , and arriving at a distant star , , at .an observer on board of the spaceship constructs a krasnikov tube along the trajectory .it is possible for the observer to return to , traveling along a parallel line to the -axis , situated at a distance , so that , in the exterior of the first tube . on the return trip, the observer constructs a second tube , analogous to the first , but in the opposite direction , i.e. , the metric of the second tube is obtained substituting and , for and , respectively in eq .( [ 4d - krasnikov - metric ] ) .the fundamental point to note is that in three spatial dimensions it is possible to construct a system of two non - overlapping tube separated by a distance .after the construction of the system , an observer may initiate a journey , departing from , at and .one is only interested in the appearance of ctcs in principle , therefore the following simplifications are imposed : and are infinitesimal , and the time to travel between the tubes is negligible . for simplicity , consider the velocity of propagation close to that of light speed. 
using the second tube , arriving at at and , then travelling through the first tube , the observer arrives at at .the spaceship has completed a ctc , arriving at before it s departure .gtr has been an extremely successful theory , with a well established experimental footing , at least for weak gravitational fields .it s predictions range from the existence of black holes , gravitational radiation to the cosmological models , predicting a primordial beginning , namely the big - bang .however , it was seen that it is possible to find solutions to the efes , with certain ease , which generate ctcs .this implies that if we consider gtr valid , we need to include the _ possibility _ of time travel in the form of ctcs .a typical reaction is to exclude time travel due to the associated paradoxes .but the paradoxes do not prove that time travel is mathematically or physically impossible .consistent mathematical solutions to the efes have been found , based on plausible physical processes .what they do seem to indicate is that local information in spacetimes containing ctcs are restricted in unfamiliar ways . the grandfather paradox , without doubt ,does indicate some strange aspects of spacetimes that contain ctcs .it is logically inconsistent that the time traveler murders his grandfather .but , one can ask , what exactly impeded him from accomplishing his murderous act if he had ample opportunities and the free - will to do so .it seems that certain conditions in local events are to be fulfilled , for the solution to be globally self - consistent .these conditions are denominated _ consistency constraints _ . to eliminate the problem of free - will , mechanical systems were developed as not to convey the associated philosophical speculations on free - will .much has been written on two possible remedies to the paradoxes , namely the principle of self - consistency and the chronology protection conjecture .one current of thought , led by igor novikov , is the principle of self - consistency , which stipulates that events on a ctc are self - consistent , i.e. , events influence one another along the curve in a cyclic and self - consistent way . 
in the presence of ctcs the distinction between past and future eventsare ambiguous , and the definitions considered in the causal structure of well - behaved spacetimes break down .what is important to note is that events in the future can influence , but can not change , events in the past .the principle of self - consistency permits one to construct local solutions of the laws of physics , only if these can be prolonged to a unique global solution , defined throughout non - singular regions of spacetime .therefore , according to this principle , the only solutions of the laws of physics that are allowed locally , reinforced by the consistency constraints , are those which are globally self - consistent .hawking s chronology protection conjecture is a more conservative way of dealing with the paradoxes .hawking notes the strong experimental evidence in favor of the conjecture from the fact that `` we have not been invaded by hordes of tourists from the future '' .an analysis reveals that the value of the renormalized expectation quantum stress - energy tensor diverges in the imminence of the formation of ctcs .this conjecture permits the existence of traversable wormholes , but prohibits the appearance of ctcs .the transformation of a wormhole into a time machine results in enormous effects of the vacuum polarization , which destroys it s internal structure before attaining the planck scale .nevertheless , li has shown given an example of a spacetime containing a time machine that might be stable against vacuum fluctuations of matter fields , implying that hawking s suggestion that the vacuum fluctuations of quantum fields acting as a chronology protection might break down .there is no convincing demonstration of the chronology protection conjecture , but the hope exists that a future theory of quantum gravity may prohibit ctcs .visser still considers the possibility of two other conjectures .the first is the radical reformulation of physics conjecture , in which one abandons the causal structure of the laws of physics and allows , without restriction , time travel , reformulating physics from the ground up .the second is the boring physics conjecture , in which one simply ceases to consider the solutions to the efes generating ctcs .perhaps an eventual quantum gravity theory will provide us with the answers .but , as stated by thorne , it is by extending the theory to it s extreme predictions that one can get important insights to it s limitations , and probably ways to overcome them .therefore , time travel in the form of ctcs , is more than a justification for theoretical speculation , it is a conceptual tool and an epistemological instrument to probe the deepest levels of gtr and extract clarifying views .fsnl acknowledges partial financial support of the fundao para a cincia e tecnologia through the grants ptdc / fis/102742/2008 and cern / fp/109381/2009 .f. lobo and p. crawford , `` time , closed timelike curves and causality , '' in the nature of time : geometry , physics and perception , nato science series ii .mathematics , physics and chemistry - vol . * 95 * , kluwer academic publishers , r. buccheri et al .eds , pp.289 - 296 ( 2003 ) [ arxiv : gr - qc/0206078 ] .f. lobo and p. crawford , `` weak energy condition violation and superluminal travel , '' current trends in relativistic astrophysics , theoretical , numerical , observational , lecture notes in physics * 617 * , springer - verlag publishers , l. fernndez et al .eds , pp . 
277291 ( 2003 ) [ arxiv : gr - qc/0204038 ] .j. p. s. lemos , f. s. n. lobo and s. q. de oliveira , phys .d * 68 * , 064004 ( 2003 ) .f. j. tipler , phys .lett * 37 * , 879 - 882 ( 1976 ) .g. dotti , j. oliva , and r. troncoso , phys .d * 75 * , 024002 ( 2007 ) ; h. maeda and m. nozawa , phys .d * 78 * , 024005 ( 2008 ) .l. a. anchordoqui and s. e. p bergliaffa , phys .d * 62 * , 067502 ( 2000 ) ; k. a. bronnikov and s .- w .kim , phys .d * 67 * , 064027 ( 2003 ) ; m. la camera , phys .b573 * , 27 - 32 ( 2003 ) ; f. s. n. lobo , phys . rev . *d75 * , 064027 ( 2007 ) .k. k. nandi , b. bhattacharjee , s. m. k. alam and j. evans , phys .d * 57 * , 823 ( 1998 ) ; l. a. anchordoqui , s. e. perez bergliaffa and d. f. torres , phys .d * 55 * , 5226 ( 1997 ) ; a. g. agnese and m. la camera , phys .d * 51 * , 2011 ( 1995 ) ; k. k. nandi , a. islam and j. evans , phys .d * 55 * , 2497 ( 1997 ) ; a. bhattacharya , i. nigmatzyanov , r. izmailov and k. k. nandi , class .* 26 * , 235017 ( 2009 ) ; f. s. n. lobo and m. a. oliveira , phys .d * 81 * , 067501 ( 2010 ) .f. s. n. lobo and m. a. oliveira , phys .d * 80 * , 104012 ( 2009 ) .r. garattini and f. s. n. lobo , class .* 24 * , 2401 ( 2007 ) ; r. garattini and f. s. n. lobo , phys .b * 671 * , 146 ( 2009 ) . c. g. boehmer , t. harko and f. s. n. lobo , phys .d * 76 * , 084014 ( 2007 ) ; c. g. boehmer , t. harko and f. s. n. lobo , class .grav . * 25 * , 075016 ( 2008 ) .f. s. n. lobo and p. crawford , class .. grav .* 21 * , 391 ( 2004 ) ; f. s. n. lobo and p. crawford , class .* 22 * , 4869 ( 2005 ) ; j. p.s. lemos and f. s. n. lobo , phys .d * 69 * , 104007 ( 2004 ) ; f. s. n. lobo , gen .* 37 * , 2023 ( 2005 ) ; j. p. s. lemos and f. s. n. lobo , phys .d * 78 * , 044030 ( 2008 ) .s. sushkov , phys .d * 71 * , 043520 ( 2005 ) ; f. s. n. lobo , phys .rev . * d71 * , 084011 ( 2005 ) ; f. s. n. lobo , phys .rev . * d71 * , 124022 ( 2005 ) ; j. a. gonzalez , f. s. guzman , n. montelongo - garcia and t. zannias , phys .d * 79 * , 064027 ( 2009 ) ; a. debenedictis , r. garattini and f. s. n. lobo , phys .d * 78 * , 104003 ( 2008 ) ; f. s. n. lobo , phys .rev . * d73 * , 064028 ( 2006 ) ; n. m. garcia and t. zannias , phys .d * 78 * , 064003 ( 2008 ) ; f. s. n. lobo , phys .d * 75 * , 024023 ( 2007 ) .n. montelongo garcia and t. zannias , class .* 26 * , 105011 ( 2009 ) .f. s. n. lobo , class .grav . * 25 * , 175006 ( 2008 ) .t. harko , z. kovacs and f. s. n. lobo , phys .d * 78 * , 084005 ( 2008 ) ; t. harko , z. kovacs and f. s. n. lobo , phys .d * 79 * , 064001 ( 2009 ) .j. p. s. lemos ,f. s. n. lobo and s. quinet de oliveira , phys .d * 68 * , 064004 ( 2003 ) .f. s. n. lobo , `` exotic solutions in general relativity : traversable wormholes and warp drive spacetimes , '' arxiv:0710.4474 [ gr - qc ]. m. s. morris , k. s. thorne and u. yurtsever , `` wormholes , time machines and the weak energy condition , '' phy . rev . lett . * 61 * , 1446 ( 1988 ) . j. l. friedman , m. s. morris , i. d. novikov , f. echeverria , g. klinkhammer , k. s. thorne , and u. yurtsever , phys .d * 42 * , 1915 - 1930 ( 1990 ) ; m. visser , in the future of theoretical physics and cosmology , cambridge university press , edited by g. gibbons _et al _ , pp.161 - 176 ( 2003 ) . m. visser , phys . rev .d * 41 * 1116 ( 1990 ) .k. d. olum , phys .d * 61 * 124022 ( 2002 ) .a. ori , phys .lett . * 95 * , 021101 ( 2005 ) .m. alcubierre , class .. grav . * 11 * , l73-l77 ( 1994 ) . j. natrio , class .. grav . * 19 * , 1157 , ( 2002 ) ; f. s. n. lobo and m. 
visser , class .* 21 * , 5871 ( 2004 ) ; d. h. coule , class .15 * , 2523 - 2527 ( 1998 ) ; w. a. hiscock , class .* 14 * , l183 ( 1997 ) ; c. clark , w. a. hiscock and s. l. larson , class .* 16 * , 3965 ( 1999 ) ; p. f. gonzlez - daz , phys .d * 62 * , 044005 ( 2000 ) ; c. van den broeck , class .* 16 * , 3973 , ( 1999 ) .a. e. everett , phys .d * 53 * 7365 ( 1996 ) .s. v. krasnikov , phys .d * 57 * , 4760 ( 1998 ) .a. e. everett and t. a. roman , phys .d * 56 * , 2100 ( 1997 ) .j. earman , _ bangs , crunches , whimpers , and shrieks : singularities and acausalities in relativistic spacetimes _ , oxford university press ( 1995 ) .a. carlini , v.p .frolov , m. b. mensky , i. d. novikov and h. h. soleng , int . j. mod .d * 4 * 557 ( 1995 ) ; erratum - ibid d*5 * 99 ( 1996 ) .a. carlini and i. d. novikov , int . j. mod .d * 5 * 445 ( 1996 ) .li , phys .d * 50 * , r6037 ( 1994 ) .k. s. thorne , in _ general relativity and gravitation _ , proceedings of the 13th conference on general relativity and gravitation , edited by r. j. gleiser et al ( institute of physics publishing , bristol , 1993 ) , p. 295 . | the conceptual definition and understanding of time , both quantitatively and qualitatively is of the utmost difficulty and importance . as time is incorporated into the proper structure of the fabric of spacetime , it is interesting to note that general relativity is contaminated with non - trivial geometries which generate _ closed timelike curves_. a closed timelike curve ( ctc ) allows time travel , in the sense that an observer that travels on a trajectory in spacetime along this curve , may return to an event before his departure . this fact apparently violates causality , therefore time travel and it s associated paradoxes have to be treated with great caution . the paradoxes fall into two broad groups , namely the _ consistency paradoxes _ and the _ causal loops_. a great variety of solutions to the einstein field equations containing ctcs exist and it seems that two particularly notorious features stand out . solutions with a tipping over of the light cones due to a rotation about a cylindrically symmetric axis and solutions that violate the energy conditions . all these aspects are analyzed in this review paper . |
the evolutionary relations among subclasses of magnetic and non - magnetic cataclysmic variables ( mcvs and cvs respectively ) have been studied for quite some time .nevertheless , our understanding of these relations remains incomplete , caused largely by serious selection effects which prevent completeness of the samples .here we consider the future evolution of the peculiar mcv ae aqr .its future spin evolution and equilibrium states could clear much of the current confusion , as this binary has been identified as member of a potentially large group of post - thermal timescale mass transfer systems ( schenker et al . , 2002 ) .one consequence of this is the prediction of a relatively large population of mcvs at long orbital periods .such a population has not been detected .we consider the possibility that this could be because these long period mcvs occupy spin equilibria which significantly hamper accretion onto the white dwarf ( wd ) , causing them to have been overlooked or misclassified .we first consider the origin of cvs ( fig . 1 ) .the progenitor of the wd starts off as the more massive star in a relatively wide binary , which has evolved through various stages of ( single ) stellar evolution forming a degenerate core in its centre . during the agb phasethe star swells and fills its roche lobe .a subsequent phase of common envelope evolution leaves the system with the exposed core ( pre - wd ) and the largely unaffected secondary in a close orbit . in order to be a futurecv this system has to become semi - detached .this can be achieved by angular momentum loss ( aml ) shrinking the orbit or nuclear evolution increasing the secondary s radius .the latter channel can lead to a large fraction of systems with a secondary more massive than the wd primary leading to thermal - timescale mass transfer ( ttmt ) .examples of ttmt evolution are the various model tracks for ae aqr presented in schenker et al.(2002 ) .in contrast to standard cvs ( which have low mass donors and stable , aml - driven mass transfer ) this subclass passes through an initial phase of high mass transfer prior to become an aml - driven cv . if we consider cvs which have suffered ttmt , we expect there to be differences to standard cvs at the onset of mass transfer : ( i ) the wds are likely to have large rotation rates and masses .this is a consequence of the high mass transfer rates during the supersoft phase ( during ttmt ) which allows significant accretion onto the wd .( ii ) instead of low - mass ms stars , the donors are the evolved cores of more massive stars .their different internal structure , manifesting itself e.g. in a different mass - radius exponent , leads to smaller at the same , and even to different mass transfer rates .( iii ) the magnetic fields on either or both the donor and the wd may be different ( cf . cumming , 2002 , and this volume , on suppression in rapidly accreting wds ) . the majority of known mcvs ( fig . 2 )are either polars ( mostly below the period gap ) or intermediate polars ( ips , above the gap up to h ) .there are only a few , mostly peculiar systems longwards of h , one of which is ae aqr . if many other systems like it have passed through a similar phase , where do they appear on the period distribution ? 
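before turning to the drag description and the generalised critical radii used below , it is useful to recall the two textbook radii that spin - equilibrium arguments usually compare ( standard estimates , e.g. warner 1995 , quoted here for orientation only and not the generalised definitions introduced in this paper ) : the corotation radius and the conventional magnetospheric ( alfven ) radius ,
\[
r_{\rm co} = \left(\frac{G M_1 P_{\rm spin}^2}{4\pi^2}\right)^{1/3} \,, \qquad
r_{\rm mag} \simeq \left(\frac{\mu^4}{2 G M_1 \dot{M}^2}\right)^{1/7} \,,
\]
with M_1 the white dwarf mass , \mu its magnetic moment and \dot{M} the mass transfer rate .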
this lack of magnetic systems between 6 and 10 h we termed the `` ip gap '' , well aware of the sparse observational data to establish its existence ( cf .recently reheated discussions on evidence for a period gap in mcvs ) .we use a simplified description for the magnetic drag on gas streaming past a spinning wd ( king , 1993 ) , where the magnetic acceleration is proportional to the velocity shear . after splitting off the radial power dependency in the drag and replacing the and flow velocities with the wd rotation and keplerian motionrespectively we obtain different interaction models are reduced to variations in the drag coefficient and the radial power . propelling and accreting states are possible depending on the wd spin , which determines the sign of the velocity shear . extending the usual definition of the magnetospheric radius ( e.g. warner , 1995 ; frank et al . , 2002 )we compare the timescale of the magnetic interaction to the dynamical timescale ( in the disk - less case , i ) and to the viscous timescale ( in the truncated disk case , ii ) .3 shows the resulting critical radii ( shaded for , and an additional set of lines for ) .for there is a critical spin period marking the maximum of the propelling branch in case i. such a maximum exists for all for both cases .additionally the asymptotic radius of the accreting branch in the limit of slow rotation can be written as which corresponds to the radius where the stream becomes threaded in an ordinary polar .both of these expressions describe case i where the mass transfer stream is specified by ( in ) , a stream width ( in ) , and the sound speed at the l1 point ( in ) .the wd mass is ( in ) and its magnetic moment is ( in ) .the corresponding expressions for case ii are and with describing the usual disk viscosity .spin equilibria ips are achieved when torques on the wd cancel .comparing to various other important radii identifies such states ( warner , 1995 ; king & wynn , 1999 ) .we are now able to analyse the various spin equilibria in mcvs provided we have good measurements of the periods ( easy ) , magnetic moments ( difficult ) , masses and mass transfer rates ( even more so ) .results of our most recent simulation are presented in fig .4 . extending previous attempts ( wynn et al . , 1997 ) to model the current status of ae aqr, this calculation follows the spin evolution through the next few . during this timethe spin - down continues at roughly the current rate .we can clearly see that at a truncated disk forms with a large inner hole of .we compare this result in fig . 5 to an analysis using critical radii .in the left - hand panel the situation for ae aqr in its current state is shown : at the observed spin period of 33 sec and a strong propeller ( lower curve , as labelled ) , is large enough to prevent disk formation in the circularization region .as the wd slows down , a disk will form and the effective will switch to the truncated disk case ( pair of curves for hot and cold ) . as long as this happens at ( given by eq .( 4 ) ) , the next stage of spin evolution is a weak propeller in a truncated disk . in the final state of the computation in fig .4 we indeed find . 
for a stable equilibrium is possible where the propeller slows the wd down during quiescence , while it is spun up by an equal amount during outbursts .this equilibrium can be maintained over secular timescales .such a system will not be visible as a mcv during its rather long quiescent stage between outbursts .we can now construct a sequence of diagrams for ae aqr with parameters taken from the post - ttmt evolution by schenker et al .the right - hand panel of fig .5 ( showing only the truncated disk pair of curves ) is taken at : the curves are shifted upwards along the corotation radius , i.e. the central hole has grown to the point that spin equilibrium is impossible . further slow down of the wd ends the weak propeller phase , and _ for the first time _ae aqr will appear as a normal ip : with a typical ratio and in the range where the bulk of ips are found ( fig . 2 ) .we also identified a potentially stable configuration for a strong ( disk - less ) propeller .the velocity of a stream reaches its maximum value near the point of closest approach to the wd , the point at which the rotational velocity of the magnetic field is lowest .so for given and a critical spin period can be found at which the sign of the drag force between the stream and the rotating wd field changes . as the force depends strongly on radius even a small inversion may be sufficient to balance the angular momentum transferred from the wd to the stream while further away .this may lead to a stable propeller , i.e. a situation where the mass transfer stream is leaving the system yet _ the wd spin does not change_. in order to test this new idea , we have performed sph simulations to investigate the possibility of a stable strong propeller . as can be seen in fig . 6, it is possible to construct a strong propeller at this critical spin state .near the point of closest approach the velocity shear reverses and the stream returns angular momentum to the wd. however , it would appear that this can only happen at spin periods well above found for ae aqr .in summary both the critical radius analysis and numerical work indicate that ae aqr will continue to be a strong propeller for a short time .afterwards it is likely to become a weak propeller system with a truncated disk .at least theoretically , a stable strong propeller is possible , although most probably not feasible for ae aqr . as long as the system can avoid accreting in a stable , ip - like manner , any descendant of ae aqr may easily be overlooked as a mcv .overall we conclude that ttmt evolution forces us to re - evaluate ideas about magnetic binary evolution .the various spin states of magnetic wds can lead to drastically different behaviour of otherwise similar systems .in particular , stable states may exist with or without a disk , propelling or accreting .not all of these states would currently be considered to be magnetic systems on observational grounds . | we investigate the spin evolution of the unusual magnetic cv ae aqr . as a prototype for a potentially large population of cvs subject to a thermally unstable phase of mass transfer , understanding its future is crucial . we present a new definition of the magnetospheric radius in terms of the white dwarf s spin period , and use this along with numerical simulations to follow the spin evolution of ae aqr . we also present preliminary sph results suggesting the existence of a stable propeller state . 
these results highlight the complexity of mcvs and may provide an improved understanding of the evolution of all types of cvs . |
the computing requirements for high energy physics ( hep ) projects like the large hadron collider ( lhc ) at the european laboratory for particle physics ( cern ) in geneva , switzerland are larger than can be met with resources deployed in a single computing center .this has led to the construction of a global distributing computing system known as the worldwide lhc computing grid ( wlcg ) , which brings together resources from nearly 160 computer centers in 35 countries .computing at this scale has been used , for example , by the cms and atlas experiments for the discovery of the higgs boson . to achieve this and other results the cms experiment , for example , typically used during 2012 a processing capacity between 80,000 and 100,000 x86 - 64 cores from the wlcg .further discoveries are possible in the next decade as the lhc moves to its design energy and increases the machine luminosity .however , increases in dataset sizes by 2 - 3 orders of magnitude ( and commensurate processing capacity ) will eventually be required to realize the full potential of this scientific instrument .building the software to run on and operate such a computing system is a major challenge .the distributed nature of the system implies that ownership and control of the resources is also distributed , and thus the resources are by necessity heterogeneous in nature .this heterogeneity appears both in terms of specific x86 hardware generations and in patch levels of the deployed linux operating systems .as these resources are at times shared with other projects custom modifications of systems for hep - specific or experiment - specific reasons are in general not possible .the very large number of cpu hours used also introduces significant reliability requirements on the software .the software itself is non - trivial : each experiment typically is dependent on many millions of lines of code , written for the most part in c++ with contributions from up to a thousand physicists . given the upgrade and evolution plans for the lhc , these software projects , begun in the late 1990 s , will likely need to evolve with computing technology through the 2020 s .such an environment however provides significant opportunities for innovation . in this paperwe examine one interesting technology , _ process checkpoint - restart _ , which has great potential for use in hep workflows in such a system .we first describe the specific use cases of interest and the requirements to make the technology useful in the hep environment .we then provide some benchmarks for the use of checkpoint - restart with the cms software on today s x86 - 64 processors . 
and finally , we examine aspects of this technology which are of interest given the possible evolution of processor technologies and resource availability in the coming years .we look in particular at the use of checkpointing with architectures like intel s xeon phi , a member of intel s mic ( many integrated core ) architecture .in many circumstances it is desirable to `` checkpoint '' the state of a unix process , or a set of processes , to disk with the possibility of restarting it at a later time .there are a number of interesting use cases for this functionality : * debugging : * the very large number of jobs and cpu hours required for hep computing makes high reliability of the software quite important .the distributed and heterogeneous nature of the computing system however makes debugging problems somewhat difficult , as the first step to resolving most code behavior problems is being able to reproduce the problem . while a traditional `` core '' file may provide information about the process state after a crash has happened , it does nt allow one to step through the program to see the behavior leading to the crash. in the case where the crash happens after a job has run for many hours , reproducing a problem by rerunning it from the beginning can also be quite expensive . if however a job were to checkpoint its state from time to time , it would be possible to use the last checkpointed state before a crash to reproduce and replay the problem quickly .* avoiding cpu - intensive initialization steps : * many applications are constructed such that they have cpu - intensive initialization steps .examples include in - memory geometry construction from a simplified geometry description , physics cross section table calculations , etc .typically this is done to allow generality of software implementation for multiple possible job configurations : it is easier to calculate quantities derived from job configurations on the fly at the beginning of each job than to store those quantities for a possibly infinite number of potential job configurations .hep workflows are however constructed such that a particular job configuration may be used for a very large number of jobs , where the geometry or physics process configuration are the same and only random number seeds or input files change from job to job .the result is that the same calculation is done in every single job instance in a particular workflow . in most cases where jobs run for a long timethis job initialization time is negligible relative to the total running time .however two cases exist where job initialization time can be problematic .first , very short duration jobs can sometimes be required for other software reasons or for operational reasons related to resource availability or input dataset structure . in this casethe overhead from long startup initialization times can be a significant fraction of the cpu utilization .second , as will be described later , there are strong reasons to consider the use of multi - threaded applications in the future . in the case where the startup initialization itself can not be easily parallelized and will be executed sequentially on a single core, the initialization itself may effectively idle a large number of other cores eventually needed for the event processing . 
in this case , a single instance of the job can be run and checkpointed just after the initialization phase .that checkpoint can then be used to restart a much larger number of instances of the application in batch , with only minor reconfiguration to set input files and/or random number seeds .the instances running in batch can thus avoid the startup cpu cost .* allowing preemption during opportunistic resource use : * from time to time , it is possible to `` opportunistically '' use computing resources which belong to some other organization when the other organization does not have enough work to keep the processors fully utilized for some period of time . in some of these cases the period of time for which the resources are made available may not be well defined in advance and the resources may need to be handed back to their owner in an unscheduled fashion . in this caseit is useful to be able to `` preempt '' running opportunistic jobs , checkpoint their state to disk and restart them when opportunistic use is again possible .* interactive `` workspaces '' : * in interactive programs , such as event displays and analysis tools , the user provides inputs which lead to particular state of the program at a given time . being able to save that state out , for example before going home for the day , and restart later is often desirable . *very long running jobs : * in situations where jobs must run for an extremely long time , sometimes days or weeks , they can be sensitive to hardware or infrastructure failures or interfere with required site maintenance .in these cases it can be quite useful to checkpoint periodically the program to avoid losing and needing to repeat the calculations from scratch after such failures .* managing `` tails '' for multi - threaded applications * : several hep experiments are moving in the direction of multi - threaded frameworks , which ( initially ) process events on different threads . as the cpu time per event can vary significantly ( with long `` tails '' to the distribution ) at the end of the job , one thread may still be processing an event which takes a long time while the other threads / cores are idled .one possibility for managing such situations would be to checkpoint the job with a single active thread and restart a number of such jobs at a later time together , to keep the full set of cpu cores active .a rudimentary `` checkpoint - restart '' can sometimes be achieved in an application - specific fashion .for example a typical hep event processing framework can be constructed to perform a simple `` checkpoint '' by flushing completed output events to its output file after every n input events have been processed .in addition it is necessary to write some sort of `` metadata '' to track any other relevant internal state needed to restart the job , e.g. how far the job had progressed through its input events , the state of random number generators , etc . in this examplea `` restart '' would then be performed by restarting the framework and passing it information to allow it to reconfigure itself to match the state it was in at the time of the output checkpoint .this requires however the addition and maintenance of dedicated code , both in the framework itself and externally in the workflow management system . 
in some cases , where third party libraries are used which also maintain state , it can be quite complex to truly restore the same state .if the state is encapsulated within the code of the library , for software engineering reasons , it can also be impossible .a much better , and more general , solution is true process - level checkpointing .this is a technology which has existed for a long time , especially in high performance computing ( hpc ) and batch systems , however often the particular implementations are tied to specific environments .thus the technology has not seen general use in hep high throughput distributed computing . in this paperwe examine the use of a transparent , user - level checkpointing package for distributed applications called distributed multithreaded checkpointing ( dmtcp ) .the features of dmtcp make it more appropriate for deployment in the hep distributed computing environment .dmtcp ( distributed multithreaded checkpointing ) is free , open source software ( http://dmtcp.sourceforge.net , lgpl license ) .the dmtcp project traces its roots to late 2004 .a key feature of dmtcp for use in the heterogeneous hep computing environment is that it works in user space , with no kernel - level modifications required . as suchit is works with a wide range of linux kernel versions .it also works with multi - threaded applications and compression of output checkpoint files is possible. its usage can be as simple as : .... dmtcp_launch ./myapp arg1 ... dmtcp_command --checkpoint [ from a second terminal window ] dmtcp_restart ckpt_myapp_*.dmtcp .... dmtcp is also `` contagious '' .if a process begins under dmtcp control , then any child processes will also be under dmtcp control , and any remote processes ( spawned through `` ssh '' ) will also be under dmtcp control . at the time of checkpoint , a script , dmtcp_restart_script.sh , is written , and the script can restart all processes across all nodes for the given computation . the newly released dmtcp version 2.0 ( as of oct. 3 , 2013 ) , supports dmtcp plugins to flexibly adapt to external conditions .for example , the dmtcp plugin interface permits application - initiated checkpoints , as well as application - delayed checkpoints during critical operations .alternatively , the interval flag of dmtp_launch permits automatic periodic checkpointing .dmtcp plugins make it easier to use event hooks to detach such external resources as a database prior to checkpoint , and to reconnect during restart .while dmtcp will save and restore the file offset of open files , event hooks make available an alternative of cleanly closing valuable files during checkpoint , and re - opening them during restart . in another example , dmtcp virtualizes network addresses to enable transparent migration to a new cluster .finally , if a large region of memory is not actively used at the time of checkpoint , then the size of the checkpoint image can be considerably reduced .an event hook allows the application to write zeroes into the inactive memory at checkpoint time , and dmtcp will then replace the zeroes by zero - fill - on - demand pages ( empty pages to be recreated on demand ) . 
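as a concrete sketch of how the preemption / opportunistic - use and long - running - job cases of section [ sec : usecase ] could be handled with just the commands shown above ( the application name , paths and the exact spelling of the interval option are illustrative and should be checked against the installed dmtcp version ) :
....
# launch under dmtcp control , writing a checkpoint automatically every hour
dmtcp_launch --interval 3600 ./cmsRun myconfig.py

# ... or force a checkpoint on demand , e.g. just before the opportunistic
# resource has to be handed back
dmtcp_command --checkpoint

# later , on the same or a compatible node , resume every process of the
# computation from the script written at checkpoint time
./dmtcp_restart_script.sh
....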
among the contributed plugins for dmtcpis the rm ( resource manager ) plugin , which supports use of dmtcp within the torque and slurm batch queues .plugin support for additional batch queues is planned .similarly , future support for the intel scif network is planned , allowing one to checkpoint a computation over a network of intel mic cpus .the scif plugin will be based on the existing contributed plugin to support checkpointing over infiniband .other contributed plugins support checkpointing a network of virtual machines .virtual machines ease the job of deploying complex software .to investigate the characteristics of dmtcp with hep software , we chose to make tests using the cms simulation application .the machine used to do the test was a dual quad - core intel xeon l5520 operating at 2.27ghz , with 24 gb of memory .tests were done with checkpoint files written to a local disk as well as the normal job output files .the test job had no input event file .however it reads conditions via a web squid from a remote database .dmtcp version 1.2.7 and cmssw version , compiled with gcc version 4.7.2 , were used for the tests . for simplicitythe test job used with checkpointing was the only cpu - intensive process running on the machine at the time the tests were performed .the tests were done using the external dmtcp coordinator trigger to produce a checkpoint , rather than the api .the cms application generated simple minimum bias events and simulated them with geant4 .the particular test job itself has a 2 minute initialization phase and then takes an average time per event of seconds .the memory footprint is gb vsize ( 750 mb rss ) .a typical uncompressed checkpoint takes .5s and the resulting size on disk of the checkpoint file was mb . when triggering checkpointing with the compression on , was required .the checkpoint image was however significantly smaller , at only mb . in both casesno problems were seen in restarting the application from the checkpoint files .the construction of the wlcg was greatly facilitated by the convergence around the year 2000 on commodity x86 hardware and the standardized use of linux as the operating system for scientific computing clusters .even if multiple generations of x86 hardware ( and hardware from both intel and amd ) are provided in the various computer centers , this was a far simpler situation than the previous typical mix of proprietary unix operating systems and processors . until around 2005, a combination of increased instruction level parallelism and ( in particular ) processor clock frequency increases insured that performance gains expected from moore s law would be seen by single sequential applications running on a single processor . 
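the same commands also cover the `` skip the cpu - intensive initialization '' use case : run a single instance , checkpoint it once the initialization phase has finished , and resume the image in many batch slots . a minimal sketch follows ( timings refer to the test job above ; file and application names are illustrative , and how each clone is subsequently steered to its own random seed or input file is application - specific and not shown ) :
....
# run one instance and checkpoint it after the ~2 minute initialization
dmtcp_launch ./cmsRun myconfig.py &
sleep 300                      # generously longer than the initialization phase
dmtcp_command --checkpoint     # writes ckpt_cmsRun_*.dmtcp in the working directory
kill $!

# wrapper executed in each batch slot : resume from the shared image , then
# reconfigure seeds / input files through an application - specific mechanism
dmtcp_restart /shared/path/ckpt_cmsRun_*.dmtcp
....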
the combination of linux , commodity x86 processors and moore s law gains for sequential applications made for a simple software environment .however since around 2005 processors have hit scaling limits , largely driven by overall power consumption .the first large change in commercial processor products as a result of these limits was the introduction of `` multicore '' cpus , with more than one functional processor on a chip .at the same time clock frequencies ceased to increase with each processor generation and indeed were often reduced relative to the peak .the result of this was one could no longer expect that single , sequential applications would run faster on newer processors .however in the first approximation , the individual cores in the multicore cpus appeared more or less like the single standalone processors used previously .most large scientific applications ( hpc / parallel or high throughput ) run in any case on clusters and the additional cores are often simply scheduled as if they were additional nodes in the cluster .this allows overall throughput to continue to scale even if that of a single application does not .it has several disadvantages , though , in that a number of things that would have been roughly constant over subsequent purchasing generations in a given cluster ( with a more or less fixed number of rack slots , say ) now grow with each generation of machines in the computer center .this includes the total memory required in each box , the number of open files and/or database connections , increasing number of independent ( and incoherent ) i / o streams , the number of jobs handled by batch schedulers , etc .the specifics vary from application to application , but potential difficulties in continually scaling these system parameters puts some pressure on applications to make code changes in response , for example by introducing thread - level parallelism where it did not previously exist .there is moreover a more general expectation that the limit of power consumption on future moore s law scaling will lead to more profound changes going forward . in particular ,the power hungry x86 - 64 `` large '' cores of today will likely be replaced by simpler and less power hungry `` small '' cores .one example of such a technology is the intel mic architecture , as implemented in the intel xeon phi coprocessor card .to test the use of checkpointing on the xeon phi , we used a beta version of geant4 version 10 which provides support for event - based multi - threaded applications .we did not use the full cms simulation for this , but instead a simpler benchmark application ( fullcms ) which uses the actual cms geometry imported from gdml file .the experimental setup was a standard intel xeon box with 32 logical cores , equipped with an intel xeon phi 5120p coprocessor card . in our testthe application is started and checkpoints are triggered in different moments . then we have restarted the application from the checkpoint file and we have verified that the application resumes correctly from the saved state .this condition is verified checking that the final output of the simulation equals the original one .a comparison of random number engine status at the end of the job . 
since a geant4 application makes very heavy use of random numbers ( billions of calls to the engine in a typical application ) , if the state restored from the checkpoint file does not match the original one exactly , the sequence of random number calls will differ , producing a different final state of the random number engine . a typical geant4 application with multi - thread support consists of a sequential part in which the geometry of the experimental setup is built in memory and the physics processes are initialized ( e.g. material - dependent cross sections are calculated ) . threads are then spawned and initialized , and they start to simulate events independently ( see figure [ fig : geant4-ckpt ] ) . to reduce the total memory footprint , the most memory - consuming objects are shared between threads . the need for synchronization between threads ( locks , barriers ) is minimized since only read - only objects are shared . we have performed tests to verify the correct behavior of dmtcp for two of the use cases described in section [ sec : usecase ] . in the first case we instrumented the geant4 application code with a call to dmtcp to write a checkpoint file at the end of the initialization phase ( figure [ fig : ckpt1 ] ) . on the intel xeon phi accelerator the initialization takes about 5 minutes . the checkpointing itself takes about 1 minute ( the application working directory , physically located on the host , was mounted by the coprocessor through nfs ) and the resulting checkpoint image file is 1.4 gb ( uncompressed ) . restarting from the checkpoint image file takes less than 10s . functionally this appears to work as expected . the resulting checkpoint image file can thus be distributed to other nodes and the simulation process `` cloned '' without the startup cost , simply by resetting the random number seeds . for the second test we emulated the use case of some threads being slower than others in producing results . this can happen if one or more threads is simulating more complex - than - average events . to control this behavior we modified our application so that half of the threads were responsible for simulating simple and fast events ( low - energy single particles ) while the other half was responsible for longer ones ( ten times higher energy ) . in the current geant4 multi - threaded mode the application waits for all threads before terminating , thus leaving half of the mic cores unused . we instrumented the application code to detect when the number of active threads drops below a given value ; in that case a checkpoint is triggered ( figure [ fig : ckpt2 ] ) . also in this case we verified that the application behaved as expected . it is important to note that we did not have to modify geant4 `` kernel '' code to enable checkpointing : all code modifications were done at the application level .
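the control logic of that second test can be sketched in a few lines . the sketch below is ours , not the code used in the tests ( which lives inside the geant4 application , in c++ ) : it launches deliberately unbalanced worker threads and asks an external dmtcp coordinator for a checkpoint once the number of live workers falls below a threshold . the call to dmtcp_command is an assumption about the local dmtcp installation ; the exact flag name differs between dmtcp releases , and the call fails gracefully when dmtcp is not installed .

```python
# Illustrative sketch only: monitor worker threads and request a DMTCP
# checkpoint from the external coordinator once too few workers remain.
# "dmtcp_command --checkpoint" is assumed to be on PATH; flag names vary
# between DMTCP releases, so treat the call as a placeholder.
import random
import subprocess
import threading
import time

def simulate_events(n_events, mean_time):
    """Stand-in for a worker thread that simulates n_events events."""
    for _ in range(n_events):
        time.sleep(random.expovariate(1.0 / mean_time))

def request_checkpoint():
    """Ask the (already running) external DMTCP coordinator to checkpoint."""
    try:
        subprocess.run(["dmtcp_command", "--checkpoint"], check=False)
    except FileNotFoundError:
        print("dmtcp_command not found; checkpoint request skipped")

def monitor(workers, min_active, poll=0.5):
    """Trigger one checkpoint when fewer than min_active workers are alive."""
    while True:
        alive = sum(w.is_alive() for w in workers)
        if alive == 0:
            return
        if alive < min_active:
            request_checkpoint()
            return
        time.sleep(poll)

if __name__ == "__main__":
    # Half the workers get "fast" events, half get "slow" ones, mimicking
    # the imbalanced workload used in the second test described above.
    workers = [threading.Thread(target=simulate_events,
                                args=(20, 0.02 if i % 2 else 0.2))
               for i in range(8)]
    for w in workers:
        w.start()
    monitor(workers, min_active=5)
    for w in workers:
        w.join()
```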
for the first test ( checkpointing at first event ) we have provided feedback to geant4 developers that have introduced a new user - hook , not present in the initial design of the code , that allows for the execution of ( optional ) user code just before the first event is simulated but after all threads have been fully initialized ( this guarantees that the checkpointing is performed in a reproducible state of the application ) .both tests show that checkpointing can be used to increase the efficiency of resource usage also on accelerator technologies where the minimization of the time spent in sequential fractions of the code or with only few threads active is fundamental to efficiently use the hardware resources .we have made investigations into the use of the checkpoint - restart technology dmtcp with hep applications from cms and geant4 .we have reported on the performance seen , both on a traditional x86 - 64 architecture and on intel s xeon phi , for situations relevant for a number of interesting use cases for hep computing .we believe that the results obtained are very encouraging and demonstrate the viability of the use of this technology in the hep environment .this work was partially supported by the national science foundation under grant oci-0960978 ( arya , cooperman ) and cooperative agreement phy-1120138 ( elmer ) as well as by the u.s .department of energy ( dotti ) .9 evans l and bryant p 2008 lhc machine _ jinst _ * 3 * s08001 bird i 2011 computing for the large hadron collider _ annual review of nuclear and particle science _ * 61 * 99 - 118 chatrchyan s et al ( cms collaboration ) 2008 the cms experiment at the cern lhc _ jinst _ * 3 * s08004 aad g et al ( atlas collaboration ) 2008 the atlas experiment at the cern large hadron collider _ jinst _ * 3 * s08003 chatrchyan s et al ( cms collaboration ) 2012 observation of a new boson at a mass of 125 gev with the cms experiment at the lhc _ phys.lett . _* b716 * 30 - 61 aad g et al ( atlas collaboration ) 2012 observation of a new particle in the search for the standard model higgs boson with the atlas detector at the lhc_ phys.lett ._ * b716 * 1 - 29 ansel j , arya k and cooperman g 2009 dmtcp : transparent checkpointing for cluster computations and the desktop _ proc .ieee intl .parallel and distributed processing symposium _( rome ) agostinelli s et al 2003 geant4 - a simulation toolkit _ nuclear instruments and methods in physics research _ * a 506 * 250 - 303 fuller s h and millet l i ( editors ) 2011 _ the future of computing performance : game over or next level ? _ the national academies press . | process checkpoint - restart is a technology with great potential for use in hep workflows . use cases include debugging , reducing the startup time of applications both in offline batch jobs and the high level trigger , permitting job preemption in environments where spare cpu cycles are being used opportunistically and efficient scheduling of a mix of multicore and single - threaded jobs . we report on tests of checkpoint - restart technology using cms software , geant4-mt ( multi - threaded geant4 ) , and the dmtcp ( distributed multithreaded checkpointing ) package . we analyze both single- and multi - threaded applications and test on both standard intel x86 architectures and on intel mic . the tests with multi - threaded applications on intel mic are used to consider scalability and performance . these are considered an indicator of what the future may hold for many - core computing . |
traders or asset managers willing to sell blocks of shares are increasingly using execution algorithms . amongst the strategies proposed by brokers , the most widely studied from an academic point of view is the implementation shortfall ( is ) strategy . the classical modeling framework for optimal liquidation , developed by almgren and chriss in their seminal papers , is indeed focused on is orders benchmarked on the arrival price , that is the price at the beginning of the liquidation process . in the case of is orders , the agent faces a trade - off between selling slowly to reduce execution costs and selling rapidly to limit the influence of price fluctuations .although almost all the literature on optimal liquidation deals with is orders , is algorithms usually account for less volume than vwap ( volume weighted average price ) algorithms see for instance .the aim of traders when they choose vwap orders is to focus on the reduction of execution costs : the order is split into smaller ones and the associated transactions occur on a pre - determined period to obtain a price as close as possible to the average price over this period ( weighted by market volume ) .vwap is also a neutral and rather fair benchmark to evaluate execution processes .many agents are willing to trade as close as possible to the vwap as they are benchmarked on the vwap .+ although vwap orders represent a large part of algorithmic trading , there are only a few papers about vwap orders in the academic literature . the first important paper regarding liquidation with vwap benchmarkwas written by konishi .he developed a simple model and looked for the best static strategy , that is the best trading curve decided upon at the beginning of the liquidation process ( his goal is to minimize the variance of the slippage with respect to the vwap ) .he found that the optimal trading curve for vwap liquidation has the same shape as the relative market volume curve when volatility and market volume are uncorrelated .he also quantified the deviation from the relative market volume curve in the correlated case .the model was then extended by mcculloch and kazakov with the addition of a drift in a more constrained framework .mcculloch and kazakov also developed a dynamic model in which they conditioned the optimal trajectory with perfect knowledge of the volume by the available information at each period of time .bouchard and dang proposed in to use a stochastic target framework to develop vwap algorithms .recently , carmona and li also developed a model for vwap trading in which the trader can explicitly choose between market orders and limit orders to buy / sell shares .related to this literature on vwap trading ( see also ) , an academic literature appeared on market volume models .the papers by bialkowski et al . or mcculloch are instances of such papers modeling market volume dynamics and the intraday seasonality of relative volume .however , all these papers ignore an important point : market impact .the only paper dealing with vwap trading and involving a form of market impact is the interesting paper by frei and westray . 
in this paper ,the price obtained by the trader is not the market price but a price depending linearly on the desired volume ( as in the early models of almgren and chriss ) .however , there is no permanent market impact in their model , while it plays an important role in our paper .+ the aim of our model is to include permanent market impact ( see for the framework we use ) and any form of execution costs in a model for vwap liquidation .also , our goal is not to obtain a price as close as possible to the vwap but rather to understand how to provide a guaranteed vwap service while mitigating risk . in other words, we want to find the optimal strategy if we are given a certain quantity of shares and asked to deliver the vwap over a predefined period .in addition to the optimal strategy underlying a guaranteed vwap contract , we are interested in the price of such a contract . for that purpose ,we use indifference pricing in a cara framework , as in where the author prices a large block of shares .this price for a guaranteed vwap contract is the minimum premium the trader needs to pay to the broker so that the latter accepts to deliver the vwap to the former .it depends on the size of the order , on liquidity and market conditions , and on the risk aversion of the broker .+ in section 2 , we present the general framework of our model .we introduce the definition of the vwap and the forms of market impact used in the model . the optimization criterion is defined and the price / premium of a guaranteed vwap contract is defined using indifference pricing . in section 3, we characterize the optimal liquidation strategy when the market volume curve is assumed to be known ( deterministic case ) , along with the premium of the guaranteed vwap contract . in section 4, we focus on special cases and numerics .we show that , in the absence of permanent market impact , the optimal trading curve has the same shape as the market volume curve .we consider also the case of linear permanent market impact and quadratic execution costs that permits to get closed form solutions and to better understand the role played by permanent market impact .finally , an efficient numerical method is provided to approximate the solution in the general case . in the last section ,we extend our model to the case of stochastic volumes and we characterize the price of a guaranteed vwap with a pde . +we consider a filtered probability space corresponding to the available information on the market , namely the market price and market volume of a stock up to the observation time . for ,we denote the set of -valued progressively measurable processes on [ 0,t ] . + we consider a trader who wants to sell shares over the time period ] , i.e. 
: the above definition of the vwap can also be formulated as : where .this formulation is often used in the literature but it has an important drawback as the above integral is not -adapted .its natural filtration is indeed where : in the above definition of , we did not include our own volume .an alternative definition , including our trades , is we shall see in the appendix of this article that the results we obtain with our simpler definition can be easily modified to be true in the case of the alternative definition .the rationale for our optimization criterion is the following .we consider a stock trader or an asset manager who wants to sell shares at the vwap over the period ] , minus a premium hereafter denoted to compensate the intermediary for the service and the associated costs , this premium being agreed upon at time .the problem we address is the problem of the intermediary : he receives shares and sells them over the period ] such that and and we define the application by : we shall denote for short . + in order to prove the existence of an optimal strategy , we first show the technical lemma : [ lemme technique existence ] verifies the three following assertions : * is convex with respect to the third variable ; * there exists and such that as and , q\in \r , \ell(t , q , v ) \geq\theta(\left| v\right| ) -c_0;\ ] ] * for all there exists such that for all ] , , where is a modulus of continuity .* proof : * + ( i ) follows from the convexity of and the positivity of .+ for ( ii ) , we first see that for any ] : for ( iii ) , we have for all \times \r\times \r \times \r ] .* proof : * + we divide the proof in three steps .+ _ step 1 : _ we first show that any strategy can be improved by considering a new one taking values in ] : eventually , we have that and then : with strict inequality whenever . + _ step 2 : _ we then show the uniqueness of the minimizer .let us consider and two minimizers of such that . we know, using step 1 , that and take values in ] and strict convexity of , we have , which contradicts the optimality of and .+ _ step 3 : _ the existence of the solution follows from theorem 6.1.2 in , where we show in lemma [ lemme technique existence ] that verifies the three required conditions . we shall characterize as the solution of the hamiltonian system associated to the optimization problem . + the function is not convex and so the classical results of and can not be applied . sincethe optimal solution must verify , we modify the problem and introduce .we then define the associated function on \times \r \times \r ] : * proof * : + the hamiltonian of the system is : the characterization of given in the theorem 6 of and its corollary is : where stands for the subdifferential . + given the expression for , only may not be a singleton made of a real number , when .if is finite , then the expression given in the proposition is obtained by straightforward computation . if , then and the expression in the proposition is correct , giving whenever . this hamiltonian characterization allows to get a regularity result for the optimal strategy if is continuous then ) ] . 
+ the above remarks only apply in the absence of permanent market impact but the assumption of a deterministic market volume is acceptable if we deviate slightly through the introduction of a small permanent market impact .now , our goal is to understand the influence of permanent market impact in this framework , and the nature of the related deviation of the optimal strategy from the relative market volume curve .+ we now explore , for flat volume curves , the particular case where execution costs are quadratic , i.e. , and where permanent market impact is linear , i.e. , as in the initial almgren - chriss framework .[ ac ] assume , and .we have : and where .\ ] ] * proof : * + we first see that .then , using proposition [ prop : deterministic hamiltonian ] , we obtain that : this leads to : therefore , the optimal strategy if of the form : the initial and terminal conditions imply that : we obtain then : since , we obtain : .\ ] ] coming to the premium , we have : this result permits to understand the role played by permanent market impact .we indeed have that and therefore that the liquidation must occur more rapidly with the addition of permanent market impact .the rationale underlying this point is that the intermediary is going to pay to the client and therefore he has an incentive to sell rapidly so that the price moves down , resulting in a lower vwap .if is large , the optimal strategy may even be to oversell before buying back the shares so as to reduce the value of the vwap ( see below ) .+ coming to the premium for a guaranteed vwap , it is straightforward to see that if the premium would be equal to .therefore , the reduction in the premium due to the use of the optimal strategy is given by + in particular , in the limiting case , corresponding to a risk neutral agent , we obtain the following straightforward formulas for the optimal strategy and the premium : several examples of optimal vwap liquidation are given on figure 1 . we see that taking permanent market impact into account is important since the optimal trading curve may be really different from the simple trading curve obtained in proposition [ noperm ] .+ , , , , , , day .plain line : .dot - dashed line : .the dashed line corresponds to .,scaledwidth=95.0% ] it is important to understand what is at play here .an agent willing to sell shares at a price close to the vwap over a given period has usually two possibilities .he may call his favorite broker and ask for an agency vwap order . in that case , the broker will try to sell shares as close as possible to the vwap and the price obtained will be the price for the agent .in other words , the risk is borne by the agent .the other possibility is to enter a guaranteed vwap contract . in that case, the price obtained by the agent will always be the vwap .our point is that the vwap obtained by the agent in a guaranteed vwap contract is not the same as the vwap obtained on average through agency trades . in a guaranteed vwap contract ,the counterpart has indeed an incentive to sell more rapidly in order to push down the price and hence push down the vwap .this is not market price manipulation , as the overall impact of the execution process would be the same independently of the trajectory .this is however a form of vwap manipulation .is the agent harmed ? somehow yes , although he gets its benchmark price . 
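the mechanism described in this discussion can be reproduced with a small , self - contained computation . the sketch below is ours , not the paper s code : it uses a deliberately crude , risk - neutral discrete model in the spirit of the almgren - chriss setting referred to above , with flat market volume , a price carrying linear permanent impact of the cumulative quantity sold , and a quadratic temporary execution cost . the symbols ( kappa for the permanent impact , eta for the temporary cost , q for the quantity , t for the horizon ) and all parameter values are our own assumptions , chosen only to illustrate the trade - off : with permanent impact switched on , a front - loaded schedule lowers the vwap owed to the client and beats the uniform ( volume - proportional ) schedule , whereas with no permanent impact the uniform schedule wins .

```python
import numpy as np

def provider_pnl(rates, dt, S0=100.0, kappa=1e-5, eta=1e-6):
    """P&L of a guaranteed-VWAP provider for a selling schedule (toy model).

    rates[k] is the selling rate on step k (shares per unit time).  The price
    carries linear permanent impact of the shares already sold, each trade
    pays a quadratic temporary cost eta*rate per share, and market volume is
    flat, so the market VWAP is simply the time average of the price.
    """
    rates = np.asarray(rates, dtype=float)
    sold_before = np.concatenate(([0.0], np.cumsum(rates * dt)[:-1]))
    price = S0 - kappa * (sold_before + 0.5 * rates * dt)   # mid-step impact
    cash = np.sum(rates * dt * (price - eta * rates))       # proceeds - temporary cost
    vwap = price.mean()                                     # flat-volume VWAP
    return cash - np.sum(rates * dt) * vwap                 # provider owes Q * VWAP

if __name__ == "__main__":
    T, n, Q = 1.0, 200, 1e6                 # one day, 200 steps, 1e6 shares
    dt = T / n
    uniform = np.full(n, Q / T)                       # constant selling rate
    front = np.zeros(n); front[: n // 2] = 2 * Q / T  # sell everything early
    for kappa in (1e-5, 0.0):
        print(f"kappa = {kappa:.0e}: "
              f"uniform P&L = {provider_pnl(uniform, dt, kappa=kappa):+.3e}, "
              f"front-loaded P&L = {provider_pnl(front, dt, kappa=kappa):+.3e}")
```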
nonetheless ,since the counterpart of the contract makes money by selling more rapidly at the beginning of the execution process , he can redistribute part of it through a reduction of the premium ... + to measure the difference between the naive strategy and the optimal strategy , the best indicator is the premium of a guaranteed vwap contract .we considered the same cases as on figure 1 , that is , , , , , , day , and two scenarios for the risk aversion parameter .the results on figure 1 state that if the naive strategy was used , the minimum price of a guaranteed vwap contract would be bps .however , when optimal strategies are used in the above examples , the intermediary would accept the contract without the payment of a premium , as the theoretical value of the premia are in fact negative ..premium of a guaranteed vwap contract in the case of a naive strategy and in the case of the optimal strategy .[ cols="^,^,^",options="header " , ] we treated above special cases for which closed form formulas could be obtained . in general , this is not the case and we present here a general method to approximate the solution of the hamiltonian system .it is important to notice that the use of the hamiltonian system is preferable to the use of euler - lagrange equation when it comes to numerics since the problem remains of order 1 .the method we use to approximate the solution of the hamiltonian system on the grid is to apply a newton method on the following nonlinear system of equations : to be more precise , we consider a first couple where : typically , we consider given by .+ then , to go from to we consider the following method : where solves the linear system : two examples of the use of this method are shown on figure 2 and figure 3 .the first one corresponds to with and linear permanent market impact .the second one corresponds to with and nonlinear permanent market impact of the form with . , , , , with and , , day .plain line : .the dashed line corresponds to .,scaledwidth=75.0% ] , , , , with and , with and , day .plain line : .the dashed line corresponds to .,scaledwidth=75.0% ]in the previous sections of this article , we focused on the case of a deterministic volume curve .the reason for this is twofold .firstly , it permits to understand the role played by permanent market impact in a tractable case .secondly , it corresponds to the way vwap strategies are often built in practice and we explained above why it was a good approximation . practitioners usually compute relative market volume curves based on historical data and try to follow this curve to get a price as close as possible to the vwap .+ we now briefly explore the case of stochastic market volume .the aim of this section is to provide a hamilton - jacobi - bellman pde to characterize the optimal liquidation strategy ( that is no longer deterministic , and hence no longer a trading curve decided upon in advance , at time ) and the premium of a guaranteed vwap contract .a similar approach , using stochastic optimal control , was adopted by frei and westray in the mean - variance setup . 
however , in their paper , the initial filtration is augmented with the knowledge of the final volume and this makes their approach questionable for practical use .+ in the model we consider , the instantaneous market volume is modeled by a simple stochastic process but it can be generalized to other processes .in fact , our main goal is to write the hamilton - jacobi - bellman pde characterizing the solution of our problem using as few variables as possible .if one considers indeed our problem with stochastic market volume in its initial form , 7 variables are necessary to describe the problem : the time , the trader s inventory , the asset market price , the cash account , the instantaneous market volume , the cumulated market volume , and a variable linked to the vwap , namely . using two changes of variables ,we manage to restrict the number of variables to 5 .classical numerical ( pde ) methods may fail to approximate the solution of the pde , but some probabilistic methods may be efficient ( see for robust methods in the case of problems in high dimension ) .+ coming to the model , we consider that the instantaneous volume is given by , where is , where is a brownian motion independent of , and where .the dynamics of is then given by the stochastic differential equation : represents obviously the instantaneous market volume , on average , at time . in europe , it is a w - shaped curve with a peak corresponding to the opening of the us market .+ other dynamics can be considered .the goal of this last section is not to consider the best possible model for volumes but rather to show how the complexity of the model can be reduced through changes of variables .+ in order to consider a non - degenerated problem ( one may alternatively use the stochastic target framework ) , we consider a slightly modified problem where is not forced to be equal to . rather than imposing , we consider that , at time , the remaining stocks are not liquidated at price but rather at price , where is chosen positive and high enough to discourage the trader to keep a large position at time .+ in this slightly modified framework , with computations similar to those of section [ subsect : some easy calculations ] , we obtain the following : \\ & + \int_0^{q_t}f(z)dz - q_t f(q_0-q_t)-k q^2_t.\end{aligned}\ ] ] mathematically , it corresponds to a penalization at time of the form , and we suppose that . + now , we define the set of admissible strategies for all ] , we define : .\end{aligned}\ ] ] this is the dynamic problem associated to and we observe that corresponds to . we then have the following : is a viscosity solution of : where the operator is defined by : * proof : * + it is straightforward to see that the value function is locally bounded .the result is then classically obtained by stochastic control technics .the required dynamic programming principle can indeed be deduced from the apparatus developed in , and the viscosity subsolution and supersolution are obtained using . it is noteworthy that we managed to remove the price from the state variables .we now remove , using a change of variables . for that purpose ,we introduce .then we have : is a viscosity solution of : \times \r^3\times \r_+ \times \r_+^{*}. \\ \tilde{u}(t , y , q , q , v ) = -\exp\left(-\gamma \left ( \frac{y}{q}-h(q ) \right ) \right ) .\end{array}\right.\ ] ] with defined by : * proof : * + the change of variables being monotonically increasing , the result is obtained by straightforward computation . 
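for readers who want to experiment with this part of the model , the sketch below simulates one plausible specification of the intraday market volume process in the spirit of the description above : a deterministic seasonal profile g ( roughly u - shaped , with an extra bump when the us market opens ) multiplied by the exponential of a mean - reverting ornstein - uhlenbeck factor . the particular sde , the profile g and the parameter names ( k for the mean - reversion speed , alpha for the volatility ) are our own assumptions for illustration , not the exact dynamics used in the paper .

```python
import numpy as np

def simulate_intraday_volume(n_steps=390, k=5.0, alpha=0.4, seed=0):
    """Simulate V_t = g(t) * exp(X_t) on [0, 1] (assumed toy specification).

    g is a crude seasonal profile (busy open and close, plus an afternoon
    bump), and X is an Ornstein-Uhlenbeck factor mean-reverting to zero at
    speed k with volatility alpha, simulated by Euler-Maruyama.
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_steps)
    g = 1.0 + 6.0 * (t - 0.5) ** 2 + 0.5 * np.exp(-((t - 0.7) ** 2) / 0.005)
    dt = 1.0 / n_steps
    x = np.zeros(n_steps)
    for i in range(1, n_steps):
        x[i] = x[i - 1] - k * x[i - 1] * dt + alpha * np.sqrt(dt) * rng.standard_normal()
    return t, g * np.exp(x)

if __name__ == "__main__":
    t, V = simulate_intraday_volume()
    print("simulated intraday volume: min %.2f, mean %.2f, max %.2f"
          % (V.min(), V.mean(), V.max()))
```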
now , we are going to prove a technical lemma in order to show that ( or equivalently ) is always negative .for any \times \r^3\times \r_+ \times \r_+^ { * } ] and .we have by jensen s inequality : \le -\exp\left(-\gamma \mathcal{e}^v \right),\ ] ] where \\ & = \e \left[x + \frac{y}{q_t } + \sigma q_0 \int_t^t \left ( \frac{q_s}{q_0}-\frac{q_t - q_s}{q_t } \right)dw_s\right.\\ & + \left.\int_t^t \left(-v_s l \left ( \frac{v_s}{v_s } \right ) + \frac{v_s}{q_t } q_0f(q_0-q_s ) \right ) ds - h(q_t)\right].\\\end{aligned}\ ] ] since , we have =0 ] . therefore : + \e \left[\int_t^t \left(-v_s l \left ( \frac{v_s}{v_s } \right ) + \frac{v_s}{q_t } q_0 f(q_0-q_s ) \right ) ds - h(q_t)\right].\ ] ] since is nonincreasing on , there exists a constant such that : now , since is superlinear , there exists such that : this gives : + \e \left[\int_t^t \left(b - cq_0 |v_s| + \frac{v_s}{q_t } cq_0 \left(1 + ( q_0-q_s)_+\right ) \right ) ds - h(q_t)\right]\\ & \le x + \e \left[\frac{y}{q_t}\right ] + bt + cq_0 + \e \left[\int_t^t \left(- cq_0 |v_s| + \frac{v_s}{q_t } cq_0 ( q_0-q_s)_+ \right ) ds - h(q_t)\right]\\ & \le x + \e \left[\frac{y}{q_t}\right ] + bt + cq_0 + \e \left[cq_0 \left((q_0-q_t)_+ - \frac{q}{q_t } ( q_0-q)_+ \right ) - h(q_t)\right]\\ & + \e \left[\int_t^t \left(- cq_0 |v_s| - c q_0 \frac{q_s}{q_t } v_s \right ) ds\right]\\ & \le x + bt + cq_0 + \e \left[\frac{y}{q_t}\right ] + \e \left [ cq_0 ( q_0-q_t)_+ - h(q_t ) \right].\\\end{aligned}\ ] ] since , we get that < + \infty ] exists ( and is independent of ) .putting these inequalities altogether , we get : therefore , now , since is never equal to , we can consider the change of variables .this new change of variables does not remove another variable but it has two related advantages .firstly , is in the same unit as the cash account .hence , it takes values in a range that can be evaluated in advance .this is particularly important when it comes to numerics .secondly , the premium for a guaranteed vwap contract is straightforwardly : easy computations lead to the fact that is a viscosity solution of : where is the legendre transform of , and where the nonlinear operator is defined by : \\ & + v \partial_q + v \frac{g'(t)}{g(t ) } \partial_v + \frac{\alpha^2 v^2}{2 } \left [ \gamma \left ( \partial_v \right)^2 + \partial_{vv}\right].\end{aligned}\ ] ]in this article we built a model to find the optimal strategy to liquidate a portfolio in the case of a guaranteed vwap contract .when there is permanent market impact , we showed that the best strategy is not to replicate the vwap but rather to sell more rapidly to push down the vwap .also , we use the indifference pricing approach to give a price to guaranteed vwap contracts and we showed that taking into account permanent market impact permits , at least theoretically , to reduce substantially the price of guaranteed vwap contracts . finally , in the case of stochastic volumes , we developed a new model with only 5 variables and not 7 variables as in a naive approach . +in section 2 , we briefly discussed two alternative definitions of the vwap over ] , and we have equivalence between the two following assertions : this result is straightforward with the use of theorem [ existence lambda ] . to compute numerically , we need to compute the values of the function can be done through a numerical approximation of using the same numerical methods as in section 4 .r. almgren and n. chriss .value under liquidation .risk , 12(12):61 - 63 , 1999 .r. almgren and n. chriss . 
optimal execution of portfolio transactions .journal of risk , 3:5 - 40 , 2001 .asx , algorithmic trading and market access arrangements , february 2010 .j. bialkowski , s. darolles and g. le fol , improving vwap strategies : a dynamic volume approach , journal of banking and finance , elsevier , vol .32(9 ) , pages 1709 - 1722 , september 2008 .j. bialkowski , s. darolles and g. le fol , how to reduce the risk of executing vwap orders ? - new approach to modeling intraday volume , 2006 preprint b. bouchard , and n.m .dang , generalized stochastic target problems for pricing and partial hedging under loss constraints - application in optimal book liquidation , finance and stochastics , 2013 , 17(1 ) , 31 - 72 .b. bouchard and n. touzi , weak dynamic programming principle for viscosity solutions ._ siam journal on control and optimization _, 49 , 3 , 948 - 962 , 2011 .p. cannarsa and c. sinestrari .semiconcave functions , hamilton - jacobi equations , and optimal control , volume 58 .birkhauser boston , 2004 .r. carmona .indifference pricing : theory and applications .princeton university press , 2009 .r. carmona and h. li , dynamic programming and trade execution , ( li . phd thesis ) 2013 .fleming , and h.m .soner , controlled markov processes and viscosity solutions , 2nd edition , springer - verlag , 2005 . c. frei , and n. westray , optimal execution of a vwap order : a stochastic control approach , 2013 to appear in mathematical finance .e. gobet , j .-lemor , and x. warin , a regression - based monte carlo method to solve backward stochastic differential equations .15 2172 - 2202 o. gueant , execution and block trade pricing with optimal constant rate of participation , 2012 preprint o. gueant . optimal execution and block trade pricing , a general framework , 2012 preprint .o. gueant .permanent market impact can be nonlinear , 2013 preprint .m. humphery - jenner , optimal vwap trading under noisy conditions , journal of banking and finance , vol .35 , no . 9 , 2011 .kakade , m. kearns , y. mansour and l. e. ortiz , competitive algorithms for vwap and limit order trading , proceedings of the acm electronic commerce conference , 2004 .h. konishi , optimal slice of a vwap trade , journal of financial markets , volume 5 , issue 2 , april 2002 , pages 197 - 221 c .- a .lehalle , s. laruelle , market microstructure in practice , world scientific , 2014 j. mcculloch , relative volume as a doubly stochastic binomial point process , no 146 , research paper series , 2005 , quantitative finance research centre , university of technology , sydney .j. mcculloch and v. kazakov , optimal vwap trading strategy and relative volume , no 201 , research paper series , 2007 , quantitative finance research centre , university of technology , sydney .j. mcculloch and v. kazakov , mean variance optimal vwap trading , 2012 preprint .d. possamai , g. royer and n. touzi , on the robust superhedging of measurable claims , 2013 preprint .rockafellar , conjugate convex functions in optimal control and the calculus of variations , j. math .analysis appl .32 ( 1970 ) , 174 - 222 .generalized hamiltonian equations for convex problems of lagrange , pacific j. math .33 ( 1970 ) , 411 - 427 a. schied , t. schoneborn , and m. tehranchi .optimal basket liquidation for cara investors is deterministic .applied mathematical finance , 17(6):471 - 489 , 2010 .m. schweizer , approximation pricing and the variance - optimal martingale measure , annals of probability 24 , 206 - 236 , 1995 .a. 
wranik , a trading system for flexible vwap executions as a design artefact , 2009 , 13th pacific asia conference on information systems ( pacis ) , hyderabad , india . | optimal liquidation using vwap strategies has been considered in the literature , though never in the presence of permanent market impact and only rarely with execution costs . moreover , only vwap strategies have been studied and the pricing of guaranteed vwap contracts has never been addressed . in this article , we develop a model to price guaranteed vwap contracts in a general framework for market impact and we highlight the differences between an agency vwap and a guaranteed vwap contract . numerical methods and applications are also provided . * key words : * optimal liquidation , vwap strategy , guaranteed vwap contract , optimal control , indifference pricing |
the hankel transform of order of a function \rightarrow\mathbb{c} ] for .as expected , the asymptotic expansion becomes more accurate as increases .in particular , for sufficiently large the error term is negligible , i.e. , .it is important to appreciate that is an asymptotic expansion , as opposed to a convergent series , and does not converge pointwise to as . in practice , increasing will eventually be detrimental and lead to severe numerical overflow issues .figure [ fig : asydivergence ] ( right ) shows the error as and .thus , the appropriate use of is to select an and ask : `` for what sufficiently large is the asymptotic expansion accurate ? '' .for example , from figure [ fig : asydivergence ] ( left ) we observe that for we have for any and hence , we may safely replace by a weighted sum of and for when . besselasymptotics ( 50,0 ) ( -1.5,27 ) ( 75,51.5 ) ( 74,41 ) ( 73,32 ) ( 71,25.2 ) ( 68,19 ) ( 44,20 ) ( 35.2,22.2 ) fixedz ( 50,0 ) ( -1.5,27 ) more generally , it is known that the error term is bounded by the size of the first neglected terms ) .since and we have this is essentially a sharp bound since for any there is a such that is larger than the magnitude of the first neglected terms . of the form for sufficiently large . ]let be the smallest real number so that if then . from we can take as the number that solves in general , the equation above has no closed - form solution , but can be solved numerically by a fixed - point iteration .we start with the initial guess of and iterate as follows : we terminate this after four iterations and take .table [ tab : smtable ] gives the calculated values of for and .cccccccccccc ' '' '' ' '' '' & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 + ' '' '' ' '' '' & 180.5 & 70.5 & 41.5 & 30.0 & 24.3 & 21.1 & 19.1 & 17.8 & 17.0 & 16.5 + & 185.2 & 71.5 & 41.9 & 30.2 & 24.4 & 21.1 & 19.2 & 17.9 & 17.1 & 16.5 + & 200.2 & 74.8 & 43.1 & 30.8 & 24.8 & 21.4 & 19.3 & 18.0 & 17.2 & 16.6 + & 2330.7 & 500.0 & 149.0 & 64.6 & 41.4 & 31.4 & 26.0 & 22.9 & 20.9 & 19.6 + for an integer , the taylor series expansion of about is given by ( * ? ? ?* ( 10.2.2 ) ) in contrast to , the taylor series expansion converges pointwise to for any .however , for practical application the infinite series in must still be truncated .let be an integer and consider the truncated taylor series expansion that is given by where is the error term . as the leading asymptotic behavior of matches the order of the first neglected term and hence , .in particular , there is a real number so that if then . to calculate the parameter we solve the equation .remarkably , an explicit closed - form expression for is known and is given by where is the generalized hypergeometric function that we approximate by since we are considering to be small ( * ? ? ?* ( 16.2.1 ) ) . solving find that table [ tab : tttable ] gives the calculated values of for and .ccccccccccc ' '' '' ' '' '' & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 + ' '' '' ' '' '' & & 0.001 & 0.011 & 0.059 & 0.165 & 0.337 & 0.573 & 0.869 & 1.217 + & & 0.003 & 0.029 & 0.104 & 0.243 & 0.449 & 0.716 & 1.039 & 1.411 + & 0.001 & 0.012 & 0.061 & 0.168 & 0.341 & 0.579 & 0.876 & 1.225 & 1.618 + & 0.484 & 0.743 & 1.058 & 1.420 & 1.823 & 2.262 & 2.733 & 3.230 & 3.750 + the neumann addition formula expresses as an infinite sum of products of bessel functions of the form for .it is given by ( * ? ? 
?* ( 10.23.2 ) ) here , we wish to truncate and use it as a numerical approximation to .fortunately , when is small ( so that can be considered as a perturbation of ) the neumann addition formula is a rapidly converging series for .let be an integer , , and .then , for we have where is euler s number .[ lem : neumannbound ] let be an integer , , , and . denote the left - hand side of the inequality in by so that since ( * ? ? ?* ( 10.14.1 ) ) and ( * ? ? ?* ( 10.4.1 ) ) , we have by kapteyn s inequality ( * ? ? ?* ( 10.14.8 ) ) we can bound from above as follows : where the last inequality comes from for .therefore , by substituting into we find that where we used and .lemma [ lem : neumannbound ] shows that approximates , up to an error of , provided that .equivalently , for any table [ tab : neumanntable ] gives the values of for .ccccccccccc ' '' '' ' '' '' & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 + ' '' '' ' '' '' & & & 0.001 & 0.002 & 0.004 & 0.008 & 0.013 & 0.020 + at first it may not be clear why lemma [ lem : neumannbound ] is useful for practical computations since a single evaluation of is replaced by a seemingly trickier sum of products of bessel functions .however , in sections [ sec : fourierbessel ] and [ sec : discretehankeltransform ] the truncated neumann addition formula will be the approximation that will allow us to evaluate fourier bessel expansions and compute the discrete hankel transform .the crucial observation is that the positive roots of can be regarded as a perturbed equally - spaced grid .let denote the positive root of .we know from that the leading asymptotic behavior of is for large .therefore , since for the leading asymptotic behavior of is for large . similarly , the leading asymptotic behavior of is for large and .the next lemma shows that and the ratios can be regarded as perturbed equally - spaced grids .let denote the positive root of .then , and [ lem : besselinequalities ] the bounds on are given in ( * ? ? ?3 ) . for the bound on have ( since and ) the result follows from and .lemma [ lem : besselinequalities ] shows that we can consider as a small perturbation of and that we can consider as a small perturbation of .this is an important observation for sections [ sec : fourierbessel ] and [ sec : discretehankeltransform ] .a schlmilch expansion of a function \rightarrow \mathbb{c} ] , , and .similarly , for sine we have thus , by substituting and into with we obtain a matrix decomposition for , where we used the fact that is a rank matrix .we can write more conveniently as one could now compute by first computing the vector and then correcting it by .the matrix - vector product can be computed in operations since the expression in decomposes as a weighted sum of diagonal matrices and dcts and dsts . while this does lead to an evaluation scheme for schlmilch expansions , it is numerically unstable because the asymptotic expansion is employed for entries for which is small ( see section [ sec : asymptoticexpansion ] ) .numerical overflow issues and severe cancellation errors plague this approach . 
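to make the stability issue concrete , the sketch below evaluates a truncated large - argument ( hankel ) asymptotic expansion of the bessel function and compares it with scipy s bessel routine . we take the standard dlmf 10.17.3 form of the expansion , which is the usual large - argument expansion of the kind discussed in this section ; the truncation level k and the sample arguments are our own choices . the output shows the expansion reaching machine precision once the argument is comfortably above a threshold of the size tabulated earlier , and becoming useless for small arguments , which is exactly why it must not be applied to the small entries of the matrix .

```python
import numpy as np
from scipy.special import jv

def hankel_asymptotic(nu, z, K):
    """K-term large-argument expansion of J_nu(z) (DLMF 10.17.3)."""
    z = np.asarray(z, dtype=float)
    omega = z - 0.5 * nu * np.pi - 0.25 * np.pi
    # a_k(nu) = (4 nu^2 - 1)(4 nu^2 - 9) ... (4 nu^2 - (2k-1)^2) / (k! 8^k)
    a = np.ones(2 * K)
    for k in range(1, 2 * K):
        a[k] = a[k - 1] * (4.0 * nu ** 2 - (2 * k - 1) ** 2) / (8.0 * k)
    cos_sum = sum((-1) ** k * a[2 * k] * z ** (-2 * k) for k in range(K))
    sin_sum = sum((-1) ** k * a[2 * k + 1] * z ** (-2 * k - 1) for k in range(K))
    return np.sqrt(2.0 / (np.pi * z)) * (np.cos(omega) * cos_sum
                                         - np.sin(omega) * sin_sum)

if __name__ == "__main__":
    nu, K = 0, 10
    for z in (1.0, 5.0, 10.0, 20.0, 50.0):
        err = abs(hankel_asymptotic(nu, z, K) - jv(nu, z))
        print(f"z = {z:5.1f}   |{K}-term expansion - J_{nu}(z)| = {err:.2e}")
```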
figure [ fig : partition ] illustrates the entries that cause the most numerical problems . by section [ sec : asymptoticexpansion ] we know that if then . therefore , we can use for the entry of provided that . this is guaranteed , for instance , when where . therefore , we can be more careful by taking the diagonal matrix with for and otherwise , and compute using the following matrix decomposition : where is the matrix . if the entry of is nonzero , then and hence , only approximates by a weighted sum of trigonometric functions when it is safe to do so . figure [ fig : decomposition ] shows how the criterion partitions the entries of . a stable algorithm for evaluating schlmilch expansions follows , as we will now explain . by construction all the entries of are less than in magnitude so its contribution to the matrix - vector product can be ignored , i.e. , we make the approximation . the matrix - vector product can be computed in operations using dcts and dsts by . moreover , the number of nonzero entries in is and hence , the matrix - vector product can be computed in operations using direct summation . this algorithm successfully stabilizes the matrix - vector product that results from as it only employs the asymptotic expansion for entries of for which is sufficiently large .
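a crude stand - alone illustration of this partitioning criterion is sketched below ( ours , not the paper s implementation , which replaces the large - argument block by dcts and dsts rather than evaluating it entry by entry ) . it evaluates sums of the kind considered in this section , with the grid and frequencies chosen as one natural convention ( arguments n pi m / n ) , once with scipy s bessel routine everywhere and once using a truncated asymptotic expansion only on the entries whose argument exceeds a threshold , keeping direct bessel evaluation on the remaining small - argument entries . the threshold , the truncation level and the grid convention are our assumptions ; the point is simply that the two results agree closely because the expansion is never used where it is inaccurate .

```python
import numpy as np
from scipy.special import jv

def asy_j0(z, K=10):
    """K-term large-argument expansion of J_0(z) (DLMF 10.17.3)."""
    z = np.asarray(z, dtype=float)
    omega = z - 0.25 * np.pi
    a = np.ones(2 * K)
    for k in range(1, 2 * K):
        a[k] = a[k - 1] * (-(2 * k - 1) ** 2) / (8.0 * k)
    c = sum((-1) ** k * a[2 * k] * z ** (-2 * k) for k in range(K))
    s = sum((-1) ** k * a[2 * k + 1] * z ** (-2 * k - 1) for k in range(K))
    return np.sqrt(2.0 / (np.pi * z)) * (np.cos(omega) * c - np.sin(omega) * s)

def schloemilch_hybrid(coeffs, threshold=25.0, K=10):
    """Evaluate f(r_m) = sum_n c_n J_0(n pi r_m) at r_m = m/N, m = 1..N.

    Entries with argument >= threshold use the asymptotic expansion; the
    remaining (few) small-argument entries fall back on scipy's routine,
    which keeps the expansion away from where it is inaccurate/unstable.
    """
    N = len(coeffs)
    Z = np.outer(np.arange(1, N + 1) / N, np.pi * np.arange(1, N + 1))
    big = Z >= threshold
    M = np.empty_like(Z)
    M[big] = asy_j0(Z[big], K)
    M[~big] = jv(0, Z[~big])
    return M @ coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    c = rng.standard_normal(500)
    Z = np.outer(np.arange(1, 501) / 500.0, np.pi * np.arange(1, 501))
    exact = jv(0, Z) @ c
    print("max abs difference, hybrid vs direct:",
          np.max(np.abs(schloemilch_hybrid(c) - exact)))
```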
in practice , this algorithm is faster than direct summation for ( see figure [ fig : schlomilchresults ] ) .however , the algorithm has an complexity because the criterion is in hindsight too cautious .we can do better .note that the asymptotic expansion is accurate for all entries of except for of them , where is the indicator function and the last equality is from the leading asymptotic behavior of the harmonic number as .therefore , a lower complexity than seems possible if we use the asymptotic expansion for more entries of .roughly speaking , we need to refine the partitioning of so that the black curve , , in figure [ fig : decomposition ] is better approximated . first , we are only allowed to partition with rectangular matrices since for those entries that we do employ the asymptotic expansion we need to be able to use dcts and dsts to compute matrix - vector products .second , there is a balance to be found since each new rectangular partition requires more dcts and dsts .too many partitions and the cost of computing dcts and dsts will be overwhelming , but too few and the cost of computing will dominate . these two competing costs must be balanced . to find the balance we introduce a parameter . roughly speaking , if then we partition as much as possible and the asymptotic expansion is used for every entry satisfying .if then we do not refine and we keep the algorithm from section [ sec : singlepartition ] . an intermediate value of will balance the two competing costs . figure [ fig : hdecomposition ] shows the partition of that we consider and it is worth carefully examining that diagram .the partition corresponds to the following matrix decomposition : where is the number of partitions , is the diagonal matrix in section [ sec : singlepartition ] , and the matrices and are diagonal matrices with note that if the entry of the matrix is nonzero , then and .hence , and .similarly , if the entry of is nonzero , then . 
therefore , we are only employing the asymptotic expansion on entries of for which it is accurate and numerically stable to do so . each matrix - vector product of the form requires dcts and dsts from and hence , costs operations . there are a total of matrices of this form in so exploiting the asymptotic expansion requires operations . the cost of the matrix - vector product is proportional to the number of nonzero entries in . ignoring the constant , this is approximately ( by counting the entries in rectangular submatrices of , see figure [ fig : hdecomposition ] ) where the last equality follows from the assumption that . to balance the competing cost of exploiting the asymptotic expansion with the cost of computing , we set and find that . moreover , to ensure that the assumption holds we take . thus , the number of partitions should slowly grow with to balance the competing costs . with these asymptotically optimal choices of and , the algorithm for computing the matrix - vector product via requires operations . more specifically , the complexity of the algorithm is since , see figure [ fig : schlomilchresults ] . though the implicit constants in and do not change the complexity of the algorithm , they must still be decided on .
after numerical experiments we set and for computational efficiency reasons . table [ tab : algorithmicparameters ] summarizes the algorithmic parameters that we have carefully selected . the user is required to specify the problem with an integer , a vector of expansion coefficients in , and then provide a working accuracy . all other algorithmic parameters are selected in a near - optimal manner based on analysis .

parameter & short description & formula
& number of terms in &
& & see table [ tab : smtable ]
& partitioning parameter &
& refining parameter &
& number of partitions &

[ rmk : gammashift ] a fast evaluation scheme for , where is a constant , immediately follows from this section . the constants and in and should be replaced by diagonal matrices and , where and , respectively . we now compare three algorithms for evaluating schlmilch expansions : ( 1 ) direct summation ( see section [ sec : existing ] ) ; ( 2 ) our algorithm ( see section [ sec : singlepartition ] ) ; and ( 3 ) our algorithm ( see section [ sec : recursivepartition ] ) . the three algorithms have been implemented in matlab and are publicly available from . figure [ fig : schlomilchresults ] ( left ) shows the execution time for the three algorithms for evaluating schlmilch expansions for . here , we select the working accuracy as and use table [ tab : algorithmicparameters ] to determine the other algorithmic parameters . we find that our algorithm in section [ sec : recursivepartition ] is faster than direct summation for and faster than our algorithm for . in fact , for the number of partitions , , is selected to be and the algorithm in section [ sec : recursivepartition ] is identical to the algorithm in section [ sec : singlepartition ] . for a relaxed working accuracy of or , the algorithm becomes even more computationally advantageous over direct summation . figure [ fig : schlomilchresults ] ( right ) shows the execution time for our algorithm with , , and working accuracies . for each , is a choice of that minimizes the execution time . numerical experiments like these motivate the choice in .
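before moving on : the fourier bessel and discrete hankel transform algorithms in the next two sections lean on the truncated neumann addition formula introduced earlier . a quick numerical sanity check of that truncation is sketched below ( ours ; we take dlmf 10.23.2 , cited above , as the addition formula , and the orders , arguments and truncation levels are our own choices ) . the error decays rapidly with the truncation level as long as the perturbation argument is small , which is precisely the regime in which the formula is used here .

```python
import numpy as np
from scipy.special import jv

def neumann_truncated(nu, u, v, K):
    """Truncated Neumann addition formula (DLMF 10.23.2):
    J_nu(u + v) = sum_{k=-oo}^{oo} J_{nu-k}(u) J_k(v), truncated to |k| <= K.
    The neglected terms are tiny when |v| is small, since J_k(v) = O(v^|k|).
    """
    ks = np.arange(-K, K + 1)
    return np.sum(jv(nu - ks, u) * jv(ks, v))

if __name__ == "__main__":
    nu, u = 0, 37.2
    for v in (0.05, 0.2, 0.8):
        errs = [abs(neumann_truncated(nu, u, v, K) - jv(nu, u + v))
                for K in (1, 2, 4, 8)]
        print("v = %.2f   errors for K = 1, 2, 4, 8:" % v,
              "  ".join("%.1e" % e for e in errs))
```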
by the neumann addition formula we have moreover , by applying the taylor series expansion of bessel functions we have by substituting into we obtain andthe result follows .we now wish to write down a particular instance of that is useful for computing . by lemma [lem : besselinequalities ] we can write , where and is a number such that .in vector notation we have .hence , by proposition [ thm : besseldecomposition ] we have this is a useful matrix decomposition since each term in the double sum can be applied to a vector in operations .the diagonal matrices and have matrix - vector products and has matrix - vector products ( see section [ sec : recursivepartition ] and remark [ rmk : gammashift ] ) . however , for to be practical we must truncate the inner and outer sums .let be an integer and truncate the outer sum in to .by lemma [ lem : neumannbound ] , with and , the truncated neumann addition formula is still accurate , up to an error of , for the entry of provided that solving for we find that that is , once we truncate the outer sum in to the matrix decomposition is still accurate for all the columns of except the first .for example when , and hence , once the outer sum is truncated we must not use on the first columns of . based on numerical experiments we pick to be the smallest integer so that and hence , .next , we let be an integer and truncate the inner sum to . by section [ sec : besseltaylor ]the truncated taylor series expansion is accurate , up to an error of , for the entry of provided that where the minimization is taken over , instead of , because of the relation ( * ? ? ? * ( 10.4.1 ) ) . solving for find that thus , once the inner sum is truncated in to the decomposition is accurate for all but the first columns of .for example when , and we must not use on the first columns of . in practice , we select to be the smallest integer so that . to avoid employing the decomposition with truncated sums on the first columns of , we partition the vector .that is , we write , where first , we compute using with truncated sums in operations and then compute using direct summation in operations . at firstit seems that computing is times the cost of evaluating a schlmilch expansion , since there are terms in .however , the computation can be rearranged so the evaluation of is only times the cost . for and , the valuesthat we take when , we have and so there is a significant computational saving here .since ( * ? ? ?* ( 10.4.1 ) ) we have d_{\underline{b}}^{2t+s}\underline{c}\\ & = \!\!\!\sum_{u=0}^{2t+k-3}\!\!\!\!\ !d_{\underline{r}}^{u}\!\ ! \left[\!\sum_{t=\max(\lceil \frac{u - k+1}{2}\rceil,0)}^{\min(\lfloor \frac{u}{2}\rfloor , t-1)}{}^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!''\,\,\,\,\,\ , } \frac{(-1)^t2^{-u}}{t!(u - t)!}\!\ ! \left[\mathbf{j}_{\nu - u+2t}(\ , \underline{r}\,\tilde{\underline{\omega}}^\intercal ) + ( -1)^{u-2t}\mathbf{j}_{\nu+u-2t}(\ , \underline{r}\,\tilde{\underline{\omega}}^\intercal ) \right ] \! \right ] \!\ !d_{\underline{b}}^{u}\underline{c},\\ \end{aligned}\ ] ] where the single prime on the sum indicates that the first term is halved and the double prime indicates that the last term is halved . here, the last equality follows by letting and performing a change of variables .now , for each , the inner sum can be computed at the cost of one application of the algorithm in section [ sec : recursivepartition ] . 
since each evaluation of a schlmilch expansion costs operations and , a fourier bessel evaluation costs operations .figure [ fig : fbresults1 ] ( left ) shows the execution time for evaluating fourier bessel expansions of order with direct summation and our algorithm with working accuracies of .our algorithm is faster than direct summation for when .relaxing the working accuracy not only decreases and , but also speeds up the evaluation of each schlmilch expansion . when , our algorithm is faster than direct summation for .the algorithmic parameters are selected based on careful analysis so that the evaluation scheme is numerically stable and the computed vector has a componentwise accuracy of , where is the vector 1-norm . in figure[ fig : fbresults1 ] ( right ) we verify this for by considering the error , where is the absolute maximum vector norm and is the vector of values computed in quadruple precision using direct summation . for these experimentswe take the entries of as realizations of independent gaussians so that .fourierbesseltimings ( 47,53 ) ( 43.5,40 ) ( 50,-1 ) ( -.5,20 ) fourierbesselerrors ( 45,29.5 ) ( 45,46.5 ) ( 45,61.5 ) ( 50,-1 ) ( -4,23 )we now adapt the algorithm in section [ sec : fourierbessel ] to the computation of the discrete hankel transform ( dht ) of order , which is defined as where is the positive root of .it is equivalent to the matrix - vector product and is more difficult for fft - based algorithms than the task in section [ sec : fourierbessel ] because it may be regarded as evaluating a fourier bessel expansion at points that are not equally - spaced . to extend our algorithm to this setting we consider the ratios as a perturbed equally - spaced grid . by lemma [ lem : besselinequalities ]we have we can write this in vector notation as , where . since and , we have by theorem [ thm : besseldecomposition ] the following matrix decomposition : each term in can be applied to a vector in operations .the diagonal matrices and have matrix - vector products and by section [ sec : fourierbessel ] the matrix - vector product can be computed in operations .so to compute pad the vector with zeros , use the algorithm in section [ sec : fourierbessel ] with replaced by , and then only keep for .] however , for to be practical we must truncate the double sum .let and be integers and consider truncating the sum in to . using similar reasoning to that in section [ sec : fourierbessel ] , if we truncate then the matrix decomposition is accurate , up to an error of , provided that and where we used the bound for .equivalently , provided that where and are defined in and , respectively .that is , after truncating the sums in we can use the decomposition on all the entries of except for those in the first rows . in practice , we take and to be the smallest integers so that . to avoid employing a truncated version of on the first few rows we compute using with sums in operations ,discard the first entries of , and then compute the discarded entries using direct summation in operations . in the same way as section [ sec :rearrange ] , we can rearrange the computation so that only , instead of , fourier bessel evaluations are required .each fourier bessel evaluation requires schlmilch evaluations .hence , a discrete hankel transform requires schlmilch evaluations .since , our algorithm for the dht requires operations .the exposition in this section may suggest that the same algorithmic ideas continue to work for . 
while this is correct , the computational performance rapidly deteriorates as increases .this is because the vector of bessel roots and the vector of ratios are distributed less like an equally - spaced grid for .this significantly increases the number of terms required in the taylor expansion and neumann addition formula . for large ,different algorithmic ideas should be employed , for example , one should take into account that can be very small when .it is expected that the matched asymptotic ( transition ) region of , when , will be the most difficult to exploit when deriving a fast algorithm .we have implemented our algorithm for computing the dht in matlab .it is publicly available from .figure [ fig : hankelresults1 ] ( left ) shows the execution time for computing the dht of order with direct summation and our algorithm with working tolerances of . our algorithm is faster than direct summation for when , for when , and for when .figure [ fig : hankelresults1 ] ( right ) verifies that the selected algorithmic parameters do achieve the desired error bound of , where is the vector of the computed values of and is the vector of values computed in quadruple precision using direct summation . in this experimentwe take the entries of to be realizations of independent gaussians ( with mean and variance ) so that . dhttimings ( 60,55 ) ( 46,30 ) ( 50,-1 ) ( 0,18 ) dhterrors ( 45,30 ) ( 45,47 ) ( 45,62 ) ( 50,-1 ) ( -5,22 ) the dht is , up to a diagonal scaling , its own inverse as .in particular , by we have , as , where is a diagonal matrix with entries , and is the identity matrix .we can use the relation to verify our algorithm for the dht .figure [ fig : hankelresults2 ] shows the error , where is a column vector with entries drawn from independent gaussians and for small , will be large because only holds as .however , even for large , we observe that grows like .this can be explained by tracking the accumulation of numerical errors .since the entries of are drawn from independent gaussians and we have and hence , we expect . by the same reasoningwe expect .finally , this gets multiplied by in so we expect . if in practice we desire , then it is sufficient to have .dhtinverse ( 52,32 ) ( 23,36 ) ( 50,-1.5 ) ( -4,27 )an asymptotic expansion of bessel functions for large arguments is carefully employed to derive a numerically stable algorithm for evaluating schlmilch and fourier bessel expansions as well as computing the discrete hankel transform .all algorithmic parameters are selected based on error bounds to achieve a near - optimal computational cost for any accuracy goal . for a working accuracy of ,the algorithm is faster than direct summation for evaluating schlmilch expansions when , fourier bessel expansions when , and the discrete hankel transform when .i thank the authors and editors of the digital library of mathematical functions ( dlmf ) . without itthis paper would have taken longer .i also thank the referees for their time and consideration .6 , _ a fast algorithm for the evaluation of legendre expansions _ , siam j. sci ., 12 ( 1991 ) , pp .158179 . ,_ algorithm 644 : a portable package for bessel functions of a complex argument and nonnegative order _ , acm trans . math ., 12 ( 1986 ) , pp .265273 . , _the fast hankel transform as a tool in the solution of the time dependent schrdinger equation _ , j. comput .phys . , 59 ( 1985 ) , pp .136151 . , _ dual algorithms for fast calculation of the fourier - bessel transform _ , ieee trans .. , 29 ( 1981 ) , pp .963972 . 
, _ algorithms to numerically evaluate the hankel transform _ , computers and mathematics with applications , 26 ( 1993 ) , pp. 112 . , _ fast computation of zero order hankel transform _, j. franklin institute , 316 ( 1983 ) , pp .317326 . , _ computation of quasi - discrete hankel transforms of integer order for propagating optical wave fields _ , j. opt .a , 21 ( 2004 ) , pp .5358 . , _ a fast , simple , and stable chebyshev legendre transform using an asymptotic formula _, siam j. sci .comput . , 36 ( 2014 ) ,, _ an algorithm for the convolution of legendre series _ , siam j. sci . comput . , 36 ( 2014 ) , a1207a1220 ., _ a fast fft - based discrete legendre transform _ , submitted ., _ die cylinderfunctionen erster und zweiter art _ann . , 1 ( 1869 ) , pp .467501 . , _bounds for zeros of some special functions _ , proc .soc . , 25 .1970 , pp . 7274 ., _ a hankel transform approach to tomographic image reconstruction _, ieee transactions on medical imaging , 7 ( 1988 ) , pp .5972 . , _ an algorithm for the fast hankel transform _ , yale technical report , august 1995 . , _ an improved method for computing a discrete hankel transform _ ,computer physics communications , 43 ( 1987 ) , pp .181202 . , _ an improvement on orszag s fast algorithm for legendre polynomial transform _ , trans .processing soc .japan , 40 ( 1999 ) , pp ., _ a fourier bessel transform method for efficiently calculating the magnetic field of solenoids _, j. comput .phys . , 37 ( 1980 ) , pp .4155 . , _ nist handbook of mathematical functions _ , cambridge university press , 2010 ., _ an algorithm for the rapid evaluation of special function transforms _ , appl .comput . harm ., 28 ( 2010 ) , pp .203226 . , _approximation for bessel functions and their application in the computation of hankel transforms _ , comput .8 ( 1982 ) , pp . 305311 . , _transforms and applications handbook , third edition _ , crc press , 2010 . , _ fast algorithms for spherical harmonic expansions _ , siam j. sci ., 27 ( 2006 ) , pp .19031928 . , _ note sur la variation des constantes arbitraires dune intgrale dfinie _ , journal fr die reine und angewandte mathematik , 33 ( 1846 ) , pp .268280 . ,_ numerical evaluation of spherical bessel transforms via fast fourier transforms _ , j. comput .phys . , 100 ( 1992 ) , pp .294296 . , fastasytransforms , http://github.com/ajt60gaibb/fastasytransforms . , _ fast algorithms for spherical harmonic expansions , ii _ , j. comput .phys . , 227 ( 2008 ) , pp .42604279 . , http://functions.wolfram.com/03.01.06.0037.01 . | a fast and numerically stable algorithm is described for computing the discrete hankel transform of order as well as evaluating schlmilch and fourier bessel expansions in operations . the algorithm is based on an asymptotic expansion for bessel functions of large arguments , the fast fourier transform , and the neumann addition formula . all the algorithmic parameters are selected from error bounds to achieve a near - optimal computational cost for any accuracy goal . numerical results demonstrate the efficiency of the resulting algorithm . hankel transform , bessel functions , asymptotic expansions , fast fourier transform 65r10 , 33c10 |
before we come to identify the origin of the appearance of quantum structures in nature , we want to expose a macroscopic machine that produces quantum structure .we shall make use intensively of the internal functioning of this machine to demonstrate our general explanation .the machine that we consider consists of a physical entity that is a point particle that can move on the surface of a sphere , denoted , with center and radius .the unit - vector where the particle is located on represents the state of the particle ( see fig .for each point , we introduce the following experiment .we consider the diametrically opposite point , and install a piece of elastic of length 2 , such that it is fixed with one of its end - points in and the other end - point in .once the elastic is installed , the particle falls from its original place orthogonally onto the elastic , and sticks on it ( fig 1,b ) . 0.7 cm 2.2 cm _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ = 9 pt fig . 1 : a representation of the quantum machine . in ( a ) the physical entity in state in the point , and the elastic corresponding to the experiment is installed between the two diametrically opposed points and . in (b ) the particle falls orthogonally onto the elastic and sticks to it . in ( c )the elastic breaks and the particle is pulled towards the point , such that ( d ) it arrives at the point , and the experiment gets the outcome . 
__ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ then the elastic breaks and the particle , attached to one of the two pieces of the elastic ( fig .1,c ) , moves to one of the two end - points or ( fig . 1,d ) . depending on whether the particle arrives in ( as in fig .1 ) or in , we give the outcome or to . in figure 2we represent the experimental process connected to in the plane where it takes place , and we can easily calculate the probabilities corresponding to the two possible outcomes . in order to do so we remark that the particle arrives in when the elastic breaks in a point of the interval , and arrives in when it breaks in a point of the interval ( see fig .we make the hypothesis that the elastic breaks uniformly , which means that the probability that the particle , being in state , arrives in , is given by the length of ( which is ) divided by the length of the total elastic ( which is 2 ) .the probability that the particle in state arrives in is the length of ( which is ) divided by the length of the total elastic .if we denote these probabilities respectively by and we have : the probabilities that we find in this way are exactly the quantum probabilities for the spin measurement of a spin 1/2 quantum entity , which means that we can describe this macroscopic machine by the ordinary quantum formalism with a two dimensional complex hilbert space as the carrier for the set of states of the entity . 
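Because the breaking point is uniformly distributed on the elastic, the outcome statistics of the machine are easy to reproduce numerically. The following short Python sketch (our own illustration; the angle values are arbitrary) simulates the machine and compares the estimated probability of the outcome o1 with the spin-1/2 prediction cos²(θ/2):

```python
import numpy as np

# Monte Carlo check of the quantum-machine probabilities.
# The particle sits at polar angle theta from the measurement direction u;
# its orthogonal projection onto the elastic (the diameter from -u to u,
# coordinates -1..+1) lands at a = cos(theta).  The elastic breaks at a
# uniformly distributed point b; if b lies in [-1, a] the particle is
# pulled towards u (outcome o1), otherwise towards -u (outcome o2).
rng = np.random.default_rng(0)

def outcome_o1_probability(theta, n_trials=200_000):
    a = np.cos(theta)
    breaks = rng.uniform(-1.0, 1.0, size=n_trials)
    return np.mean(breaks <= a)

for theta in (0.25 * np.pi, 0.5 * np.pi, 0.75 * np.pi):
    est = outcome_o1_probability(theta)
    exact = np.cos(theta / 2.0) ** 2          # quantum spin-1/2 prediction
    print(f"theta = {theta:.3f}: simulated {est:.4f}, cos^2(theta/2) = {exact:.4f}")
```

Here θ is the angle between the state v and the measurement direction u, so the projection of the particle onto the elastic lands at cos θ, exactly as in fig. 2 below.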
fig . 2 : a representation of the experimental process in the plane where it takes place . the elastic of length 2 , corresponding to the experiment , is installed between and . the probability , , that the particle ends up in point is given by the length of the piece of elastic divided by the length of the total elastic . the probability , , that the particle ends up in point is given by the length of the piece of elastic divided by the length of the total elastic .
already from the advent of quantum mechanics it was known that the structure of quantum theory is very different from the structure of the earlier existing classical theories . this structural difference has been expressed and studied in different mathematical categories , and we mention here some
of the most important ones : ( 1 ) if one considers the collection of properties ( experimental propositions ) of a physical entity , then it has the structure of a boolean lattice for the case of a classical entity , while it is non - boolean for the case of a quantum entity ( birkhoff and von neumann 1936 , jauch 1968 , piron 1976 ) , ( 2 ) for the probability model , it can be shown that for a classical entity it is kolmogorovian , while for a quantum entity it is not ( foulis and randall 1972 , randall and foulis 1979 , 1983 , gudder 1988 , accardi 1982 , pitovski 1989 ) , ( 3 ) if the collection of observables is considered , a classical entity gives rise to a commutative algebra , while a quantum entity does nt ( segal 1947 , emch 1984 ) .the presence of these deep structural differences between classical theories and quantum theory has contributed strongly to the already earlier existing belief that classical theories describe the ordinary understandable part of reality , while quantum theory confronts us with a part of reality ( the micro - world ) that is impossible to understand . therefore still now there is the strong paradigm that _ quantum mechanics can not be understood_. the example of our macroscopic machine with a quantum structure challenges this paradigm , because obviously the functioning of this machine can be understood .the aim of this paper is to show that the main part of the quantum structures can indeed be explained in this way and the reason why they appear in nature can be identified . in this paperwe shall analyze this explanation , that we have named the hidden measurement approach , in the category of the probability models .we refer to ( aerts and van bogaert 1992 , aerts , durt and van bogaert 1993 , aerts , durt , grib , van bogaert and zapatrin 1993 , aerts 1994 , aerts and durt 1994a ) for an analysis of this explanation in other categories .the original development of probability theory was aiming at a formalization of the description of a probability that appears as the consequence of _ a lack of knowledge_. the probability structure appearing in situations of lack of knowledge was axiomatized by kolmogorov and such a probability model is now called kolmogorovian . since the quantum probability model is not kolmogorovian ,it has now generally been accepted that the quantum probabilities are _ not _ a description of a _ lack of knowledge_. sometimes this conclusion is formulated by stating that the quantum probabilities are _ ontological _ probabilities , as if they would be present in reality itself . in the hidden measurement approach we show that the quantum probabilities can be explained as being due to a _ lack of knowledge _ , and we prove that what distinguishes quantum probabilities from classical kolmogorovian probabilities is the _ nature of this lack of knowledge_. let us go back to the quantum machine to illustrate what we mean . if we consider again our quantum machine ( fig . 1 and fig .2 ) , and look for the origin of the probabilities as they appear in this example , we can remark that the probability is entirely due to a _ lack of knowledge _ about the measurement process .namely the lack of knowledge of where exactly the elastic breaks during a measurement .more specifically , we can identify two main aspects of the experiment as it appears in the quantum machine . 1 .the experiment effects a real change on the state of the entity .indeed , the state changes into one of the states or by the experiment .2 . 
the probabilities appearing are due to a _ lack of knowledge _ about a deeper reality of the individual measurement process itself , namely where the elastic breaks . these two effects give rise to quantum - like structures .the lack of knowledge about a deeper reality of the individual measurement process we have referred to as the presence of hidden measurements that operate deterministically in this deeper reality ( aerts 1986 , 1987 , 1991 ) , and that is the origin of the name that we gave to this approach .a consequence of this explanation is that quantum structures shall turn out to be present in many other regions of reality where the two mentioned effects appear .we think of the many situations in the human sciences , where generally the measurement disturbs profoundly the entity under study , and where almost always a lack of knowledge about the deeper reality of what is going on during this measurement process exists .in the final part of this paper we give some examples of quantum structures appearing in such situations .if the quantum structure can be explained by the presence of a lack of knowledge on the measurement process , we can go a step further , and wonder what types of structure arise when we consider the original models , with a lack of knowledge on the measurement process , and introduce a variation of the magnitude of this lack of knowledge .we have studied the quantum machine under varying lack of knowledge , parameterizing this variation by a number ] and ] on the set of states , such that is a -algebra of measurable subsets of , and for we have that is the probability that the state of the entity is in the subset .we have ( a ) , and ( b ) for sets such that for . what we call states are often called pure states , and what we call situations of lack of knowledge on the states are often called mixed states. even when the entity is in a state , and an experiment is performed , probability , defined as the limit of the relative frequency connected to an outcome , appears . for a fixed state ,the probability that an experiment gives an outcome in a subset , denoted by , can be described as a probability measure on the outcome set of the experiment .hence a map ] , where is the set of probability measures on .as we have done in ( aerts 1994 ) we introduce for an experiment the eigenstate sets , as maps , where for we have : we also introduce the possibility - state sets , as maps , where for we have : clearly we always have : [ theorem1 ] we consider an entity in a situation with lack of knowledge about the states described by the probability measure on the state space .for an arbitrary experiment and set of outcomes we have : proof : _ as defined we have that is the probability that the state of the entity is in the subset .if the state is in , the experiment gives with certainty an outcome in , and therefore .as defined we have that is the probability that the state of the entity is in .if the state is in the experiment has a possible outcome in , and therefore .we shall show in the next section that for a classical probability model , the two inequalities become equalities .but first we want to illustrate all these concepts on the quantum machine .it is easy to see how these concepts are defined for the quantum machine ( and also for the -model ) . 
for a considered experiment ( or in the -model ), we have an outcome set .the set of states is .the situations with lack of knowledge about the states are described by probability measures on the surface of the sphere .we have also described for an arbitrary ( see ( [ equation3 ] ) and ( [ equation4 ] ) ) . for the eigenstate sets and the possibility- state sets we have ( see fig .4 . ) : 0.7 cm 2.2 cm _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ = 9 pt fig . 4 :we have represented the eigenstate sets and .if the state of the entity ( the position of the particle ) is in [ or in , then the experiment gives with certainty the outcome [ or with certainty the outcome ] .we also have represented the possibility state sets , [ , which is the collection of states where the entity gives a possible outcome [ ] . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we can consider the following specific cases : for = 1 we always have = 0 , and the -model reduces to the original quantum machine that we introduced in section 1 .it is a model for the spin of a spin 1/2 quantum entity .the transition probabilities are the same as the ones related to the outcomes of a stern - gerlach spin experiment on a spin 1/2 quantum particle , of which the quantum - spin - state in direction , denoted by , and the experiment corresponding to the spin experiment in direction , is described respectively by the vector and the self adjoint operator of a two - dimensional complex hilbert space . for the eigenstate sets and possibility - state sets we find : suppose that we consider a situation with lack of knowledge about the state , described by a uniform probability distribution on the sphere , which corresponds to a random distribution of the point on the sphere. 
then we can easily calculate the following probabilities : on the other hand , we have : which shows that the inequalities of theorem 1 ( see ( 8) ) are very strong in this quantum case .the classical situation is the situation without fluctuations .if = 0 , then can take any value in the interval ] .it is defined for an arbitrary subset as follows : now that we have introduced this concept of ` conditioning ' on an experiment , we can introduce the general concept of conditional probability .given a situation of lack of knowledge on the states of an entity , described by the probability measure , and given two experiments and , then we want to consider the conditional probability .this is the probability that the experiment makes occur an outcome in the set , when the situation is conditioned on the set for the experiment .the conditional probability is a map ] ,there are also given the two experiments and .0.7 cm 3.2 cm _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ = 9 pt fig . 6 : an illustration of the situation corresponding to the conditional probability for the - model .the lack of knowledge about the state of the particle is described by a uniform probability distribution on the sphere . 
for a fixed and two parameters and in the the interval ] , such that the conditional probabilities , and can be written under the appropriate form .this means that there exists elements such that : we also have and .using the fact that is a probability measure , and ( [ equation30 ] ) and ( [ equation31 ] ) , and substituting the values of the conditional probabilities , we get : if we subtract ( [ equation33 ] ) from ( [ equation32 ] ) we find : which implies that on the other hand from ( [ equation34 ] ) follows the inequalities ( [ equation42 ] ) and ( [ equation43 ] ) deliver us the contradiction that we were looking for .the conclusion is that for these values of the conditional probabilities there does not exists a kolmogorovian probability model satisfying the bayes formula .also for the proof of the non - existence of a hilbertian model we proceed ex absurdum .if there exists a two dimensional complex hilbert space model , where the conditional probabilities are described by the transition probabilities , we can find three orthonormal basis , , such that : this means that there exists five angles and such that : if is an orthonormal basis , we have , and hence and also the complex conjugate if we multiply these last equations term by term we find : this we can write as but then we must have if we fill in the values and we find : from this contradiction we can conclude that there does not exist a two dimensional hilbert space model , such that the conditional probabilities can be described by transition probabilities in this hilbert space .this theorem shows that we really have identified a new region of probabilistic structure in this intermediate domain .in ( aerts d. and aerts s. 1994 b ) we show that for any value of different from , the probability structure of the -model is non kolmogorovian ( not satisfying bayes formula for the conditional probabilities ) .we also show that there is a domain of where a hilbert space model can be found , but another domain where this is not the case .we come now to the last section of this paper , where we like to give a sketch of how these non - kolmogorovian probabilities appear in other regions of nature .as follows from the foregoing analysis , non - classical experiments , giving rise to a non - classical structure , are characterized by the presence of non - predetermined outcomes .this makes it rather easy to recognize the non - classical aspects of experiments in other regions of reality .let us consider the situation of a decision process developing in the mind of a human being , and we refer to ( aerts d. and aerts .s. 1994 a , b ) for a more detailed description .hence our entity is a person , its states being the possible states of this person .experiments are questions that can be asked to the person , and on which she or he has to respond with yes or no. the typical situation of an opinion poll can be thought of as a concrete example .let us consider three different questions : 1 . : are you in favor of the use of nuclear energy ? 2 . : do you think it would be a good idea to legalize soft - drugs ? 3 . : do you think capitalism is better than social - democracy ?we have chosen such type of questions that many persons shall not have predetermined opinions about them .since the person has to respond with yes or no , she or he , without opinion before the questioning , shall form her or his opinion during this process of questioning .we can use the - model to represent this situation. 
fig . 7 : a representation of the question by means of the -model . we have indicated the three regions corresponding to predetermined answer yes , without predetermined answer , and predetermined answer no .
to simplify the situation , but without touching the essence , we make the following assumptions about the probabilities that are involved . we suppose that in all cases 50% of the persons have answered the question with yes , but only 15% of the persons had a predetermined opinion of yes ( and , by symmetry , 15% a predetermined opinion of no ) . this means that 70% of the persons formed their answer during the process of questioning . for simplicity we make the same assumptions for the two other questions . we can represent this situation in the -model as shown in figure 7 . we also make some assumptions about the way in which the different opinions related to the three questions influence each other . we can represent an example of a possible interaction by means of the -model ( figure 9 ) . one can see how a person can be a strong proponent for the use of nuclear energy , while having no predetermined opinion about the legalization of soft drugs ( area 1 in figure 9 ) . area ( 4 ) corresponds to a sample of persons that have predetermined opinion in favor of legalization of soft drugs and in favor of capitalism .
for area ( 10 ) we have persons that have predetermined opinion against the legalization of soft drugs and against capitalism . all the 13 areas of figure 9 can be described in such a simple way .
fig . 8 : a representation of the three questions , , and by means of the -model . we have numbered the 13 different regions . for example : ( 1 ) corresponds to a sample of persons that have predetermined opinion in favor of nuclear energy , but have no predetermined opinion for both other questions , ( 4 ) corresponds to a sample of persons that have predetermined opinion in favor of legalization of soft drugs and in favor of capitalism , ( 10 ) corresponds to a sample of persons that have predetermined opinion against the legalization of soft drugs and against capitalism , ( 13 ) corresponds to the sample of persons that have no predetermined opinion about any of the three questions , etc ...
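As a toy illustration of answers that are partly created during the questioning, the following sketch (our own; it uses the fractions assumed above and does not model the coupling between the three questions shown in fig. 8) simulates a single question and then asks it again:

```python
import numpy as np

# Toy simulation of one opinion-poll question with non-predetermined answers.
# Assumed fractions follow the text above: 15% predetermined "yes", 15%
# predetermined "no", and 70% of the persons only form an opinion during the
# questioning (here: yes or no with equal probability).
rng = np.random.default_rng(1)
n_persons = 100_000
state = rng.choice(["yes", "no", "undecided"], size=n_persons, p=[0.15, 0.15, 0.70])

def ask(opinions):
    answers = opinions.copy()
    undecided = answers == "undecided"
    answers[undecided] = rng.choice(["yes", "no"], size=int(undecided.sum()))
    return answers                      # the created answer is the new opinion

first = ask(state)
second = ask(first)                     # re-asking after the opinion was formed
print("fraction 'yes' on first asking  :", np.mean(first == "yes"))
print("answers reproduced on re-asking :", np.mean(first == second))
```

The 'yes' fraction comes out near 50%, while re-asking reproduces the first answer for everyone, reflecting the fact that the questioning itself changed the state of the persons without a predetermined opinion.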
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ deliberately we have chosen the different fractions of people in such a way that the conditional probabilities fit into the -model for a value of .this means that we can apply theorem 4 , and conclude that the collection of conditional probabilities corresponding to these questions , and can neither be fitted into a kolmogorovian probability model nor into a quantum probability model .we are developing now in brussels a statistics for such new situations , that we have called interactive statistics. by means of this statistics it should be possible to make models for situations where part of the properties to be tested are created during the process of testing .the further development of an intermediate ( between classical and quantum ) probability theory and an interactive statistics could be very fruitful as well for physics as for other sciences .we work now at the construction of a general theory , where also the intermediate structures can be incoorporated ( aerts 1986 , aerts 1987 , aerts , durt and van bogaert 1993 , aerts 1994 , aerts and durt 1994a , 1994b , coecke 1994a , 1994b , 1994c ) .this theory can probably be used to describe the region of reality between microscopic and macroscopic , often referred to as the mesoscopic region .actually , physicists use a very complicated heuristic mixture of quantum and classical theories to construct models for mesoscopic entities .there is however no consistent theory , and a general intermediate theory could perhaps fill this gap .we try to find examples of simple physical phenomena in the mesoscopic region that could eventually be modeled by an -like model ( aerts and durt 1994 b ) .if we succeed in building this intermediate theory , we would not only have a new theory for the mesoscopic region , but the existence of such a theory would also shed light on old problems of quantum mechanics ( the quantum classical relation , the classical limit , the measurement problem , etc ... ) . 
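To make the continuous quantum-to-classical transition concrete, here is a small sketch of the outcome probabilities in the intermediate machine. It assumes the usual construction in which only a segment of length 2ε around the centre of the elastic is uniformly breakable while the outer parts cannot break; the explicit formulas are not reproduced in the text above, so this parametrization is our reading of the model:

```python
import numpy as np

# Outcome probabilities of the epsilon-machine under the assumption that only
# a central segment of length 2*eps of the elastic can break, uniformly.
# eps = 1 recovers the quantum machine, eps -> 0 the classical, deterministic one.
def prob_o1(theta, eps):
    a = np.cos(theta)                 # projection of the state on the u-axis
    if eps == 0.0:                    # classical limit: outcome fixed by the sign
        return 1.0 if a >= 0 else 0.0
    return float(np.clip((a + eps) / (2.0 * eps), 0.0, 1.0))

for eps in (1.0, 0.5, 0.1, 0.0):
    row = [prob_o1(t, eps) for t in np.linspace(0.0, np.pi, 5)]
    print(f"eps = {eps:3.1f}:", np.round(row, 3))
```

For ε = 1 this reproduces the quantum probabilities cos²(θ/2) of the machine described earlier, while for ε → 0 the outcome becomes deterministic and is fixed by the sign of cos θ.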
in till now we have only been developing the kinematics of this intermediate theory ( aerts 1994 and aerts and durt 1994 a ) , but , once the kinematics is fully developed , the way to construct a dynamics for the intermediate region is straightforward. we can study the imprimitivity system related to the galilei group and look for representations of this galilei group in the group of automorphisms of the kinematical structure of the intermediate theory .if we can derive an evolution equation in this way , it should continuously transform with varying from the schrdinger equation ( ) , to the hamilton equations ( ) .as we have shown in the last section of this paper , the development of an interactive statistics could be of great importance for the human sciences , where often non - predetermined outcome situations appear .it could lead to a new methodology for these sciences .actually one is aware of the problem of the interaction between subject and object , but it is generally thought that this problems can not be taken into account in the theory. we also want to remark that the hidden measurement approach defines a new quantization procedure . starting from a classical mechanical entity , and adding lack of knowledge ( or fluctuations) on the measurement process, out of the classical entity appears a quantum - like entity .another problem that we are investigating at this moment is an attempt to describe quantum chaos by means of this new quantization procedure .it can be shown that the sensitive dependence on the initial conditions that can be found in the -model for the classical situation in the set of unstable equilibrium states disappears when the fluctuations on the measurement process increase .this could be the explanation for the absence of quantum chaos .accardi , l. ( 1982 ) , _ on the statistical meaning of the complex numbers in quantum mechanics , _nuovo cimento , * 34 * , 161 .aerts , d. ( 1987 ) , _ the origin of the non- classical character of the quantum probability model , _ in _ information , complexity , and control in quantum physics _ , a. blanquiere , et al ., eds . , springer - verlag .aerts , d. ( 1991 ) , _ a macroscopic classical laboratory situation with only macroscopic classical entities giving rise to a quantum mechanical probability model , _ in _ quantum probability and related topics , vol .vi , _ l. accardi , ed . ,world scientific , singapore .aerts , d. , durt , t. and van bogaert , b. ( 1992 ) , _ a physical example of quantum fuzzy sets , and the classical limit , _ in _ proceedings of the first international conference on fuzzy sets and their applications , tatra montains math .publ . _ * 1 * , 5. aerts , d. , durt , t. and van bogaert , b. ( 1993 ) , _ quantum probability , the classical limit and non - locality , _ in _ symposium on the foundations of modern physics _ , t. hyvonen , ed . , world scientific , singapore .aerts , d and durt , t. ( 1994 b ) , _ quantum , classical and intermediate ; a measurement model , _ , in the proceedings of the _ symposium on the foundations of modern physics , _ helsinki , 1994 , ed .laurikainen , c. montonen , k. sunnar borg , editions frontieres , gives sur yvettes , france . | we explain the quantum structure as due to the presence of two effects , ( a ) a real change of state of the entity under influence of the measurement and , ( b ) a lack of knowledge about a deeper deterministic reality of the measurement process . 
we present a quantum machine , where we can illustrate in a simple way how the quantum structure arises as a consequence of the two mentioned effects . we introduce a parameter that measures the size of the lack of knowledge on the measurement process , and by varying this parameter , we describe a continuous evolution from a quantum structure ( maximal lack of knowledge ) to a classical structure ( zero lack of knowledge ) . we show that for intermediate values of we find a new type of structure , that is neither quantum nor classical . we apply the model that we have introduced to situations of lack of knowledge about the measurement process appearing in other regions of reality . more specifically we investigate the quantum - like structures that appear in the situation of psychological decision processes , where the subject is influenced during the testing , and forms some of his opinions during the testing process . our conclusion is that in the light of this explanation , the quantum probabilities are epistemic and not ontological , which means that quantum mechanics is compatible with a determinism of the whole . fund and clea , brussels free university , krijgskundestraat 33 , 1160 brussels , belgium , e - mail : diraerts.ac.be |
major cellular activities are effectively governed by multiple protein - dna interactions .the starting point of these interactions is a process of protein searching and recognizing for the specific binding sites on dna .this is a critically important step because it allows a genetic information contained in dna to be effectively transferred by initiating various biological processes . in recent years, the fundamental processes associated with the protein search for targets on dna have been studied extensively using a wide variety of experimental and theoretical methods .although a significant progress in our understanding of the protein search phenomena has been achieved , the full description of the mechanisms remains a controversial and highly - debated research topic .experimental investigations of the protein search phenomena revealed that many proteins find their targets on dna very fast , and the corresponding association rates might exceed the estimates from 3d diffusion limits .these surprising phenomena are known as a _ facilitated diffusion _ in the protein search field .more recent single - molecule experiments , which can directly visualize the dynamics of individual molecules , also suggest that during the search proteins move not only through the bulk solution via 3d diffusion but they also bind _ non - specifically _ to dna where they hop in 1d fashion . several theoretical approaches that incorporate the coupling between 3d diffusion and 1d sliding in the protein search have been proposed , but they had a variable success in explaining all experimental observations .one of the most interesting problems related to the protein search on dna is the effect of multiple targets .the question is how long will it take for the proteins to find _ any _ specific binding site from several targets present on dna .naively , one could argue that in this case the search time should be accelerated proportionally to the number of targets , i.e. , the association reaction rate should be proportional to the concentration of specific binding sites .however , this effectively mean - field view ignores several important observations .first , it is clear that the search time for several targets lying very close to each other generally should not be the same as the search time for the same number of targets which are spatially dispersed .second , the experimentally supported complex 3d+1d search mechanism suggests that varying spatial distributions of the specific binding sites should also affect the search dynamics .thus , it seems that the simple mean - field arguments should not be valid for all conditions .surprisingly , this very important problem was addressed only in one recent work .hammar et al . using high - quality single - molecule measurements in the living cells investigated the dynamics of finding the specific sites for _ lac _ repressor proteins on dna with two targets .it was found that the association rates increase as a function of the distance between targets .an approximate theoretical model for the protein search with two targets was proposed. however , this theoretical approach has several problems .it was presented for infinitely long dna chains using a continuum approximation . 
at the same time , it was shown recently that the continuum approach might lead to serious errors and artifacts in the description of protein search dynamics .in addition , this theory predicted that the acceleration due to the presence of two targets in comparison with the case of only one target should disappear in the limit of very large sliding lengths .this is clearly a nonphysical result . in this limit, the protein spends most of the searching time on dna and it is faster to find any of two targets than one specific binding site . in this article ,we present a comprehensive theoretical method of analyzing the role of multiple targets in the protein search on dna . our approach is based on a discrete - state stochastic framework that was recently developed by one of us for the search with one specific binding site .it takes into account most relevant biochemical and biophysical processes , and it allows us to obtain fully analytical solutions for all dynamic properties at all conditions .one of the main results of the discrete - state stochastic method was a construction of dynamic phase diagram .three possible dynamic search regimes were identified .when the protein sliding length was larger than the dna chain length , the search followed simple random - walk dynamics with a quadratic scaling of the search time on the dna length . for the sliding length smaller than the dna length but larger than the the size of the specific binding site , the search dynamic followed a linear scalingwhen the sliding length was smaller than the target size , the search was dominated by nonspecific bindings and unbindings without the sliding along dna . in this paper , we extend this method to the case of several specific binding sites at arbitrary spatial positions .it allows us to explicitly describe the role of multiple targets and their spatial distributions in the protein search .our theoretical calculations agree with available experimental observations , and we also test them in monte carlo computer simulations .the original discrete - state stochastic approach can be generalized for any number of the specific binding sites at arbitrary positions along the dna chain . butto explain the main features of our theoretical method , we analyze specifically a simpler model with only two targets as shown in fig .1 . a single dna molecule with binding sites and a single protein molecule are considered .the analysis can be easily extended for any concentration of proteins and dna .two of the bindings sites and ( and ) are targets for the protein search ( see fig .the protein starts from the bulk solution that we label as a state . since 3d diffusionis usually much faster than other processes in the system , we assume that the protein can access with equal probability any site on the dna chain ( with the corresponding total binding rate ) . 
while being on dna , the protein can move with a diffusion rate along the chain with equal probability in both directions .the protein molecule can also dissociate from dna with a rate to the bulk solution ( fig .the search process ends when the protein reaches for the first time _ any _ of two targets .the main idea of our approach is to utilize first - passage processes to describe the complex dynamics of the protein search on dna .one can introduce a function defined as a probability to reach any target at time for the first time if initially ( at ) the protein molecule starts at the state ( ) .these first - passage probabilities evolve with time as described by a set of the backward master equations , +k_{off } f_0(t)-(2u+k_{off } ) f_n(t),\ ] ] for . at dna ends ( and ) the dynamics is slightly different , and in addition , in the bulk solution we have furthermore , the initial conditions require that the physical meaning of this statement is that if we start at one of two targets the search process is finished immediately .it is convenient to solve eqs .( [ me1 ] ) , ( [ me2 ] ) , ( [ me3 ] ) and ( [ me4 ] ) by employing laplace transformations of the first - passage probability functions , . the details of calculations are given in the appendix .it is important to note here that the explicit expressions for the first - passage probability distribution functions in the laplace form provide us with a full dynamic description of the protein search .for example , the mean first - passage time to reach any of the target sites if the original position of the protein was in the solution ( ) , which we also associate with the search time , can be directly calculated from as shown in the appendix , the average search time is given by }{k_{on } k_{off } s_{i}(0)},\ ] ] where is a new auxiliary function with a subscript specifying the number of targets ( for the system with two targets ) . for this functionwe have }{(1-y)(1+y^{2 m_1 -1})(1+y^{1 + 2(l - m_2)})(1+y^{m_2-m_1})},\ ] ] with it is important that for , as expected , our results reduce to expressions for the protein search on dna with only one target .similar procedures can be used to estimate all other dynamic properties for the system with two targets .we can extend this approach for any number of targets and for any spatial distribution of binding sites .this is discussed in detail in the appendix .surprisingly , the expression for the search times are the _ same _ in all cases but with different functions that depend on the number of specific binding sites and their spatial distributions. analytical results for for the protein search on dna with three or four targets , as well as a general procedure for arbitrary number of specific binding sites , are also presented in the appendix .because our theoretical method provides explicit formulas for all relevant quantities , it allows us to fully explore many aspects of the protein search mechanisms . the first problem that we can addressis related to the role of the spatial distribution of targets on the search dynamics . 
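The mean search time can also be obtained numerically by solving the backward equations directly, which is convenient for experimenting with target positions. The sketch below (our own illustration; the rate values are placeholders) builds the linear system for the mean first-passage times in the discrete-state model described above, with uniform binding from the bulk, symmetric sliding, unbinding, reflecting ends and absorbing targets:

```python
import numpy as np

# Mean search time T_0 (start in the bulk) for the discrete-state model:
# state 0 is the bulk, states 1..L are DNA sites; the protein binds from the
# bulk to every site with rate k_on / L, slides with rate u to each neighbour,
# unbinds with rate k_off, the chain ends are reflecting and targets absorb.
def mean_search_time(L, targets, k_on=1.0, k_off=1.0, u=100.0):
    n_states = L + 1
    A = np.zeros((n_states, n_states))
    b = np.ones(n_states)
    targets = set(targets)

    # bulk state: leaves with total rate k_on, lands on a uniformly chosen site
    A[0, 0] = k_on
    A[0, 1:] = -k_on / L

    for n in range(1, L + 1):
        if n in targets:               # absorbing target: T_n = 0
            A[n, n] = 1.0
            b[n] = 0.0
            continue
        neighbours = [m for m in (n - 1, n + 1) if 1 <= m <= L]
        A[n, n] = k_off + u * len(neighbours)
        A[n, 0] = -k_off
        for m in neighbours:
            A[n, m] = -u
    return np.linalg.solve(A, b)[0]

L = 1000
print("targets at the two ends     :", mean_search_time(L, [1, L]))
print("targets at L/4 and 3L/4     :", mean_search_time(L, [L // 4, 3 * L // 4]))
print("single target in the middle :", mean_search_time(L, [L // 2]))
```

The same routine can be reused to compute the acceleration discussed below by taking the ratio of the single-target and multi-target search times.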
in other words ,the question is how the search time is influenced by exact positions of all targets along the dna .the results of our calculations for two specific binding sites are presented in fig .the longest search times are found when two targets are at different ends of the molecule , and the distance between them along the dna curve , , is the largest possible and equal to .the search is faster if targets are moved closer to each other and both distributed symmetrically with respect to the middle point of the dna molecule ( fig .2 ) . moving the targets too close ( ) starts to increase the search time again : see fig .2 . for short dna chains, it can be shown that there is an optimal distance between two targets , , that yields the fastest search ( fig .it corresponds to the most optimal positions of the specific sites to be at and .the last result is slightly unexpected since simple symmetry arguments suggest that the fastest search would be observed for the uniform distribution of targets , i.e. when the distance between the specific sites and the distance between the ends and targets are the same , i.e. , for and .this is not observed in fig .2 . to explain this, one can argue that the search on the dna molecule of length with targets can be mapped into the search on dna segments of variable lengths with only one target per each segment . in this case , positioning each target in the middle of the corresponding segment leads to the fastest search dynamics. this suggests that the most optimal distribution of symmetrically distributed targets is a uniform distribution with the distance between two neighboring targets equal to .but then the first and the last targets will be separated from the corresponding ends by a shorter distance .this is exactly what we see in fig . 2 for targets .the reason that distances between the ends and the closest targets deviate from the distances between the targets is the reflecting boundary conditions at the ends that are assumed in our model : see eqs .( [ me2 ] ) and ( [ me3 ] ) . the results presented in fig .2 also illustrate another interesting observation .increasing the length of dna effectively eliminates the minimum in the search time for specific symmetric locations of the targets .essentially , for , which is much closer to realistic conditions in most cases , any two position of the targets inside the dna chain will be optimal and will have the same search time as long as they are not at the ends .we will discuss the reason for this below .one of the main advantages of our method is the ability to explicitly analyze the search dynamics for all ranges of relevant parameters .this allows us to construct a comprehensive dynamic phase diagram that delineates different search regimes .the results are presented in fig . 3 for the systems with different numbers of specific binding sites .the important observation is that general features of the search behavior are independent of the number of targets . more specifically , there are three dynamic phases that depend on the relative values of the length of dna , the average scanning length and the size of the target ( taken to be equal to unity in our model ) . for the random - walk regimeis observed with the search time being quadraticaly proportional to the size of dna . 
in this case , the protein non - specifically binds dna and it does not dissociate until it finds one of the targets .the quadratic scaling is a result of a simple random - walk unbiased diffusion of the protein molecule on dna during the search . for the intermediate sliding regime , ,the protein binds to dna , scans it , unbinds and repeats this cycle at average times ( is number of the targets ) for symmetrically distributed specific sites . for more general distributionsthe number of search cycles is also proportional to .this leads to the linear scaling in the search times .for we have the jumping regime where the protein can bind to any site on dna and dissociate from it , but it can not slide along the dna chain .the search time is again proportional to because on average the protein must check sites .these changes in the dynamic search behavior are illustrated in fig .4 , in which the search times as a function of the dna lengths are presented for different scanning lengths . the slope variation indicates a change in the scaling behavior in the search times from to as the dna length increases for fixed .it is also important to note here that the concept of the most optimal positions of targets is not working for the sliding regime ( ) because the protein during the search frequently unbinds from the dna , losing all memory about what it already scanned .this concept also can not be defined in the jumping regime where the protein does not slide at all . from this point of view , any position of the targets are equivalent .the only two positions that differ from others are the end sites in the sliding regime .this is because they can be reached only via one neighboring site , while all other sites can be reached via two neighboring sites ( see fig .1 ) . the most interesting question for this system is to analyze quantitatively the effect of multiple targets on search dynamics . to quantify thiswe define a new function , , which we call an acceleration , this is a ratio of the search times for the case of one target and for the case of targets .the parameter gives a numerical value of how the presence of multiple targets increases the rate of association to any specific binding site .the results for acceleration are presented in figs .first , we analyze the situation when targets in all cases are in the most optimal symmetric positions , which is shown in fig .5 . for dna with the single target it is in the middle of the chain , while for dna with targets they are distributed uniformly , as we discussed above , with the distance between the internal targets and for boundary targets and dna ends .the acceleration for these conditions depends on the dynamic search regimes , and it ranges from to : see fig .5 . for the case of ( jumping and sliding regimes ) , on averagethe number of search excursions to dna before finding the specific site is equal to , and this leads to a linear behavior in the acceleration ( ) . for ( random - walk regime ) , the search is one - dimensional and the protein must diffuse on average the distance before it can find any of the targets .the quadratic scaling for the simple random walk naturally explains the acceleration in the search in this dynamic regime , .however , the acceleration is also affected by the distance between the targets .if we maintain the most optimal conditions for the dna with one target but vary the distance between multiple targets , while keeping the overall symmetry , the results are shown in fig .6 . 
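the acceleration just defined can be evaluated with the same numerical routine. the sketch below (again reusing mean_search_time() and purely illustrative parameters) compares the two behaviours discussed above: roughly linear growth with the number of targets when the scanning length is smaller than the dna length, and roughly quadratic growth in the random-walk regime.

```python
# acceleration a_k = T(1 target, middle) / T(k targets, near-uniform placement
# with internal spacing L/k and end gaps L/(2k), as argued above).
# reuses mean_search_time(); illustrative parameters for the two regimes.
def optimal_positions(L, k):
    return tuple(int(round(L / (2 * k) + j * L / k)) for j in range(k))

L, u, k_on = 1000, 1e4, 100.0
for k_off, regime in ((1e2, "sliding regime, lambda < L"),
                      (1e-4, "random-walk regime, lambda > L")):
    T1 = mean_search_time(L, (L // 2,), u, k_on, k_off)
    for k in (2, 3):
        Tk = mean_search_time(L, optimal_positions(L, k), u, k_on, k_off)
        print(regime, k, round(T1 / Tk, 2))   # roughly k and roughly k**2
```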
in this case, putting targets too close to each other or moving them apart lowers the acceleration .eventually , there will be no acceleration for these conditions ( ) .but the results are much more interesting if we consider the non - symmetric distributions of targets .surprisingly , the search time for the system with multiple targets can be even slower than for the single target system ! this is shown in fig .7 where can be as low as 1/4 for the two - target system in the random - walk regime , or it can reach the value of 1/2 in the sliding and jumping regimes ( not shown ) . the single target in the most optimal position in the middle of the dna chain can be found much faster in comparison with the case of two targets seating near one of the ends .these observations suggest that the degree of acceleration of the search process due to the presence multiple targets is not always a linear function of the number of specific binding sites .it depends on the nature of the dynamic search phase , the distance between the targets and the spatial distribution of the targets .varying these parameters can lead to larger accelerations as well as to unexpected decelerations .it is a consequence of the complex mechanism of the protein search for targets on dna that combines 3d and 1d motions .this is the main result of our paper .recently , single - molecule experiments measured the facilitated search of _ lac _ repressor proteins on dna with two identical specific binding sites .these experiments show that the association rate increases before reaching the saturation with the increase in the distance between the targets .our theoretical model successfully describes these measurements , as shown in fig .fitting these data , we estimate the 1d diffusion rate for the lac repressors as s , which is consistent with _ in vitro _ measured values .our estimates for the sliding length , bp , and for the non - specific association to dna , s , also agree with experimental observations .it is important to compare our results with predictions from the theoretical model presented in ref .this continuum model was developed assuming that the length of dna is extremely long , .it was shown that the acceleration for the search for the case of two targets can be simply written as where is the distance between targets .the comparison between two theoretical approaches is given in figs .one can see from fig .9 that both models agree for very large dna lengths , , while for shorter dna chains there are significant deviations .the continuum theory predicts that the acceleration is always a linear or sub - linear function of the number of targets , i.e. , .our model shows that the acceleration can have a non - linear dependence on the number of the targets , .more specifically , this can be seen in fig .10 , where the acceleration is presented as a function of the scanning length .the prediction of the continuum theory that for the acceleration always approaches the unity is unphysical .clearly , if we consider , e.g. 
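the deceleration scenario described here is easy to reproduce numerically: in the random-walk regime, a single target in the middle of the chain is found faster than two targets clustered at one end. a short check reusing mean_search_time() (illustrative parameters):

```python
# deceleration: two targets clustered at one end versus a single target in the
# middle, in the random-walk regime (scanning length much larger than L).
# reuses mean_search_time(); illustrative parameters.
L, u, k_on, k_off = 1000, 1e4, 1e3, 1e-6
T_single_middle = mean_search_time(L, (L // 2,), u, k_on, k_off)
T_two_at_end = mean_search_time(L, (1, 2), u, k_on, k_off)
print(T_single_middle / T_two_at_end)   # below 1, approaching ~1/4
```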
, the optimal distribution of targets , then the larger the number of specific binding sites , the shorter the search time .the reason for the failure of the continuum model at this limit is its inability to properly account for all dynamic search regimes .this analysis shows that the continuum model has a very limited application , while our theoretical approach is consistent with all experimental observations and provides a valid physical picture for all conditions .we investigated theoretically the effect of the multiple targets in the protein search for specific binding sites on dna .this was done by extending and generalizing the discrete - state stochastic method , originally developed for single targets , that explicitly takes into account the most important biochemical and biophysical processes . using the first - passage processes ,all dynamic properties of the system can be directly evaluated .it was found that the search dynamics is affected by the spatial distribution of the targets for not very long dna chains .there are optimal positions for specific sites for which the search times are minimal .we argued that this optimal distribution is almost uniform with a correction due to the dna chain ends .we also constructed a dynamic phase diagram for the different search regimes .it was shown that for any number of targets there are always three phases , which are determined by comparing the dna length , the scanning length and the size of the target .furthermore , we investigated the quantitative acceleration in the search due to the presence of multiple targets for various sets of conditions .it was found also that the acceleration is linearly proportional to the number of targets when the scanning length is less than the dna length . for larger scanning lengths ,the acceleration becomes faster with the quadratic dependence on the number of targets .however , changing the distances between the targets generally decreases the effect of acceleration .unexpectedly , we found that varying also the spatial distributions can reverse the behavior : it might take longer to find the specific site in the system with multiple targets in comparison with properly positioned single target .our model allows us to explain this complex behavior using simple physical - chemical arguments .in addition , we applied our theoretical analysis for describing experimental data , and it is shown that the obtained dynamic parameters are consistent with measured experimental quantities . a comparison between our discrete - state theoretical method the continuum model is also presented .we show that the continuum model has a limited range of applicability , and it produces the unphysical behavior at some limiting cases . 
at the same time , our approach is fully consistent at all sets of parameters .our theoretical predictions were also fully validated with monte carlo computer simulations .the presented theoretical model seems to be successful in explaining the complex protein search dynamics in the systems with multiple targets .one of the main advantage of the method is the ability to have a fully analytical description for all dynamic properties in the system .however , one should remember that this approach is still quite oversimplified , and it neglects many realistic features of the protein - dna interactions .for example , dna molecule is assumed to be frozen , different protein conformations that are observed in experiments are not taken into account , and the possibility of correlations between 3d and 1d motions is also not considered .it will be critically important to test the presented theoretical ideas in experiments as well as in more advanced theoretical methods .the work was supported by the welch foundation ( grant c-1559 ) , by the nsf ( grant che-1360979 ) , and by the center for theoretical biological physics sponsored by the nsf ( grant phy-1427654 ) .this appendix includes detailed derivations of the equations from the main text and explicit expressions for functions utilized in our calculations . to solve the backward master equations ( 1)-(5 ) for the system with two targets we use the laplace transformation which leads to +k_{off }\widetilde{f_0}(s);\ ] ] with the condition that we are looking for the solution of these equations in the form , , where and are unknown coefficients that will be determined after the substitution of the solution into eqs . ( [ l1 ] ) , ( [ l2 ] ) , ( [ l3 ] ) and ( [ l4 ] ) .this gives the following expression , +k_{off } \widetilde{f_0}(s).\ ] ] after rearranging , we obtain = ( s + k_{off } ) b - k_{off } \widetilde{f_0}(s).\ ] ] requiring that the right - hand - side of this expression to be equal to zero , yields since the parameter , we can find by solving or there are two roots of this quadratic equation , and with . the next step is to notice that two targets at the positions and divide the dna chain into 3 segments which can be analyzed separately .then the general solution should have the form with the parameter is specified by eq .( [ eqb ] ) and . using the corresponding boundary conditions, it can be shown that for while for we have and for this leads to the following expression for : where the auxiliary function is introduced via the following relation note that eq.([eqf0 ] ) is identical to the corresponding equation for the single - target case , but with the different auxiliary function .finally , we can obtain the explicit expressions for the search times as given in the main text in eq .the explicit form of the search time depends on the auxiliary functions , which can be directly evaluated .for example , for the two targets we have which after simplifications leads to eq .( 8) in the main text .similar analysis can be done for any number of targets with arbitrary positions along the chain .the final expression for the search times is the same in all cases [ given by the eq.(7 ) ] , but with the different auxiliary functions .when the protein molecule searches the dna with three targets ( ) , it can be shown that fig .1 . a general view of the discrete - state stochastic model for the protein search on dna with two targets . 
there are nonspecific and specific binding sites on the dna chain .a protein molecules can diffuse along the dna with the rate , or it might dissociate into the solution with the rate . from the solution , the protein can attach to any position on dna with the total rate .the search process is considered to be completed when the protein binds for the first time to any of two targets at the position or .normalized search times as a function of the normalized distance between two targets .the targets are positioned symmetrically with respect to the center of the dna chain .the parameters used in calculations are the following : s and s .the scanning length is varied by changing .solid curves are theoretical predictions , symbols are from monte carlo computer simulations .dynamic phase diagram for the protein search with multiple targets .search times as a function of the scanning length are shown for systems with one , two or three targets .the parameters used in calculations are the following : bp ; and s .the scanning length is varied by changing .protein search times as a function of dna length for different scanning lengths for the system with two targets .the parameters used in calculations are the following : s .solid curves are theoretical predictions , symbols are from monte carlo computer simulations .the scanning length is varied by changing .5 . acceleration in the search times as a function of the scanning length for the systems with two and three targets .the parameters used in calculations are the following : s .the scanning length is varied by changing .acceleration in the search times as a function of the normalized distance between the targets for the systems with two and three targets .the single target is in the middle of the dna chain .other targets systems are symmetric but not optimal .the parameters used in calculations are the following : s ; s and bp .acceleration in the search times as a function of the normalized distance between the targets for the systems with targets .the single target is in the middle of the dna chain . in the two - target system one of the specific binding sitesis fixed at the end and the position of the second one is varied .the parameters used in calculations are the following : s ; s and .9 . comparison of theoretical predictions for the acceleration as a function of the distance between the specific binding sites for the system with two targets for different dna lengths .targets are distributed symmetrically with respect to the middle of the dna chain .solid curves are discrete - state predictions , dashed curves are from the continuum model from ref .the parameters used in calculations are the following : s ; and s .comparison of theoretical predictions for the acceleration as a function of the scanning length for the system with two targets for different dna lengths .targets are in the most optimal symmetric positions .solid curves are discrete - state predictions , dashed curves are from the continuum model from ref .the parameters used in calculations are the following : s .the scanning length is varied by changing . | protein - dna interactions are crucial for all biological processes . one of the most important fundamental aspects of these interactions is the process of protein searching and recognizing specific binding sites on dna . 
a large number of experimental and theoretical investigations have been devoted to uncovering the molecular description of these phenomena , but many aspects of the mechanisms of protein search for the targets on dna remain not well understood . one of the most intriguing problems is the role of multiple targets in protein search dynamics . using a recently developed theoretical framework we analyze this question in detail . our method is based on a discrete - state stochastic approach that takes into account most relevant physical - chemical processes and leads to fully analytical description of all dynamic properties . specifically , systems with two and three targets have been explicitly investigated . it is found that multiple targets in most cases accelerate the search in comparison with a single target situation . however , the acceleration is not always proportional to the number of targets . surprisingly , there are even situations when it takes longer to find one of the multiple targets in comparison with the single target . it depends on the spatial position of the targets , distances between them , average scanning lengths of protein molecules on dna , and the total dna lengths . physical - chemical explanations of observed results are presented . our predictions are compared with experimental observations as well as with results from a continuum theory for the protein search . extensive monte carlo computer simulations fully support our theoretical calculations . |
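as a companion to the numerical sketches above, the following is a minimal kinetic monte carlo (gillespie-type) simulation of the same discrete-state model, of the kind referred to for validating the analytical results; the implementation details are our own sketch, not the authors' simulation code, and the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_once(L, targets, u, k_on, k_off):
    """One stochastic realisation of the search; returns the first-passage time."""
    targets = set(targets)
    t, site = 0.0, 0                                  # start in the bulk solution
    while True:
        if site == 0:
            t += rng.exponential(1.0 / k_on)          # binding from the bulk
            site = int(rng.integers(1, L + 1))        # uniform landing site
        else:
            total = k_off + (u if site > 1 else 0.0) + (u if site < L else 0.0)
            t += rng.exponential(1.0 / total)
            r = rng.random() * total
            if r < k_off:
                site = 0                              # dissociation
            elif site > 1 and r < k_off + u:
                site -= 1                             # slide left
            else:
                site += 1                             # slide right
        if site in targets:
            return t

samples = [simulate_once(100, (25, 76), 1e2, 10.0, 1.0) for _ in range(1000)]
print(np.mean(samples))   # agrees with mean_search_time(100, (25, 76), 1e2, 10.0, 1.0)
```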
quantum mechanics , after more than 70 years , still poses fundamental problems of understanding .many physicists believe these problems are ` only ' problems of ` physical interpretation ' of the mathematically well defined ` standard formalism ' . in this paperwe will show that this is not necessarily so .we will show that the problem of quantum mechanics connected to the existence of nonproduct states in the situation of the description of the joint entity of two quantum entities may well indicate that a change of the standard formalism is necessary .if we mention the ` standard formalism ' of quantum mechanics we mean the formalism as exposed , for example , in the book of john von neumann ( 1932 ) , and we will refer to it by sqm .often the name ` pure state ' is used to indicate a state represented by a ray of the hilbert space .we will however use it in the physical sense : a ` pure state ' of an entity represents the reality of this entity . as a consequence it is natural to demand that an entity ` exists ' if and only if at any moment it is in one and only one ` pure state ' .let us formulate this as a principle , since it will play a major role in our analysis .[ physprinc01 ] if we consider an entity , then this entity ` exists ' at a certain moment iff it ` is ' in one and only one pure state at that moment .we denote pure states of an entity by means of symbols and the set of all pure states by .we mention that in ( aerts , 1984a , 1999a ) , where aspects of the problem that we investigate in the present paper are analyzed , a ` pure state ' is called a ` state ' . a state represented by a ray of the hilbert spacewill be called a ` ray state ' .we denote rays by symbols , where and we denote the ` ray state ' represented by the ray . with each ray , ,corresponds one and only one ray state .one of the principles of standard quantum mechanics is that ` pure states ' are ` ray states ' .[ qmprinc01 ] consider an entity described by sqm in a hilbert space .each ray state , is a pure state of , and each pure state of is of this form .the problem that we want to consider in this paper appears in the sqm description by of the joint entity of two quantum entities and .[ qmprinc02 ] if we consider two quantum entities and described by sqm in hilbert spaces and , then the joint quantum entity consisting of these two quantum entities is described by sqm in the tensor product hilbert space .the subentities and are in ray states and , with and , iff the joint entity is in a ray state . in relation to the situation of a joint entity consisting of two subentitiesthere is another physical principle we generally imagine to be satisfied .[ physprinc02 ] if an entity is the joint entity of two subentities then the entity exists at a certain moment iff the subentities exist at that moment .let us introduce the concept of ` nonproduct vectors ' of the tensor product .for we say that is a nonproduct vector iff .we are now ready to formulate the theorem that points out the paradox we want to bring forward . physical principle [ physprinc01 ] , physical principle [ physprinc02 ] , sqm principle [ qmprinc01 ] and sqm principle [ qmprinc02 ] can not be satisfied together .[ theor01 ] proof : suppose the four principles are satisfied .this leads to a contradiction . consider the joint entity of two subentities and , described by sqm . 
from sqm principle [ qmprinc02 ]follows that if and are described in hilbert spaces and then is described in the hilbert space .let us consider a nonproduct vector and the ray state .from sqm principle [ qmprinc01 ] follows that represents a pure state of .consider a moment where is in state .physical principle [ physprinc01 ] implies that exists at that moment and from physical principle [ physprinc02 ] we infer that and also exist at that moment .physical principle [ physprinc01 ] then implies that and are respectively in pure states and . from sqm principle [ qmprinc01 ]it follows that there are two rays and , and such that and . from sqm principle [ qmprinc02 ] then follows that is in the state which is not since the ray generated by is different from .since both and are pure states , this contradicts physical principle [ physprinc01 ] .the fundamental problems of the sqm description of the joint entity of two subentities had already been remarked a long time ago . the einstein podolsky rosen paradox and later research on the bell inequalities are related to this difficulty ( einstein et al . , 1935 ;bell , 1964 ) .it are indeed states like , with a nonproduct vector , that give rise to the violation of the bell inequalities and that generate the typical epr correlations between the subentities .most of the attention in this earlier analysis went to the ` non local ' character of these epr correlations .the states of type are now generally called ` entangled ' states .the problem ( paradox ) related to entangled states that we have outlined in section [ sec01 ] has often been overlooked , although noticed and partly mentioned in some texts ( e.g. van fraassen , 1991 , section 7.3 and references therein ) .the problem of the description of a joint entity has also been studied within axiomatic approaches to sqm .there it was shown that some of the axioms that are needed for sqm are not satisfied for certain well defined situations of a joint entity consisting of two subentities ( aerts , 1982 , 1984a ; pulmannov , 1983 , 1985 ; randall and foulis , 1981 ) .more specifically , it has been shown in aerts ( 1982 ) that the joint entity of two separated entities can not be described by sqm because of two axioms : weak modularity and the covering law .this shortcoming of sqm is proven to be at the origin of the epr paradox ( aerts , 1984b , 1985a , b ) .it has also been shown that different formulations of the product within the mathematical categories employed in the axiomatic structures do not coincide with the tensor product of hilbert spaces ( aerts , 1984a ; pulmannov , 1983 , 1985 ; randall and foulis , 1981 ) . againcertain axioms , orthocomplementation , covering law and atomicity , are causing problems .all these findings indicate that we are confronted with a deep problem that has several complicated and subtle aspects .a very extreme attitude was to consider entangled states as artifacts of the mathematical formalism and hence not really existing in nature .meanwhile , entangled states are readily prepared in the laboratory and the corresponding epr correlations have been detected in a convincing way .this means that it is very plausible to acknowledge the existence of entangled states as ` pure states ' of the joint entity in the sense of physical principle [ physprinc01 ] . 
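whether a given vector of the tensor product is a nonproduct (entangled) vector can be checked mechanically: a vector of H_A tensor H_B is a product vector iff its coefficient matrix has rank one, i.e. a single nonzero schmidt coefficient. a small self-contained check (python with numpy; an illustration added here, not part of the original text), using the singlet state that appears later in the text:

```python
import numpy as np

def is_product(psi, dA, dB, tol=1e-12):
    """True iff psi, a vector of the tensor product C^dA (x) C^dB, is a
    product vector, i.e. its dA x dB coefficient matrix has rank one."""
    svals = np.linalg.svd(np.asarray(psi).reshape(dA, dB), compute_uv=False)
    return int(np.sum(svals > tol)) == 1

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(is_product(np.kron(up, down), 2, 2))                    # True: product vector
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
print(is_product(singlet, 2, 2))                              # False: nonproduct (entangled)
```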
as a result of earlier researchwe have always been inclined to believe that we should drop physical principle [ physprinc02 ] to resolve the paradox ( aerts , 1984a , 1985a , b ; see also aerts , broekaert and smets , 1999 ) .this comes to considering the subentities and of the joint entity as ` not existing ' if the joint entity is in an entangled state .we still believe that this is a possible solution to the paradox and refer for example to valckenborgh ( 1999 ) and aerts and valckenborgh ( 1999 ) for further structural elaboration in this direction . in the present paper we would like to bring forward an alternative solution . to make it explicitwe introduce the concept of ` density state ' , which is a state represented by a density operator of the hilbert space .we denote density operators by symbols and the corresponding density states by . with each density operator on corresponds one and only one density state . within sqm density statesare supposed to represent mixtures , i.e. situations of lack of knowledge about the pure state .the way out of the paradox we propose in the present paper consists of considering the density states as pure states .hence , in this sense , sqm principle 1 would be false and substituted by a new principle .[ cqmprinc ] consider an entity described in a hilbert space .each density state , where is a density operator of , is a pure state of , and each pure state of is of this form. we call the quantum mechanics that retains all the old principles except sqm principle [ qmprinc01 ] , and that follows our new principle cqm principle [ cqmprinc ] , ` completed quantum mechanics ' and refer to it by cqm .the first argument for our proposal of this solution comes from earlier work in relation with the violation of bell inequalities by means of macroscopic entities ( aerts , 1991a ) .there we introduce a macroscopic material entity that entails epr correlations .let us briefly describe this entity again to state our point .first we represent the spin of a spin quantum entity by means of the elastic sphere model that we have introduced on several occasions ( aerts , 1986 , 1987 , 1991a , b , 1993 , 1995 , 1999a , b ) , and that we have called the ` quantum machine ' .it is well known that the states , ray states as well as density states , of the spin of a spin entity can be represented by the points of a sphere in three dimensional euclidean space with radius and center .let us denote the state corresponding to the point by . to make the representation explicit we remark that each vector can uniquely be written as a convex linear combination of two vectors and on the surface of the sphere ( fig .2 ) i.e. with $ ] and . in this waywe make correspond with the density operator : each density operator can be written in this form and hence the inverse correspondence is also made explicit .we remark that the ray states , namely the density operators that are projections , correspond to the points on the surface of .it is much less known that the experiments on the spin of a spin quantum entity can be represented within the same picture .let us denote the direction in which the spin will be measured by the diametrically opposed vectors and of the surface of ( fig . 1 ) , andlet us consider as the direction of the standard spin representation ( this does not restrict the generality of our calculation ) . 
in this case , in sqm , the spin measurement along , which we denote ,is represented by the self adjoint operator with being the spectral projections .the sqm transition probabilities , , the probability for spin up outcome if the state is , and , the probability for spin down outcome if the state is , are then : let us now show that , using the sphere picture , we can propose a realizable mechanistic procedure that gives rise to the same probabilities and can therefore represent the spin measurement .0.5 cm 2 cm _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ = 7pt fig . 1 : a representation of the quantum machine . in ( a )the particle is in state , and the elastic corresponding to the experiment is installed between the two diametrically opposed points and . in ( b )the particle falls orthogonally onto the elastic and sticks to it . in ( c )the elastic breaks and the particle is pulled towards the point , such that ( d ) it arrives at the point , and the experiment gets the outcome . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ our mechanistic procedure starts by installing an elastic strip ( e.g. a rubber band ) of 2 units of length , such that it is fixed with one of its end - points in and the other end - point in ( fig .once the elastic is installed , the particle falls from its original place orthogonally onto the elastic , and sticks to it ( fig .then , the elastic breaks in some arbitrary point . consequently the particle , attached to one of the two pieces of the elastic ( fig . 
1,c ) , is pulled to one of the two end - points or ( fig .now , depending on whether the particle arrives in ( as in fig .1 ) or in , we give the outcome _ ` up ' _ or _ ` down ' _ to this experiment .let us prove that the transition probabilities are the same as those calculated by sqm .the probability , that the particle ends up in point ( experiment gives outcome _ ` up ' _ ) is given by the length of the piece of elastic divided by the total length of the elastic .the probability , , that the particle ends up in point ( experiment gives outcome _ ` down ' _ ) is given by the length of the piece of elastic divided by the total length of the elastic .this means that we have : comparing ( [ probquant ] ) and ( [ probform ] ) we see that our mechanistic procedure represents the quantum mechanical measurement of the spin .0.5 cm 3.5 cm _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ = 7pt fig . 2 : a representation of the experimental process .the elastic of length 2 , corresponding to the experiment , is installed between and .the probability , , that the particle ends in point under influence of the experiment is given by the length of the piece of elastic divided by the total length of the elastic .the probability , , that the particle ends in point is given by the length of the piece of elastic divided by the total length of the elastic ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ to realize the macroscopic model with epr correlations we consider two such ` quantum machine ' spin models , where the point particles are connected by a rigid rod , which introduces the correlation .we will only describe the situation where we realize a state that is equivalent to the singlet spin state , where , and refer to aerts ( 1991a ) for a more detailed analysis .suppose that the particles are in states and where and are respectively the centers of the spheres and ( fig .3 ) connected by a rigid rod .we call this state ( the presence of the rod included ) .the experiment consists of performing in and in and collecting the outcomes _ ( up , up ) , ( up , down ) , ( down , up ) _ or _ ( down , down)_. in fig .3 we show the different phases of the experiment .we make the hypothesis that one of the elastics breaks first and pulls one of the particles _ up _ or _down_. 
then we also make the hypothesis that once that one of the particles has reached one of the outcomes , the rigid connection breaks down .the experiment continues then without connection in the sphere where the elastic is not yet broken .the joint probabilities can now easily be calculated : where is the angle between and .these are exactly the quantum probabilities when represents the singlet spin state . as a consequenceour model is a representation of the singlet spin state .this means that we can put . 0.5 cm 0.5 cm _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 0.7 cm = 7pt fig . 3 : a macroscopic mechanical entity with epr correlations ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ why does this example inspire us to put forward the hypothesis that density states are pure states ?well , if we consider the singlet spin state , then this is obviously a nonproduct state , and hence the states of the subentities are density states .in fact they are the density states and where however , the state of the joint entity is clearly not given by the density state corresponding to the density operator because this state does not entail correlations .it is due the presence of the epr correlations that the state of the joint entity is represented by a ray state . in our macroscopic mechanistic modelhowever all the states ( also the states of the subentities ) are ` pure states ' and not mixtures ( remark that we use the concept ` pure state ' as defined in section [ sec01 ] ) . if our proposal were true , namely if density states as well as ray states in principle represented pure states , we could also understand why , although the state of the joint entity uniquely determines the states of the subentities , and hence physical principle [ physprinc02 ] is satisfied , the inverse is not true : the states of the subentities do not determine the state of the joint entity . indeed , a state of one subentity can not contain the information about the presence of an eventual correlation between the subentities . this way , it is natural that different types of correlations can give rise to different states of the joint entity , the subentities being in the same states .this possibility expresses the philosophical principle that the whole is greater than the sum of its parts and as our model shows it is also true in the macroscopic world .let us now say some words about the generality of the construction that inspired us for the proposed solution . 
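before turning to the generality of the construction, the reduced states quoted above can be verified directly: tracing the singlet projector over either factor gives the maximally mixed density operator on the other factor, and the product of the two reduced operators is not the singlet projector — the subentity states indeed do not determine the state of the joint entity. a short numerical check (python with numpy; an illustration added here, not part of the original text):

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
P = np.outer(singlet, singlet)                 # projector on the singlet ray

rho = P.reshape(2, 2, 2, 2)                    # indices (i1, i2, j1, j2)
rho_1 = np.trace(rho, axis1=1, axis2=3)        # partial trace over subentity 2
rho_2 = np.trace(rho, axis1=0, axis2=2)        # partial trace over subentity 1
print(rho_1)                                   # identity/2: density state of subentity 1
print(rho_2)                                   # identity/2: density state of subentity 2
print(np.allclose(np.kron(rho_1, rho_2), P))   # False: reduced states do not fix the joint state
```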
it has been shown in coecke ( 1995a ) that a quantum machine like model can be realized for higher dimensional quantum entities .coecke ( 1995b , 1996 ) also showed that all the states of the tensor product can be realized by introducing correlations on the different component states .this means that we can recover all the nonproduct ray states of the tensor product hilbert space by identifying them with a product state plus a specific correlation for a general quantum entity and hence that our solution of the paradox is a possible solution for a general quantum entity .if we carefully analyze the calculations that show the equivalence of our model to the quantum model , we can understand why the distinction between ` interpreting density states as mixtures ' and ` interpreting density states as pure states ' can not be made experimentally .indeed , because of the linearity of the trace , used to calculate the quantum transition probabilities , and because the inner points of the sphere can be written as convex linear combinations of the surface points , an ontological situation of mixtures must give the same experimental results as an ontological situation of pure states . if we would be able to realize experimentally a nonlinear evolution of one of the subentities that has been brought into an entangled state with the other subentity as subentity of a joint entity , it would be possible to test our hypothesis and to detect experimentally whether density states are pure states or mixtures . indeed, suppose that we a nonlinear evolution of one of the entangled subentities could be realized .then , we can distinguish the two possibilities in the following way .if the density state of the entangled subentity is a mixture , then this state evolves while staying a convex linear combination of the ray states and ( referring to the situation of fig . 3 ) .the nonlinear evolution makes evolve the ray states and and this determines the evolution of the density state , but the correspondence between and and remains linear .if the density state of the entangled subentity is a pure state , then the nonlinear evolution will make it evolve independent of the way in which the ray states and evolve .this means that in general the relation between and and will not remain that of a convex linear combination .so we can conclude that for a nonlinear evolution the change of the density state of an entangled subentity under this evolution will be different depending on whether it is a mixture or a pure state .this difference can experimentally be detected by a proper experimental setup .we believe that such an experiment would be of great importance for the problem that we have outlined here .the quantum axiomatic approaches make use of piron s representation theorem where the set of pure states is represented as the set of rays of a generalized hilbert space ( piron , 1964 , 1976 ) . this theorem has meanwhile been elaborated and the recent result of solr has made it possible to formulate an axiomatics that characterizes sqm for real , complex or quaternionic hilbert spaces ( sler 1995 , aerts and van steirteghem 1999 ) .this standard axiomatic approach aims to represent pure states by rays of the hilbert space .if our proposal is true , an axiomatic system should be constructed that aims at representing pure states by means of density operators of the hilbert space . 
within the generalization of the geneva - brussels approach that we have formulated recently , and where the mathematical category is that of state property systems and their morphisms , such an axiomatic can be developed ( aerts , 1999a ; aerts et al .1999 ; van steirteghem , 1999 ; van der voorde 1999 ) . in aerts ( 1999c )we made a small step in the direction of developing such an axiomatic system by introducing the concept of ` atomic pure states ' and treating them as earlier the pure states were treated , aiming to represent these atomic pure states by the rays of a hilbert space .we proved that in this case the covering law remains a problematic axiom in relation to the description of the joint entity of two subentities ( theorem 18 of aerts 1999c ) .we are convinced that we would gain more understanding in the joint entity problem if a new axiomatic would be worked out , aiming to represent pure states by density operators of a hilbert space , and we are planning to engage in such a project in the coming years .diederik aerts is senior research associate of the belgian fund for scientific research .this research was carried out within the project bil 96 - 03 of the flemish governement .aerts , d. , ( 1982 ) .description of many physical entities without the paradoxes encountered in quantum mechanics , _ found .phys . _ , * 12 * , 1131 - 1170. aerts , d. , ( 1985b ) .the physical origin of the epr paradox and how to violate bell inequalities by macroscopic systems , in _ on the foundations of modern physics _ ,p. lathi and p. mittelstaedt , world scientific , singapore , 305 - 320 .aerts , d. , ( 1987 ) .the origin of the nonclassical character of the quantum probability model , in _ information , complexity , and control in quantum physics _ , eds .a. blanquiere , s. diner , and g. lochak , springer - verlag , wien - new york , 77 - 100 .aerts , d. , ( 1991a ) . a mechanistic classical laboratory situation violating the bell inequalities with , exactly in the same way as its violations by the epr experiments , _ helv .acta _ , * 64 * , 1 - 24 .aerts , d. , ( 1991b ) . a macroscopic classical laboratory situation with only macroscopic classical entities giving rise to a quantum mechanical probability model , in _ quantum probability and related topics , volume vi _ , ed .l. accardi , world scientific publishing , singapore , 75 - 85 .aerts , d. , ( 1999b ) .the stuff the world is made of : physics and reality , to appear in _einstein meets magritte : an interdisciplinary reflection _ , eds .d. aerts , j. broekaert and e. mathijs , kluwer academic , dordrecht , boston , london .aerts , d. , coecke , b. , dhooghe , b. , durt , t. and valckenborgh , f. , ( 1996 ) . a model with varying fluctuations in the measurement context , in _ fundamental problems in quantum physics ii _ , eds .ferrero , m. and van der merwe , a. , plenum , new york , 7 - 9 .randall , c. and foulis , d. , ( 1981 ) . operational statistics and tensor products , in _ interpretations and foundations of quantum theory _ , h. neumann , ed .wissenschaftsverslag , bibliographisches institut , mannheim , 21 . | we formulate a paradox in relation to the description of a joint entity consisting of two subentities by standard quantum mechanics . we put forward a proposal for a possible solution , entailing the interpretation of ` density states ' as ` pure states ' . we explain where the inspiration for this proposal comes from and how its validity can be tested experimentally . 
we discuss the consequences of the proposal for quantum axiomatics . |
the set of linear equations with variable weights , , in eq .[ eq1 ] can be rewritten as a system of linear equations with variables : where now is a matrix of real numbers , and .notice that the linear system in eq .[ eq : sistema ] has solutions since the rank of is ( all the equations are separated and each of the variables , , appears in one equation only ) , and the in - degree of all nodes is positive by definition .hence , there always exists such that eq .[ eq1 ] is satisfied .it is convenient to rewrite eq .[ eq : sistema ] in a form that emphasises the dependence of matrix from .we choose to label the arcs as follows : , denotes the -th arc entering node , where is the in degree of node .likewise , is the source of arc , while is the corresponding weight . using this notation , the -th component of eq .[ eq : sistema ] can be written as : by direct computation , one positive solution of eq .[ eq : eq2 ] is given by where , and by continuity there are infinite many solutions such that are all positive . in particular , if for node we have , then the -th equation of eq .[ eq : eq2 ] has a unique solution , while if , there are always infinitely many solutions depending on parameters . summing up , eq .( [ eq : sistema ] ) has only one solution if all the node in - degrees are equal to one , while there are , in general , infinitely many solutions depending on parameters .notice that can be different from , meaning that it is also possible to set the value of the largest eigenvalue of the weighted graph . here, we show that it is not necessary to fix the weights of all the graph links in order to get an arbitrary centrality vector .in fact , given a subset of links containing at least one incoming link for each node , it is sufficient to assign some positive weights to each , while keeping constant , for instance all equal to , such that the resulting weighted graph has eigenvector centrality equal to . without loss of generality we can assume that the first incoming links of each node belong to , so that the components of eq .[ eq : eq2 ] can be written as : therefore , since for each , then there is a such that for every and hence , by a similar continuity argument as above , we can ensure that there are infinitely many positive solutions to eq.[eq : eq3 ] .a _ controlling set _ of graph is any set of nodes such that : this means that , for each node in the graph , at least one of the two following conditions holds : _ a _ ) , or _ is pointed by at least one node in .we use to denote the size of the controlling set , i.e. the number of nodes contained in . finding the _ minimum controlling set_ of a graph , i.e. a controlling set having minimal size , is equivalent to computing the so - called _ domination number _ of .the domination number problem is a well known np - hard problem in graph theory .therefore , the size of the minimum controlling set can be determined exactly only for small graphs as those in fig .[ fig : real_social ] . to investigate larger graphswe have used two greedy algorithms .the first algorithm , called top down controller search ( tdcs ) , works as follows .we initially set .we select the node with the maximum out - degree in , and mark it as _ controller node_. then , all the nodes in the out - neighbourhood of are marked as _ controlled _ and are removed from , together with itself . in this way , we obtain a new graph , and we store the controller node , together with the list of nodes controlled by . 
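before continuing with the description of the tdcs algorithm, which resumes just below, here is a numerical illustration of the weight assignment derived above: setting the weight of the arc from node s to node i equal to lambda*c_i/(k_i^in*c_s) makes the prescribed positive vector c an eigenvector of the weighted graph, and by perron-frobenius it is the leading one when the graph is strongly connected. the sketch (python with numpy) uses an arbitrary small example; it is our illustration of the construction, not code from the original work.

```python
import numpy as np

def control_weights(arcs, c, lam=1.0):
    """Assign a weight to every arc (s, i) so that the prescribed positive
    vector c is the eigenvector centrality of the weighted graph:
    sum_s w[s, i] * c[s] = lam * c[i] for every node i, using the explicit
    solution w[s, i] = lam * c[i] / (k_in[i] * c[s])."""
    n = len(c)
    k_in = np.zeros(n)
    for s, i in arcs:
        k_in[i] += 1
    W = np.zeros((n, n))
    for s, i in arcs:
        W[s, i] = lam * c[i] / (k_in[i] * c[s])
    return W

# small strongly connected example; target centralities chosen at will
arcs = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
c = np.array([0.1, 0.2, 0.3, 0.4])
W = control_weights(arcs, c)

# power iteration on W^T recovers c (up to normalisation)
x = np.ones(len(c)) / len(c)
for _ in range(1000):
    x = W.T @ x
    x /= x.sum()
print(np.allclose(x, c / c.sum(), atol=1e-6))   # True
```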
notice that , removing a generic node from , also implies that does not contain any of the links pointing to or originating from it .the same procedure is iteratively applied to , and so on , until all the nodes of are either marked as controller or as controlled nodes .the algorithm produces a set , with , which is a controlling set of by construction .the second algorithm is called bottom up controller search ( bucs ) , and it works as follows .we set and consider the set containing all the nodes in with minimum in degree . for each node , we consider the set of nodes pointing to and select from this set the node with the maximal out degree .this node is marked as _controller_. then we obtain a new graph by removing from all the controller nodes for all , together with all the nodes , marked as _ controlled _ , pointed by them .the same procedure is iteratively applied to , and so on , until all the nodes of are either marked as controller or as controlled nodes .if a graph contains isolated nodes , these are marked as _ controller _ and removed from .the algorithm finally produces a set which is a controlling set of by construction .we have verified that the controlling sets obtained by both tdcs and bucs for each of the networks considered are much smaller than those obtained by randomly selecting the controlling nodes . moreover , the set of controller nodes found by tdcs is in general different from that obtained on the same network by bucs .also the sizes of the two controlling sets obtained by the two algorithms are different .in particular , we have noticed that in assortative ( disassortative ) networks the controlling set produced by tdcs is smaller ( larger ) than that produced by bucs .boccaletti , s. , latora , v. , moreno , y. , chavez , m. , hwang , d.u ., complex networks : structure and dynamics ._ phys . rep ._ * 424 , * 175 - 308 ( 2006 ) .arenas , a. , daz - guilera , a. , khurths , j. , moreno , y. , zhou , c. synchronization in complex networks ._ phys . rep ._ * 469 , * 93 - 153 ( 2008 ) .barrat , a. , barthlemy , m. , vespignani , a. _ dynamical processes in complex networks ._ ( cambridge university press , cambridge , 2008 ) .fortunato , s. community detection in graphs._phys . rep . _ * 486 , * 75 - 174 ( 2010 ) . bavelas , a. a mathematical model for group structures . _organ _ * 7 * , 16 ( 1948 ) .wasserman , s. , faust , k. _ social networks analysis . _ ( cambridge university press , cambridge , 1994 ) .jeong , h. , mason , s.p . ,barabsi , a .-, oltvai , z. n. lethality and centrality in protein networks ._ nature _ * 411 , * 41 - 42 ( 2001 ) .albert , r. , jeong , h. , barabsi , a .-l . error and attack tolerance of complex networks . _nature _ , * 406 , * 378 - 382 ( 2000 ) .crucitti , p. , latora , v. , porta , s. centrality measures in spatial networks of urban streets .e _ * 73 , * 036125 ( 2006 ) .barthlemy , m. spatial networks , _ phys . rep . _* 499 , * 1 - 101 ( 2011 ) .freeman , l.c .centrality in social network .conceptual clarification ._ social networks _ * 1 , * 215 - 239 ( 1979 ) .barthlemy , m. betweenness centrality in large complex networks .j. b _ * 38 , * 163 - 168 ( 2004 ) .perra , n. , fortunato , s. spectral centrality measures in complex networks .e _ * 78 , * 036107 ( 2008 ) .bonacich , p. factoring and weighting approaches to status scores and clique identification . _ j. math .* 2 , * 113 ( 1972 ) .bonacich , p. 
, lloyd , p.eigenvector - like measures of centrality for asymmetric relations .* 23 , * 191 - 201 ( 2001 ) .bonacich , p. power and centrality : a family of measures .j. sociol ._ * 92 , * 1170 - 1182 ( 1987 ) .katz , l. a new status index derived from sociometric analysis ._ psychometrika _ * 18 , * 39 - 43 ( 1953 ) .brin , s. , page , l. the anatomy of a large - scale hypertextual web search engine ._ comput .netw . _ * 30 , * 107 - 117 ( 1998 ) .delvenne , j .- c . , libert , a .- s . centrality measures and thermodynamicformalism for complex networks .e _ * 83 , * 046117 ( 2011 ) .gfeller , d. , de los rios , p. spectral coarse graining of complex networks ._ * 99 , * 038701 ( 2007 ) .fortunato , s. , flammini , a. , random walks on directed networks : the case of pagerank , _ int .j. bifurcat . chaos _ * 17 , * 2343 - 2353 ( 2007 ) .hlebec , v .recall versus recognition : comparison of the two alternative procedures for collecting social network data ._ developments in statistics and methodology , p. 121 - 129 ._ ( a. ferligoj , a. kramberger , editors ) metodoloki zvezki 9 , fdv , ljubljana , 1993 , breiger , r. , boorman , s. , arabie , p. an algorithm for clustering relational data with applications to social network analysis and comparison with multidimensional scaling ._ j. math ._ * 12 , * 328 - 383 ( 1975 ) .zachary , w. w. an information flow model for conflict and fission in small groups . _ j. anthropol .* 33 , * 452 - 473 ( 1977 ) .west , d.b ._ introduction to graph theory . _( prentice - hall , 2 edition , nj , 2001 ) .leskovec , j. , lang , k. , dasgupta , a. , mahoney , m. community structure in large networks : natural cluster sizes and the absence of large well - defined clusters ._ arxiv.org:0810.1355_ , ( 2008 ) .albert , r. , jeong , h. , barabsi , a .-diameter of the world wide web ._ nature _ , * 401 , * 130 ( 1999 ) .gleiser , p. , danon , l. community structure in jazz .complex syst ._ * 6 * , 565 ( 2003 ) .barabsi , a .-albert , r. , emergence of scaling in random networks ._ science _ * 286 , * 509 ( 1999 ) .newman , m.e.j . scientific collaboration networks .shortest paths , weighted networks , and centrality .* 64 * , 016132 ( 2001 ) .newman , m.e.j .finding community structure in networks using the eigenvectors of matrices .* 74 , * 036104 ( 2006 ) .guimera , r. , danon , l. , daz - guilera , a. , giralt , f. , arenas , a. self - similar community structure in a network of human interactions .e _ * 68 , * 065103(r ) ( 2003 ) .klimmt , b. , yang , y. introducing the enron corpus ._ ceas conference _ ( 2004 ) .leskovec , j. , kleinberg , j. , faloutsos , c .graph evolution : densification and shrinking diameters ._ acm sigkdd international conference on knowledge discovery and data mining _ ( kdd ) ( 2005 ) .leskovec , j. , huttenlocher , d. , kleinberg , j. predicting positive and negative links in online social networks ._ www conference _ ( 2010 ) .available online at http://topology.eecs.umich.edu/. electric engineering and computer science department , university of michigan , topology project .colizza , v. , pastor - satorras , r. , vespignani , a. , _ nat ._ * 3 , * 276 - 282 ( 2007 ) .watts , d .-j . , strogatz , s .- h .collective dynamics of small - world networks ._ nature _ * 393 , * 440 - 442 ( 1998 ) . milo , r. , itzkovitz , s. , kashtan , n . , levitt , r. , shen - orr , s. , ayzenshtat , i. , sheffer , m. , alon , u. 
superfamilies of evolved and designed networks ._ science _ * 303 , * 1548 - 1542 ( 2004 ) .available online at http://vlado.fmf.uni - lj.si / pub / networks / data/. nelson , d. l. , mcevoy , c. l. , schreiber , t. a. the university of south florida word association , rhyme , and word fragment norms . http://www.usf.edu/freeassociation/ ( 1998 ) .bogua , m. , pastor - satorras , r. , daz - guilera , a. , arenas , a. models of social networks based on social distance attachment .e _ , * 70 * , 056122 ( 2004 ) .leskovec , j. , adamic , l. , adamic , b. the dynamics of viral marketing ._ acm trans . on the web ( acm tweb )_ , * 1 , * ( 2007 ) .richardson , m. , agrawal , r. , domingos , p. trust management for the semantic web .proceedings of iswc , ( 2003 ) .ripeanu , m. , foster , i. , iamnitchi , a. mapping the gnutella network : properties of large - scale peer - to - peer systems and implications for system design ._ ieee internet computing journal _ , ( 2002 ) .adamic , l.a . , glance , n. the political blogosphere and the 2004 us election ._ proceedings of the www-2005 workshop on the weblogging ecosystem _ ( 2005 ) .available online at http://www.orgnet.com/ | spectral centrality measures allow to identify influential individuals in social groups , to rank web pages by their popularity , and even to determine the impact of scientific researches . the centrality score of a node within a network crucially depends on the entire pattern of connections , so that the usual approach is to compute the node centralities once the network structure is assigned . we face here with the inverse problem , that is , we study how to modify the centrality scores of the nodes by acting on the structure of a given network . we prove that there exist particular subsets of nodes , called controlling sets , which can assign any prescribed set of centrality values to all the nodes of a graph , by cooperatively tuning the weights of their out - going links . we show that many large networks from the real world have surprisingly small controlling sets , containing even less than of the nodes . these results suggest that rankings obtained from spectral centrality measures have to be considered with extreme care , since they can be easily controlled and even manipulated by a small group of nodes acting in a coordinate way . modelling social , biological and information - technology systems as complex networks has proven to be a successful approach to understand their function . among the various aspects of networks which have been investigated so far , the issue of _ centrality _ , and the related problem of identifying the central elements in a network , has remained pivotal since its first introduction . the idea of centrality was initially proposed in the context of social systems , where it was assumed a relation between the location of an individual in the network and its influence and power in group processes . since then , various _ centrality measures _ have been introduced over the years to rank the nodes of a graph according to their topological importance . centrality has found many applications in social systems , in biology and in man - made spatial networks . among the various measures of centrality , such as those based on counting the first neighbours of a node ( degree centrality ) , or the number of shortest paths passing through a node ( betweenness centrality ) , a particularly important class of measures are those based on the spectral properties of the graph . 
spectral centrality measures include the _ eigenvector centrality _ , the _ alpha centrality _ , _ katz s centrality _ and _ pagerank _ , and are often associated to simple dynamics taking place over the network , such as various kinds of random walks . as representative of the class of spectral centralities , we focus here on eigenvector centrality , which is based on the idea that the importance of a node is recursively related to the importance of the nodes pointing to it . given an unweighted directed graph with nodes and links , described by the adjacency matrix , the eigenvector centrality of is defined as the eigenvector of associated to the largest eigenvalue , which in formula reads . if the graph is strongly connected , then the perron - frobenius theorem guarantees that is unique and positive . therefore , can be normalised such that the sum of the components equals 1 , and the value of the -th component represents the centrality score of node , i.e. the fraction of the total centrality associated to node . in this article we show how to change the eigenvector centrality scores of all the nodes of a graph by performing only local changes at node level . as a first step ( see the methods section ) we have proved that , given any arbitrary positive vector , , and , it is _ always _ possible to assign _ the weights of all the links _ of a strongly connected graph and to construct a new weighted network , with the same topology as and with eigenvector centrality equal to : where is the weighted adjacency matrix of . this is illustrated in fig . [ fig : smallgraph ] for a graph with nodes and links . in the original unweighted graph , node is the node with the highest eigenvector centrality , followed in order by node , node , and node . now , if we have the possibility of tuning the weights of each of the five links , we can set any centrality value to the nodes of the graph . in figure we show , for instance , how to fix the weights of the five links in order to construct : _ i ) _ a weighted network in which all nodes have the same centrality score , and _ ii ) _ even a weighted network in which the centrality ranking is totally reversed with respect to the ranking in . as shown in the example , given a graph , by controlling the weights of all the links , it is always possible to set any arbitrary vector as the eigenvector centrality of the graph . however , tuning the weights of all the links of a given network is practically unfeasible , especially in large systems . fortunately , this is not necessary , either . in fact , in the case of fig . [ fig : smallgraph ] , a weighted graph with all nodes having the same centrality score can also be obtained by changing the weights of only four links , while leaving unchanged the weight of the link from node 1 to node 2 . more in general , it can be proved that the eigenvector centrality of the whole network can be controlled by appropriately tuning the weights of just of the links . the only constraint is that the links must belong to a subset such that , for every node , there is a link _ pointing to _ ( see methods section ) . this is illustrated in fig . [ fig : real_social ] for three real social networks . in each of the three cases , it is possible to set any arbitrary eigenvector centrality by changing only the weights of the red arcs , while keeping unchanged ( and equal to ) the weights of all remaining arcs , shown in yellow . 
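the construction invoked above is spelled out in the methods section of the original paper, which is not part of this excerpt, so the following python sketch should be read as one plausible realisation of the idea rather than the authors' exact recipe. it assumes the convention that the centrality vector c of a weighted digraph with weight matrix W (W[i, j] the weight of the link from i to j) satisfies W^T c = lambda c, so that a node's score is the weighted sum of the scores of the nodes pointing to it; for a strongly connected graph it is then enough to rescale, node by node, the weights of the incoming links so that they reproduce the prescribed scores. all names are illustrative.

```python
import numpy as np

def weights_for_prescribed_centrality(A, c, lam=1.0):
    """given a 0/1 adjacency matrix A (A[i, j] = 1 iff there is a link i -> j)
    of a graph in which every node has at least one incoming link, build a
    weighted matrix W with the same sparsity pattern such that the prescribed
    positive vector c satisfies W^T c = lam * c.  simple choice: split the
    target score lam * c[j] evenly over the in-links of node j."""
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float)
    W = np.zeros_like(A)
    for j in range(A.shape[0]):
        in_nbrs = np.flatnonzero(A[:, j])              # nodes i with a link i -> j
        if len(in_nbrs) == 0:
            raise ValueError("node %d has no incoming link" % j)
        # each in-link i -> j contributes W[i, j] * c[i]; make the total lam * c[j]
        W[in_nbrs, j] = lam * c[j] / (len(in_nbrs) * c[in_nbrs])
    return W

# toy example: a 4-node strongly connected digraph
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0]])
c_target = np.array([0.1, 0.2, 0.3, 0.4])              # any positive scores summing to 1
W = weights_for_prescribed_centrality(A, c_target)
print(np.allclose(W.T @ c_target, c_target))           # True: c_target is an eigenvector of W^T
```

because the prescribed vector is positive and the graph is strongly connected, the perron-frobenius theorem guarantees that the constructed vector is in fact the dominant eigenvector, i.e. the eigenvector centrality of the weighted graph.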
the nodes from which the links in originate are also coloured in red , and are referred to as a _ controlling set _ of the network ( see methods section ) . what is striking is that , in each of the three networks , the set can be chosen in such a way that all the links in originate from a relatively small subset of nodes . for instance , the controlling set reported for the student government network of the university of ljubljana contains only two nodes . this is also a _ minimum controlling set _ , since the graph does not admit another controlling set with a smaller number of nodes . this finding indicates that only two members of the student government , namely node 2 and node 8 , can in principle set the centrality of all the other members by concurrently modifying the weights of some of their links . it is in fact reasonable to assume that the weight of the directed link from to , representing in this case the social credit ( in terms of reputation , esteem or leadership acknowledgement ) given by individual to individual , can be strengthen or decreased _ only _ by . consequently , nodes 2 and 8 can modify at their will the weights of their out - going links and , if these changes are opportunely coordinated , they can largely alter the actual roles of all the other individuals . analogously , only five monks can control the centrality of the sampson s monk network , while only 4 members of the zachary s karate club network can set the eigenvector centrality of the remaining 30 members . a question of practical interest is to investigate the size of the minimum controlling set in various complex networks . when is small with respect to , then the centrality of the network is easy to control . conversely , when the number of nodes in the minimum controlling set is large , the network is more robust with respect to centrality manipulations . we have used two greedy algorithms to compute approximations of minimum controlling sets in various real systems ( see methods section ) . in table [ table ] we report the best approximation for , i.e. the size of the smallest controlling set produced by either of the two algorithms in networks whose sizes range from hundreds to millions of nodes . in the majority of the cases we have found unexpectedly small controlling sets , containing only up to of the nodes of the network . for instance , in the graph of jazz musicians , there exists a controlling set made by just of the musicians . these individuals alone can , in principle , decide to set the popularity of all the other musicians , enhancing the centrality of some of the nodes and decreasing the centrality of others , just by playing more or less often with some of their first neighbours . among all the networks we have considered , the one with the smallest controlling set is the wikipedia talk communication network , a graph with 2,394,385 nodes in which just of nodes are able to alter the centrality of the entire system . the quantities in parenthesis indicate that for this network a set of just of the nodes can control the centrality of of the nodes . [ cols="<,^,^,^,^",options="header " , ] for each real network , we have also computed the typical size of the minimum controlling set in its randomised counterpart ( see the rightmost column in table [ table ] ) . in particular , we have considered a randomisation which preserves the degree sequence of the original graph . 
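the exact randomisation procedure is not given in this excerpt; one common way to build degree-preserving null models for directed graphs, assumed here, is to apply repeated double edge swaps, which leave every node's in- and out-degree unchanged. a minimal sketch:

```python
import random

def degree_preserving_randomization(edges, n_swaps=10000, seed=0):
    """randomize a directed edge list by repeated double edge swaps
    (u1 -> v1, u2 -> v2)  ->  (u1 -> v2, u2 -> v1), which leave every node's
    in-degree and out-degree unchanged.  swaps that would create a self-loop
    or a duplicate edge are rejected."""
    rng = random.Random(seed)
    edges = list(edges)
    edge_set = set(edges)
    for _ in range(n_swaps):
        (u1, v1), (u2, v2) = rng.sample(edges, 2)
        if u1 == v2 or u2 == v1:                          # would create a self-loop
            continue
        if (u1, v2) in edge_set or (u2, v1) in edge_set:  # would duplicate an edge
            continue
        edge_set.difference_update({(u1, v1), (u2, v2)})
        edge_set.update({(u1, v2), (u2, v1)})
        edges[edges.index((u1, v1))] = (u1, v2)
        edges[edges.index((u2, v2))] = (u2, v1)
    return edges

original = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]
print(degree_preserving_randomization(original, n_swaps=100))
```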
in most of the cases , relevant exceptions being some spatial man - made networks , such as power grids , road networks and electronic circuits , and also the patents citation network . this fact suggests that , in the absence of other limitations , such as strong spatial / geographic constraints , the structure of real networks has naturally evolved to favour the control of spectral centrality by a small group of nodes . to better compare the controllability of networks with different sizes , we report in fig . [ fig : crand ] the ratio as a function of the number of graph nodes . the smallest values of the ratio are found for collaboration / communication systems , www and socio - economical networks . the five most controllable networks are respectively wiki - talk , internet at the as level , movie actors , the stanford world wide web , and the collaboration network of researchers in astrophysics . these are all networks in which single nodes can tune , at their will , the weights of their out - going links . a scientist can decide whether to weaken or strengthen the connections to some of the collaborators . the administrators of an internet autonomous system can control the routing of traffic through neighbouring ass , by modifying peering agreements . and , similarly , the owner of a web page can change the weights of hyperlinks , for instance by assigning them different sizes , colour , shapes and positions in the web page . , the ratio between the sizes of the minimum controlling set in real networks and in their respective randomized versions ( we have considered averages over 100 different realizations ) . different symbols and colors refer to the six network classes considered in table [ table ] . the observed ratio is lower than 1 in most of the cases , with the smallest values corresponding to collaboration / communication systems , www and socio - economical networks . the ratio is equal to 1 in three cases . the networks with ratio larger than 1 , with the exception of one socio - economical system ( namely epinions ) , are all spatially constrained systems : three electronic circuits , the us power grid , and three road networks . ] in this work , we have shown how a small number of entities , working cooperatively , can set any arbitrary eigenvector centrality for all the nodes of a real complex network . it is straightforward to extend our results to other spectral centralities , such as -centrality and katz s centrality . similar arguments can also be applied , with some limitations , to pagerank : in this case , the inverse centrality problem has solutions only for some particular choices of . such findings suggest that rankings obtained from centrality measures should be taken into account with extreme care , since they can be easily controlled and even distorted by a small group of cooperating nodes . the high controllability of real networks potentially has large social and commercial impact , given that centrality measures are nowadays extensively used to identify key actors , to rank web pages , and also to assess the value of a scientific research . |
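as a closing illustration of the kind of computation behind table [ table ], note that a controlling set must supply an in-link to every node of the graph, so approximating the minimum controlling set is an instance of set cover over out-neighbourhoods. the two greedy algorithms actually used by the authors are described in their methods section, which is not part of this excerpt; the standard greedy set-cover heuristic below is therefore only an indicative baseline.

```python
import numpy as np

def greedy_controlling_set(A):
    """greedy approximation of a small controlling set for a directed graph
    with 0/1 adjacency matrix A (A[i, j] = 1 iff there is a link i -> j).
    a controlling set is taken here to be a set S of nodes whose out-going
    links together reach every node of the graph; standard greedy set cover:
    repeatedly add the node that covers the most still-uncovered nodes."""
    A = np.asarray(A, dtype=bool)
    uncovered = np.ones(A.shape[0], dtype=bool)
    S = []
    while uncovered.any():
        gains = (A & uncovered).sum(axis=1)   # uncovered out-neighbours of each node
        best = int(np.argmax(gains))
        if gains[best] == 0:
            raise ValueError("some node has no incoming link at all")
        S.append(best)
        uncovered &= ~A[best]                 # everything `best` points to is now covered
    return S

A = np.array([[0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 0, 1],
              [1, 1, 0, 0, 0]], dtype=int)
print(greedy_controlling_set(A))              # -> [0, 2, 1] for this toy graph
```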
the main properties that should be satisfied by any jet definition were already pointed out almost twenty years ago : the jet algorithm should be 1 ) simple to implement both in experimental analysis and theoretical calculations , 2 ) well defined and yielding a finite cross section at any order of perturbation theory , 3 ) relatively insensitive to hadronization .many different jet definitions have been proposed in recent years but it turns out that some of them do not strictly meet the above features .this could in principle lead to serious problems especially when infrared(ir)-safety ( i.e. , the second item in the above list ) is not correctly implemented , since in this case the matching with fixed - order theoretical results would be spoiled and the whole jet - finding procedure would heavily depend on non - perturbative effects ( hadronization , underlying event , pile - up ) .we can group jet algorithms in two broad classes : iterative cone ( ic ) and sequential recombination ( sr ) .ic algorithms cluster particles according to their relative distance in _ coordinate - space _ and have been extensively used at past lepton and hadron colliders .sr algorithms cluster particles according to their relative distance in _ momentum - space _ and are somehow preferred by theorists since they rigorously take into account ir - safety . inwhat follows i will try to give an overview of recent developments obtained in both the ic and sr classes . for a complete and extensive treatment of these and other aspects of jet algorithmsi refer the reader to the recent literature on the subject .the core structure of any ic algorithm can be described as follows : choose a _ seed _ particle _ i _ , sum the momenta of all particles _ j _ within a cone of radius _ r _ ( in _ y _ and ) around _ i _, take the direction of this sum as a new seed , and repeat until the cone is stable ( i.e. , the direction of the _ n_-th seed coincides with the direction of the _ ( n-1)_-th ) .this procedure , however , may eventually lead to find multiple stable cones sharing the same particles , i.e. the cones may overlap .the problem can be solved either by preventing the overlap ( progressive removal ( ic - pr ) algorithms ) or through a splitting procedure ( split - merge ( ic - sm ) algorithms ) .the ic - pr algorithm starts the iteration from the particle with largest- and , once a stable cone is found , it removes all particles inside it .then the procedure starts again with the hardest remaining particle and go on until no particles are left .this algorithm is also known as ua1-cone , since it was first introduced and extensively used by the cern ua1 collaboration .it is quite easy to see that this algorithm is ir(collinear)-unsafe .assume that the hardest particle undergoes a collinear splitting with : in this case the ic - pr algorithm would lead to a different number / configuration of jets , since the -ordering has been modified by the collinear emission .the ic - sm algorithm does not rely on any particular ordering instead .once all stable cones have been found the prescription is to _ merge _ a pair of cones if more than a fraction ( typically =0.5 - 0.7 ) of the softer s cone is in common with the harder one , or to _ assign _ the shared particles to the closer cone .the _ split - merge _ procedure is repeated until there are no overlapping cones left . 
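for concreteness, the seeded stable-cone iteration underlying both variants can be sketched in a few lines; the version below implements the simpler progressive-removal flavour with a pt-weighted axis recombination in (y, phi), and is meant only to illustrate the logic (real cone algorithms work with full four-momenta and include many refinements omitted here).

```python
import numpy as np

def delta_r(y1, phi1, y2, phi2):
    dphi = (phi1 - phi2 + np.pi) % (2 * np.pi) - np.pi     # wrap to (-pi, pi]
    return np.hypot(y1 - y2, dphi)

def ic_pr_jets(particles, R=0.7, max_iter=100, tol=1e-6):
    """schematic iterative cone with progressive removal (ic-pr):
    seed on the hardest remaining particle, iterate the pt-weighted cone
    axis in (y, phi) until it is stable, call the cone a jet, remove its
    constituents and repeat with the hardest remaining particle."""
    remaining = sorted(particles, key=lambda q: q['pt'], reverse=True)
    jets = []
    while remaining:
        axis_y, axis_phi = remaining[0]['y'], remaining[0]['phi']   # hardest seed
        cone = [remaining[0]]
        for _ in range(max_iter):
            cone = [q for q in remaining
                    if delta_r(q['y'], q['phi'], axis_y, axis_phi) < R] or [remaining[0]]
            pt = sum(q['pt'] for q in cone)
            new_y = sum(q['pt'] * q['y'] for q in cone) / pt
            new_phi = np.arctan2(sum(q['pt'] * np.sin(q['phi']) for q in cone),
                                 sum(q['pt'] * np.cos(q['phi']) for q in cone))
            stable = delta_r(new_y, new_phi, axis_y, axis_phi) < tol
            axis_y, axis_phi = new_y, new_phi
            if stable:                                      # stable cone found
                break
        jets.append({'pt': sum(q['pt'] for q in cone),
                     'y': axis_y, 'phi': axis_phi, 'constituents': cone})
        cone_ids = {id(q) for q in cone}
        remaining = [q for q in remaining if id(q) not in cone_ids]
    return jets

parts = [{'pt': 40.0, 'y': 0.0, 'phi': 0.0},
         {'pt': 10.0, 'y': 0.3, 'phi': 0.1},
         {'pt': 20.0, 'y': 2.5, 'phi': 2.0}]
print([round(j['pt'], 1) for j in ic_pr_jets(parts)])       # -> [50.0, 20.0]
```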
unfortunately the ic - sm algorithm is ir(soft)-unsafe as well .assume that two stable cones are generated starting from two hard partons whose relative distance is between r and 2r : the addition of an extra soft particle in between would act as a new seed and the third stable cone would be found , again leading to a different number / configuration of jets .a partial solution to this problem was provided by the _ midpoint algorithm _ : after all possible jets have been found , run the algorithm again using additional _ midpoint _ seeds between each pair of stable cones .this prescription fixes the ir(soft ) issue of the ic - sm algorithm , since the result is now not dependent on the presence of an extra soft seed in the overlap region , and has been adopted as a recommandation for run ii of the tevatron . recently it has been pointed outthat , for particular configurations involving more than two partons , the midpoint algorithm is not able to find all stable cones : for exclusive quantities and/or multi - jet configurations , the midpoint prescription is still ir(soft)-unsafe .the ir issue is definitely solved by the introduction of seedless algorithms , first proposed in .the idea is to identify all possbile subsets of particles in an event and , for each subset , check if the cone defined by the azimuth and rapidity of the total momentum of contains other particles outside the subset : if this is not the case , then defines a stable cone . with this prescriptionthe jet - finding algorithm is infrared safe at all perturbative orders : the main drawback is that the clustering time ( ) ) leads to extremely slow performances for 4 - 5. the seedless algorithm has been recently improved by the sis - cone ( seedless infrared safe cone ) implementation , in which the clustering time is sensibly lowered ( , comparable to midpoint ) .the switching from the midpoint to the seedless cone is expected to have a significant impact only on exclusive quantities ( i.e. , jet mass distribution in multi - jet events ) , while the impact for inclusive observables should be modest since midpoint ir - unsafety only appears at relatively high orders in perturbation theory .the sr algorithm starts with the introduction of a distance between particles ( where is their distance in the plane and their transverse - momentum ) , and the distance between the particle and the beam .if then merge and , otherwise call a jet and remove it from the iteration . there are different types of sr algorithms , depending on the value of the integer in the definition of the distances : identifies the _inclusive- _ algorithm , defines the _ cambridge - aachen _algorithm , while for we have the recently proposed _algorithm .all the sr prescriptions are rigorously ir - safe : any soft parton is first merged with the closest hard parton and only at this point the decision about the merging of two jets is taken , depending exclusively on their opening angle. moreover , there are no more overlap problems , since any parton is inequivocally assigned to only one jet . 
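the whole sr family differs only in the exponent p entering the distance measure, which in the standard convention reads d_ij = min(kt_i^(2p), kt_j^(2p)) * dR_ij^2 / R^2 and d_iB = kt_i^(2p), with p = 1 for inclusive kt, p = 0 for cambridge-aachen and p = -1 for anti-kt. a single compact (if slow, o(n^3) per pass) sketch therefore covers all three cases; it uses a simplified pt-weighted recombination and toy particles, and is of course no substitute for the optimised public codes discussed next.

```python
import numpy as np

def delta_r2(a, b):
    """squared distance in the (y, phi) plane, with phi wrapped to (-pi, pi]."""
    dphi = (a['phi'] - b['phi'] + np.pi) % (2 * np.pi) - np.pi
    return (a['y'] - b['y'])**2 + dphi**2

def recombine(a, b):
    """simple pt-weighted recombination in (pt, y, phi); adequate for a sketch."""
    pt = a['pt'] + b['pt']
    return {'pt': pt,
            'y': (a['pt'] * a['y'] + b['pt'] * b['y']) / pt,
            'phi': np.arctan2(a['pt'] * np.sin(a['phi']) + b['pt'] * np.sin(b['phi']),
                              a['pt'] * np.cos(a['phi']) + b['pt'] * np.cos(b['phi']))}

def generalized_kt(particles, R=0.6, p=-1):
    """generalized-kt sequential recombination:
       d_ij = min(kt_i^(2p), kt_j^(2p)) * dR_ij^2 / R^2,   d_iB = kt_i^(2p)
       p = 1: inclusive kt, p = 0: cambridge/aachen, p = -1: anti-kt."""
    objs = [dict(prt) for prt in particles]
    jets = []
    while objs:
        kt2p = [o['pt'] ** (2 * p) for o in objs]
        i_beam = int(np.argmin(kt2p))
        best = ('beam', i_beam, kt2p[i_beam])
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                dij = min(kt2p[i], kt2p[j]) * delta_r2(objs[i], objs[j]) / R ** 2
                if dij < best[2]:
                    best = ('pair', (i, j), dij)
        if best[0] == 'beam':
            jets.append(objs.pop(best[1]))       # smallest distance is to the beam: final jet
        else:
            i, j = best[1]
            merged = recombine(objs[i], objs[j])
            objs = [o for k, o in enumerate(objs) if k not in (i, j)]
            objs.append(merged)
    return jets

parts = [{'pt': 50.0, 'y': 0.10, 'phi': 0.20},
         {'pt': 30.0, 'y': 0.15, 'phi': 0.25},
         {'pt': 1.0,  'y': 2.00, 'phi': 3.00}]
print(len(generalized_kt(parts, R=0.6, p=-1)))   # -> 2
```

changing p switches between the kt, cambridge-aachen and anti-kt clustering orders without touching the rest of the code, which is precisely why the three prescriptions are usually discussed as a single family.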
a very fast implementation ( clustering time ) of all the above sr algorithms is available ( fastjet ); it also includes an interface for the algorithms belonging to the ic class. another public code providing access to both sr and ic algorithms is spartyjet. the issue of ir-safety of a jet algorithm should be taken seriously, since multi-jet configurations are sensitive to it and will be far more widespread at the lhc than at previous colliders. in addition, without an ir-safe prescription it would not be possible to fully exploit the results provided by the theoretical community involved in nlo multi-leg calculations. several _ fast _ and _ safe _ algorithms ( sis-cone and the sr class ) are now available in public packages, but no definite advantage of any particular algorithm over the others has been found so far: the use of different prescriptions in physics analyses and a continuous cross-checking of the results is therefore recommended, especially for events with high jet multiplicity at the lhc.
s. d. ellis, z. kunszt and d. e. soper, phys. rev. d 40, 2188 (1989).
j. e. huth et al., fermilab-conf-90-249-e (1990).
s. d. ellis, j. huston, k. hatakeyama, p. loch and m. tonnesmann, prog. part. nucl. phys. 60, 484 (2008), and references therein.
c. buttar et al., arxiv:0803.0678 [hep-ph], and references therein.
g. arnison et al. [ua1 collaboration], phys. lett. b 123, 115 (1983).
g. arnison et al. [ua1 collaboration], phys. lett. b 132, 214 (1983).
m. h. seymour, nucl. phys. b 513, 269 (1998).
s. d. ellis, private communication to the opal collaboration; d. e. soper and h. c. yang, private communication to the opal collaboration; l. a. del pozo, university of cambridge phd thesis, ralt-002 (1993); r. akers et al. [opal collaboration], z. phys. c 63, 197 (1994).
g. c. blazey et al., arxiv:hep-ex/0005012.
g. p. salam and g. soyez, jhep 0705, 086 (2007).
s. catani, y. l. dokshitzer, m. h. seymour and b. r. webber, nucl. phys. b 406, 187 (1993).
s. d. ellis and d. e. soper, phys. rev. d 48, 3160 (1993).
y. l. dokshitzer, g. d. leder, s. moretti and b. r. webber, jhep 9708, 001 (1997).
m. wobisch and t. wengler, arxiv:hep-ph/9907280.
m. cacciari, g. p. salam and g. soyez, jhep 0804, 063 (2008).
m. cacciari and g. p. salam, phys. lett. b 641, 57 (2006); http://www.lpthe.jussieu.fr//fastjet/ ; http://www.pa.msu.edu//spartyjet/spartyjet.html . | i provide a very brief overview of recent developments in jet algorithms, mostly focusing on the issue of infrared-safety. |
aspirations for safety and security of human lives against crimes and natural disasters motivate us to establish smart monitoring systems to monitor surrounding environment . in thisregard , vision sensors are expected as powerful sensing components since they provide rich information about the outer world . indeed , visual monitoring systems have been already commoditized and are working in practice . typically , in the current systems , various decision - making and situation awareness processes are conducted at a monitoring center by human operator(s ) , and partial distributed computing at each sensor is , if at all , done independently of the other sensors . however , as the image stream increases , it is desired to distribute the entire process to each sensor while achieving total optimization through cooperation among sensors .distributed processing over the visual sensor networks is actively studied in recent years motivated by a variety of application scenarios . among them , several papers address optimal monitoring of the environment assuming mobility of the vision sensors , where it is required for the network to ensure the best view of a changing environment .the problem is related to coverage control , whose objective is to deploy mobile sensors efficiently in a distributed fashion . a typical approach to coverage controlis to employ the gradient descent algorithm for an appropriately designed aggregate objective function .the objective function is usually formulated by integrating the product of a sensing performance function of a point and a density function indicating the relative importance of the point .the approach is also applied to visual coverage in .the state of the art of coverage control is compactly summarized in , and a survey of related works in the computer vision society is found in . in this paper , we consider a visual coverage problem under the situation where vision sensors with controllable orientations are distributed over the 3-d space to monitor 2-d environment . in the case , the control variables i.e. the rotation matrices must be constrained on the lie group , which distinguishes the present paper from the works on 2-d coverage . on the other hand , consider situations similar to this paper . take game theoretic approaches which allow the network to achieve globally optimal coverage with high probability but instead the convergence speed tends to be slower than the standard gradient descent approach .in contrast , employs the gradient approach by introducing a local parameterization of the rotation matrix and regarding the problem as optimization on a vector space .this paper approaches the problem differently from .we directly formulate the problem as optimization on and apply the gradient descent algorithm on matrix manifolds .this approach will be shown to allow one to parametrize the control law for a variety of underactuations imposed by the hardware constraints .this paper also addresses density estimation from acquired data , which is investigated in for 2-d coverage .however , we need to take account of the following characteristics of vision sensors : ( i ) the sensing process inherently includes projection of 3-d world onto 2-d image , and ( ii ) explicit physical data is not provided . to reflect ( i ) , we incorporate the projection into the optimization problem on the embedding manifold of . the issue ( ii ) is addressed technologically , where we present the entire process including image processing and curve fitting techniques . 
finally , we demonstrate the utility of the present coverage control strategy through simulation of moving objects monitoring .let us consider a riemannian manifold whose tangent space at is denoted by , and the corresponding riemannian metric , an smooth inner product , defined over is denoted by .now , we introduce a smooth scalar field defined over the manifold , and the derivative of at an element in the direction , denoted by ] is defined by = \left.\frac{d f(\gamma(t))}{dt}\right|_{t = 0},\ ] ] where is a smooth curve such that .in particular , when is a linear manifold with , the derivative ] .then , a large imposes a heavy penalty on viewing distant area and a small a light penalty on it .in particular , when for some , ( [ eqn : perf1 ] ) is rewritten as once the density function is given , the goal is reduced to minimization of ( [ eqn : obj_single ] ) with ( [ eqn : perf2 ] ) under the restriction of . in order to solve the problem , this paper takes the gradient descent approach which is a standard approach to coverage control . for this purpose , it is convenient to define an extension such that if .we first extend the domain of in ( [ eqn : q_wl ] ) from to as then , the vector is not always on the environment when but the function in ( [ eqn : perf1 ] ) is well - defined even if the domain is altered from to .we thus denote the function with the domain by , and define the composite function relative to and .,width=226 ] we next focus on the term in ( [ eqn : obj_single ] ) and expand the domain of the composite function from to . here ,since is not always on , we need to design such that if . in this paper, we assign to a point the density of a point where the operations are illustrated in fig .[ fig : proj4 ] .accordingly , the density function is defined by remark that , differently from , the function is not naturally extended and the selection of is not unique . the motivation to choose ( [ eqn : perf6 ] ) will be clear in the next subsection .consequently , we define the extended objective function from to by using ( [ eqn : perf5 ] ) and ( [ eqn : perf6 ] ) .let us finally emphasize that holds for any .in the gradient descent approach , we update the rotation in the direction of }^{so(3 ) } h_i ] , given a rotation \in so(3) ] is given by }^{so(3 ) } h_i \!\!&\!\!=\!\!&\!\ !p_{{r_{wi}}[k]}\left({{\rm grad}}_{{r_{wi}}[k]}^{{{\mathbb r}}^{3\times 3 } } \bar h_i\right ) , \label{eqn : proj2}\\ { { \rm grad}}_{{r_{wi}}[k]}^{{{\mathbb r}}^{3\times 3 } } \bar h_i \!\!&\!\!=\!\!&\!\ ! \tilde \delta_i \eta_i^t({r_{wi}}[k ] ) p_{il}^t , \nonumber\end{aligned}\ ] ] where see appendix [ app:1 ] .namely , just running the dynamics leads to the set of critical points of .however , in practice , the vision data is usually obtained at discrete time instants and hence we approximate the continuous - time algorithm ( [ eqn : grad_descent1 ] ) by = { r_{wi}}[k]{\rm exp}\left({r_{wi}}^t[k ] \left(\alpha_k { { \rm grad}}_{{r_{wi}}[k]}^{so(3 ) } h_i\right)\right ) .\label{eqn : grad_descent2}\end{aligned}\ ] ] see for the details on the selection of . in the above discussion , we assume that the sensor can take full 3-d rotational motion . 
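in code, the update quoted above amounts to projecting the ambient gradient onto the tangent space of so(3) at the current rotation (i.e. taking the skew-symmetric part of R^T times the euclidean gradient) and retracting with the matrix exponential. the sketch below uses an illustrative objective and step size, since the paper's actual objective, density function and step-size rule cannot be recovered from this excerpt.

```python
import numpy as np
from scipy.linalg import expm

def skew(M):
    """projection of a 3x3 matrix onto the skew-symmetric matrices (so(3))."""
    return 0.5 * (M - M.T)

def so3_gradient_step(R, euclid_grad, alpha):
    """one projected-gradient step on SO(3): project the ambient R^{3x3}
    gradient onto the tangent space T_R SO(3) = { R @ Omega : Omega skew }
    and retract with the matrix exponential, mirroring the discrete update
    R[k+1] = R[k] exp( R[k]^T (alpha_k * grad) ) quoted in the text.
    use a negative alpha to descend a cost instead of ascending it."""
    Omega = skew(R.T @ euclid_grad)
    return R @ expm(alpha * Omega)

# toy check with an illustrative objective f(R) = trace(A^T R), whose
# euclidean gradient is simply A; the iterates stay on SO(3) and approach A.
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(A) < 0:
    A[:, 0] = -A[:, 0]                          # make A a proper rotation
R = np.eye(3)
for _ in range(200):
    R = so3_gradient_step(R, A, alpha=0.2)
print(np.allclose(R.T @ R, np.eye(3)))          # True: the update never leaves SO(3)
print(round(float(np.trace(A.T @ R)), 3))       # close to 3, the maximum over SO(3)
```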
however , the motion of many commoditized cameras is restricted by the actuator configurations .hereafter , we suppose that the sensor can be rotated around two axes ( ) and ( ) , where these vectors are defined in and assumed to be linearly independent of each other .these axes may depend on the rotation matrix .for example , in the case of pan - tilt ( pt ) cameras in figs .[ fig : pan ] and [ fig : tilt ] , which are typical commoditized cameras , the axis of the pan motion ( fig .[ fig : pan ] ) is fixed relative to , while that of the tilt motion ( fig .[ fig : tilt ] ) is fixed relative to the sensor frame .then , only one of the two axes depends on .note that even when there is only one axis around which the sensor can be rotated , the subsequent discussions are valid just letting .let us denote a normalized vector ( ) orthogonal to the -plane .then , the three vectors , and span .thus , any element of can be represented in the form of .now , we define a _ distribution _ assigning to the subspace whose dimension is .the distribution is clearly regular and hence induces a submanifold of , called integral manifold , such that its tangent space at is equal to ( [ eqn : submanifold ] ) .the manifold specifies orientations which the camera can take . since is a submanifold of , a strategy similar to theorem 1 is available and we have the following corollary .suppose that the objective function is formulated by ( [ eqn : obj_single_fict ] ) with ( [ eqn : perf5 ] ) , ( [ eqn : perf6 ] ) and ( [ eqn : mix_gauss ] ) .then , the gradient }^{{\mathcal s}_{ua } } h_i ] on is utilized as it is and we need only to project it through ( [ eqn : proj4 ] ) .also , the projection ( [ eqn : proj4 ] ) is successfully parameterized by the vectors and .in this section , we extend the result of the previous section to the multi - sensor case .the difference from the single sensor case stems from the overlaps of the fovs with the other sensors as illustrated in fig .[ fig : overlap ] . present sensing performance functions taking account of the overlaps and their gradient decent laws. however , in this paper , we present another simpler scheme to manage the overlap .let us first define the set of sensors capturing a point within the fov as where .we also suppose that , when has multiple elements for some , only the data of the sensor with the minimal sensing performance ( [ eqn : perf1 ] ) among sensors in is employed in higher - level decisions and recognitions .this motivates us to partition into the two region then , what pixel captures a point in is identified with what it captures a point outside of , whose cost is set as in the previous section , in the sense that both of the data are not useful at all .this is reflected by assigning to the pixels with .accordingly , we formulate the function to be minimized by as with remark that ( [ eqn : obj_multi ] ) differs from ( [ eqn : obj_single ] ) only in the set . strictly speaking , to compute the gradient of ( [ eqn : obj_multi ] ) , we need to expand from to . 
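returning to the underactuated case introduced earlier in this section: when only the rotations generated by the two actuated axes (called q1 and q2 below) are available, the full so(3) gradient direction has to be projected onto the span of those axes before the retraction. the parameterisation actually used in the paper (its eqn. proj4) is not reproduced in this excerpt, so the sketch below, which uses a plain orthogonal projection of the gradient's axis vector, is only one plausible implementation of the idea.

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """so(3) hat map: R^3 -> 3x3 skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(S):
    """inverse of hat for a skew-symmetric matrix."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def underactuated_step(R, euclid_grad, q1, q2, alpha):
    """project the full SO(3) gradient direction onto the subspace of
    rotations generated by the two (unit, linearly independent) axes q1, q2,
    then retract.  this is only a plausible realization of the idea; the
    projection actually used in the paper may differ."""
    omega = vee(0.5 * (R.T @ euclid_grad - euclid_grad.T @ R))   # full tangent direction
    Q = np.column_stack([q1, q2])
    omega_proj = Q @ np.linalg.lstsq(Q, omega, rcond=None)[0]    # projection onto span{q1, q2}
    return R @ expm(alpha * hat(omega_proj))

# pan-tilt example: pan about the world z-axis (expressed in the sensor frame)
# and tilt about the sensor's own x-axis
R = np.eye(3)
grad = np.random.default_rng(1).standard_normal((3, 3))          # illustrative gradient
pan_axis_body = R.T @ np.array([0.0, 0.0, 1.0])                  # world-fixed axis seen from the body frame
tilt_axis_body = np.array([1.0, 0.0, 0.0])                       # body-fixed axis
R_next = underactuated_step(R, grad, pan_axis_body, tilt_axis_body, alpha=0.1)
print(np.allclose(R_next.T @ R_next, np.eye(3)))                 # True
```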
for this purpose , it is sufficient to define from to a subset of .for example , an option is to define an extension of ( [ eqn : arxiv1 ] ) similarly to ( [ eqn : arxiv2 ] ) , and to let be the convex full of these points .however , at the time instants computing the gradient with ] instead of its extension .hence , the gradient is simply given as theorem 1 by just replacing by .note that the curve fitting process is run without taking account of whether or not , and is assigned to at the formulation of as in ( [ eqn : mix_gauss ] ) .this is because letting at the curve fitting stage would degrade the density estimation accuracy at around the boundary of .the remaining issue is efficient computation of the set .hereafter , we assume that each sensor acquires , i.e. and for all , and its index through ( all - to - all ) communication or with the help of a centralized computer .the computation under the limited communication will be mentioned at the end of this section .in addition , we suppose that every sensor stores the set for all which can be computed off - line since the sensor positions are fixed .s ( left ) and ( right).,width=151,height=94 ] s ( left ) and ( right).,width=151,height=94 ] s ( left ) and ( right).,width=151,height=94 ] s ( left ) and ( right).,width=151,height=94 ] s ( left ) and ( right).,width=151,height=94 ] s ( left ) and ( right).,width=151,height=94 ] then , the set is computed as in polynomial time with respect to .namely , checking for all provides .the computation process including image processing , curve fitting and gradient computation is successfully distributed to each sensor but the resulting fovs need to be shared among all sensors to compute .a way to implement the present scheme under limited communication is to restrict the fov of each sensor so that the fov can overlap with limited number of fovs of the other sensors .such constraints on the fovs are easily imposed by adding an artificial potential to the objective function but we leave the issue as a future work due to the page constraints .in this section , we demonstrate the utility of the present approach through simulation using 4 cameras with mm . here, we suppose that the view of the environment from with m and focal length mm is given as in fig .[ fig : initial_image ] , and that the mission space is equal to the fov corresponding to the image .since the codes of simulating the image acquisition and processing are never used in experiments , we simplify the process as follows , and demonstrate only the present coverage control scheme with the curve fitting process . before running the simulation , we compute the optical flows for the images of fig . 
[fig : initial_image ] as in fig .[ fig : of_image ] , and also fitting functions of the data as in fig .[ fig : curve_fit ] .the resulting data is uploaded at http://www.fl.ctrl.titech.ac.jp/paper/2014/data.wmv .then , we segment the image by the superlevel set of the function using a threshold , and assign a boolean variable to if is inside of the set and assign otherwise .the experimental system is now under construction , and the experimental verification of the total process will be conducted in a future work .note however that it is at least confirmed that the skipped image acquisition and processing can be implemented within several milliseconds in a real vision system ..,width=151,height=94 ] .,width=151,height=94 ] .,width=151 ] let the position vectors of cameras be selected as and the length of each side of the image plane be mm and mm .the other elements of are set as illustrated by the mark in fig .[ fig : static ] .the parameters in is set as , , and ) ] by in the sequel . substituting ( [ eqn : perf5 ] ) , ( [ eqn : perf6 ] ) and ( [ eqn : mix_gauss ] ) into ( [ eqn : obj_single_fict ] ), the objective function to be minimized is formulated as from lemma 1 and the fact that is a submanifold of , we first compute the gradient . from definition 1 and ( [ eqn : derivative_lin_m ] ) , we need to compute the directional derivative ] and ] . by calculation, we have hence , = \lim_{t\to 0}\frac{\tilde h^{l}_i(r + \xi t ) - \tilde h^{l}_i(r)}{t} ] .we first have the equations hence , we also have we also obtain where , and are introduced for notational simplicity . using , we can decompose low and high order terms in as ( [ eqn : app4 ] ) is also simplified as where substituting ( [ eqn : app4 ] ) and ( [ eqn : app6 ] ) into ( [ eqn : app1 ] ) yields let us now compute $ ] . substituting ( [ eqn : app7 ] ) into the definition of the directional derivative ( [ eqn : derivative_lin_m ] ) , i.e. \!\!&\!\!=\!\!&\!\!\lim_{t \to 0}\frac{\bar h^{lj}_i(r + \xi t ) - \bar h^{lj}_i(r)}{t } , \label{eqn : app8}\end{aligned}\ ] ] we have = \frac{\|rp_{il}\|_w^2({\bf e}_3^t r p_{il } ) } { ( { \bf e}_3^tr p_{il})^3 } \left(\frac{d h_i^{lj}}{dt}\right)(0 ) \nonumber\\ \hspace{-1.2 cm } & & \hspace{.2 cm } + 2h_i^{lj}(0 ) \frac { ( { \bf e}_3^trp_{il})p_{il}^tr^tw\xi p_{il } -\|rp_{il}\|_w^2{\bf e}_3^t\xi p_{il } } { ( { \bf e}_3^tr p_{il})^3 } \label{eqn : app9}\end{aligned}\ ] ] by calculation , the derivative is given by and hence substituting ( [ eqn : app10 ] ) and definitions of and into ( [ eqn : app9 ] ) yields = \bar \eta_{i}^{lj}(r ) \xi p_{il } \label{eqn : app13}\\ \hspace{-.8cm}&&\eta_{i}^{lj}(r ) = \frac{2e^{-\|b_{lj}\|^2_{\sigma_j } } } { \lambda_i({\bf e}_3^tr p_{il})^3 } \big(({\bf e}_3^tr p_{il})\xi^{lj}_i(r ) -\lambda_i\|rp_{il}\|_w^2{\bf e}_3^t \big ) \nonumber\\ \hspace{-.8cm}&&\xi^{lj}_i(r ) = \|rp_{il}\|_w^2b_{lj}^t \sigma_j ( p_{il } { \bf e}_3^t - \lambda_i i_3)r^t + \lambda_i p_{il}^tr^tw . \nonumber\end{aligned}\ ] ] note that is constant and is independent of the matrix . from ( [ eqn : obj_single_fict3 ] ) , ( [ eqn : modify2 ] ) and ( [ eqn : app13 ] ) , we obtain = \tilde \delta_i \eta_i(r ) \xi p_{il } = { { \rm tr}}\big(\xi^t\big(\tilde \delta_i \eta_i^t(r ) p_{il}^t \big)\big ) , \nonumber\\ \hspace{-.8cm}&&\eta_i(r ) = \sum_{l \in \tilde { { \cal l}}_i^c(r)}w_{il } \bar{\phi } \eta_i^l+\!\ ! \sum_{l \in \tilde { { \cal l}}_i(r)}\!\ ! 
w_{il } \big ( \bar{\psi } \eta_i^l - \sum_{j=1}^m \alpha_j \eta_{i}^{lj } \big ) .\nonumber\end{aligned}\ ] ] from definition [ def : grad_m ] , we have .combining it with lemma 1 and ( [ eqn : proj_so(3 ) ] ) completes the proof .b. song , c. ding , a. kamal , j. a. farrell and a. roy - chowdhury , `` distributed camera networks : integrated sensing and analysis for wide area scene understanding , '' _ ieee signal processing magazine _ , vol .3 , pp . 2031 , 2011 .t. hatanaka and m. fujita , `` cooperative estimation of averaged 3d moving target object poses via networked visual motion observers , '' _ ieee trans .automatic control _3 , pp . 623638 , 2013 .b. m. schwager , b. j. julian , m. angermann and d. rus , `` eyes in the sky : decentralized control for the deployment of robotic camera networks , '' _ proc . of the ieee _9 , pp . 15411641 , 2011 .t. hatanaka , y. wasa and m. fujita game theoretic cooperative control of ptz visual sensor networks for environmental change monitoring _ proc . of 52nd ieee conf . on decision and control_ , to appear , 2013 j. cortes , s. martinez , and f. bullo , `` spatially - distributed coverage optimization and control with limited - range interactions , '' esaim : control , optimisation & calculus of variations , vol .691719 , 2005 . | this paper investigates coverage control for visual sensor networks based on gradient descent techniques on matrix manifolds . we consider the scenario that networked vision sensors with controllable orientations are distributed over 3-d space to monitor 2-d environment . then , the decision variable must be constrained on the lie group . the contribution of this paper is two folds . the first one is technical , namely we formulate the coverage problem as an optimization problem on without introducing local parameterization like eular angles and directly apply the gradient descent algorithm on the manifold . the second technological contribution is to present not only the coverage control scheme but also the density estimation process including image processing and curve fitting while exemplifying its effectiveness through simulation of moving objects monitoring . |
paraffin films and other surface coatings play an integral part in several emerging technologies that employ vapors of alkali - metal atoms , including atomic magnetometers , clocks , and quantum and nonlinear optical devices .paraffin was first shown to preserve the spin polarization of alkali atoms in 1958 by robinson , ensberg , and dehmelt and was first studied extensively by bouchiat and brossel. it has been investigated by several others in the decades since, but the details of its operation as an anti - relaxation coating remain poorly understood .paraffin coatings enable narrow zeeman resonance linewidths , and they have recently been the subject of renewed interest due to advances in the technology of alkali - metal magnetometers that have led to the development of detectors with sensitivity comparable to or better than superconducting quantum interference devices ( squids ) .modern magnetometers have enabled significant advances in low - magnetic - field nuclear magnetic resonance ( nmr), magnetic resonance imaging ( mri), and medical imaging, as well as paleomagnetism, explosives detection, and ultra - sensitive tests of fundamental physics. paraffin - coated cells also feature narrow hyperfine resonance linewidths and have been explored in the context of secondary frequency standards; they have also been employed in experiments involving spin squeezing, quantum memory, and `` slow light.'' while cells with diameters from a few to tens of centimeters are typically employed , miniature millimeter - sized cells with paraffin coating have also been explored. in addition , similar coatings with silicon head groups have been used in magneto - optical traps, hollow photonic fibers, and noble - gas - atom optical pumping cells. alkali atoms in the vapor phase depolarize upon contact with the bare surface of a glass container , limiting the coherence lifetime of the spin ensemble . in order to prevent such depolarization, vapor cells typically include either a buffer gas or an anti - relaxation surface coating .the inclusion of up to several atmospheres of a chemically inert buffer gas slows diffusion of alkali atoms to the cell walls , but there are several advantages to the use of a surface coating , including lower laser power requirements , larger optical signals , reduced influence of magnetic - field gradients , and a smaller collision rate with other atoms and molecules in the cell .anti - relaxation coatings can allow an alkali atom to experience thousands of collisions with the walls of the cell without depolarizing , and paraffin in particular has been demonstrated to allow up to 10,000 bounces. as the size of the vapor cell decreases , the surface - area - to - volume ratio increases , requiring improvement in the quality of the surface coating to compensate for the resulting increase in the rate of collisions with the surface , in order to maintain the same spin - coherence lifetime . in miniature alkali vapor cells with volume 1 - 10 mm or less , the use of surface coatings enables more sensitive magnetometers and clocks than the use of buffer gas , assuming that a coating with appropriate spin - preservation properties can be employed. 
a paraffin coating significantly reduces the probability of spin destruction during a collision with the surface because it contains no free electron spins and it features a lower adsorption energy than the bare glass , thus reducing the residence time of an adsorbed alkali atom on the surface of the paraffin relative to the glass surface .the residence time of an atom at a surface site can be expressed as , where is the adsorption energy , is the boltzmann constant , and is the temperature. is the time constant for a perfectly elastic collision and therefore gives the high - temperature limit where the thermal energy .minimal is desirable because an adsorbed alkali atom dephases from the ensemble of atoms in the bulk of the cell , as a result of experiencing both a different magnetic field than in the cell interior and a fluctuating magnetic field generated by the hydrogen nuclei of the paraffin material. the adsorption energy for alkali atoms on a paraffin surface is small , roughly 0.06 - 0.1 ev, and assuming s ( the period of a typical molecular vibration ) gives a residence time of approximately 0.1 ns at room temperature .the performance of paraffin [ c coatings quickly degrades at temperatures above 60 - 80, but operation at higher temperature is beneficial for some devices because it increases the saturated vapor pressure of the alkali atoms , and thus the atomic density . for most types of paraffin , such as tetracontane ( =40 ), the critical temperature corresponds to the melting point, but for longer - chain polyethylene coatings, which have a much higher fusion temperature of 130 , the mechanism for the decreased performance above 60 is not fully understood .recent magnetometers have achieved ultra - high sensitivity better than 1 ft/ to near - dc and radio - frequency magnetic fields by operating at very high vapor density , but the high operating temperatures of these magnetometers ( for cesium vapor and for potassium vapor ) prevent the use of paraffin coatings .in addition , paraffin does not survive the elevated temperatures required by the anodic bonding process used in the production of microfabricated vapor cells .surface coatings with superior temperature stability are therefore required for use with high - density or microfabricated alkali vapor cells .high - temperature coatings also allow experimentation with potassium and sodium vapor , which have lower vapor pressures compared to rubidium and cesium at a given temperature .recent efforts at developing alternatives to paraffin have mainly focused on certain silane coatings that resemble paraffin , containing a long chain of hydrocarbons but also a silicon head group that chemically binds to the glass surface. such materials do not melt and remain attached to the glass surface at relatively high temperatures , enabling them to function as anti - relaxation coatings at much higher temperatures than paraffin . in particular , a multilayer coating of octadecyltrichlorosilane [ ots , ch(ch) ] has been observed to allow from hundreds up to 2100 bounces with the cell walls and can operate in the presence of potassium and rubidium vapor up to about 170. 
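the quoted residence times follow directly from the arrhenius-type expression tau = tau_0 * exp(E_a / (k_B T)); the prefactor is not given numerically in this excerpt, so the sketch below assumes tau_0 = 1e-12 s (a typical molecular-vibration period), which reproduces the stated ~0.1 ns at room temperature to within a factor of a few.

```python
import numpy as np

kB_eV = 8.617333e-5          # boltzmann constant in eV/K

def residence_time(E_a_eV, T_K, tau0=1e-12):
    """adsorption residence time tau = tau0 * exp(E_a / (kB * T))."""
    return tau0 * np.exp(E_a_eV / (kB_eV * T_K))

for E_a in (0.06, 0.1):                       # adsorption energies quoted for paraffin
    for T_C in (20, 60, 150):                 # room temperature and typical operating points
        tau = residence_time(E_a, T_C + 273.15)
        print(f"E_a = {E_a:4.2f} eV, T = {T_C:3d} C  ->  tau = {tau:.2e} s")
```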
however , the quality of such coatings with respect to preserving alkali polarization is highly variable , even between cells coated in the same batch , and remains significantly worse than that achievable with paraffin .in addition , it was shown recently that an alkene coating , which resembles paraffin except with an unsaturated c = c double bond , can allow up to two orders of magnitude more bounces than paraffin , but only to temperatures of 33, and properties such as stability of the coating remain to be studied .paraffin thus remains the most widely used anti - relaxation coating for alkali spins . in order to facilitate the design and development of new coating materials for alkali - metal cells and hollow fibers , it is therefore necessary to develop a more detailed understanding of the interactions between alkali atoms and the paraffin surface , many aspects of which are not yet fully understood .indeed , the production of high - quality paraffin cells remains more of an art than a science , with little understanding of why only certain coating procedures work or the reasons that variations in those procedures affect the anti - relaxation quality of the coating .as an example , paramagnetic impurities could couple to and depolarize the alkali spins , so some researchers observe that purification of the paraffin by distillation is necessary to produce a high - quality coating; however , others have been able to use paraffins as received. in addition , a so - called `` ripening '' phase is required , which involves annealing of the paraffin - coated cell at 50 - 60 in the presence of the alkali metal for an extended period of time ( typically hours to days ) before the cell may be used , although the specific processes that occur during ripening remain unknown . much of the behavior of the paraffin coating during operation remains equally mysterious .for example , the paraffin coating introduces a small shift in the hyperfine frequency of the atomic ground state , which varies between cells and must be accounted for in the use of alkali vapor cells as frequency standards. in addition , the measured vapor pressure in a paraffin - coated cell is smaller than the expected saturated vapor pressure at the temperature of the cell , implying that alkali atoms can be absorbed into , and thus can diffuse within , the bulk volume of the paraffin. it has been observed that alkali vapor density increases significantly when a paraffin - coated cell is exposed to light ( particularly uv and near - uv light ) due to the light - induced atomic desorption ( liad ) effect, which causes absorbed atoms to be ejected from the paraffin coating .recently , liad effects on alkali spin relaxation have been investigated, with a focus on non - thermal control of alkali vapor density. the exact mechanisms of liad in paraffin are not yet understood ; however , it is clear that a combination of surface processes and light - enhanced diffusion of alkali atoms within the bulk of the coating are important .liad has been used with silane coatings to load photonic fibers, and it has been used with bare pyrex surfaces to load a chip - scale bose - einstein condensate ( bec), although in the latter case the desorption efficiency might be increased by orders of magnitude with a paraffin or silane coating . any coating used for liad loading of an ultra - cold atom chip would require compatibility with ultra - high vacuum . 
similarly , enhanced alkali - atom density is also observed upon the application of large electric fields ( 1 - 8 kv / cm ) across paraffin - coated cells, although again the exact mechanism remains unknown . in this work we use modern surface science methods as the basis for an investigation of the interaction of alkali - metal atoms with various paraffin materials . unlike most previous studies , which primarily measured properties of the alkali vapor such as density and relaxation time, we instead observe the properties of the coating itself .fourier transform infrared spectroscopy ( ftir ) and differential scanning calorimetry ( dsc ) were used to understand bulk properties of the coating materials , while atomic force microscopy ( afm ) , near edge x - ray absorption fine - structure ( nexafs ) , and x - ray photoelectron spectroscopy ( xps ) were used to obtain information about the coated surface and its interaction with the alkali atoms .these and similar techniques have been employed previously to study paraffins, but not in the context of their use with alkali atoms . in addition , we compared the liad yields of several different paraffin materials .an array of techniques enables a more complete characterization of the atom - surface interactions than can otherwise be achieved .the work described here is intended to demonstrate the power of these methods to thoroughly characterize and help understand the behavior of paraffin and other coating materials , in order to guide the creation of new coatings to enhance the performance of rapidly developing technologies such as microfabricated magnetometers and clocks , nonlinear and quantum optical devices , and portable liad - loaded devices .we selected paraffin waxes that have been successfully implemented as anti - relaxation coatings in alkali vapor cells , including the -alkanes eicosane [ ch(ch) ] , dotriacontane [ ch(ch) ] , and tetracontane [ ch(ch) ] , as well as long - chain polyethylene [ ch(ch) where is large and varies between molecules ] .we also considered a proprietary paraffin from dr .mikhail balabas that is used in the manufacture of magnetometer cells , which we refer to here as pwmb ; this wax is obtained by fractionation of polyethelyene wax at 220. finally , we considered the commercially available waxes fr-130 parowax ( from the luxco wax company ) and paraflint , which are both expected to contain a mixture of various -alkanes . unless otherwise noted , samples were coated following the typical procedures used in the manufacture of alkali vapor cells , which involve evaporating the material at high temperature in vacuum and allowing it to condense on the inner surface of the cell .fourier transform infrared spectroscopy ( ftir ) analysis was performed on several types of paraffin in order to identify the general functional groups present in the waxes .ftir is used extensively for structure determination and chemical analysis because it gives bond - specific information. transitions due to specific bending and stretching modes of bonds absorb infrared light at specific frequencies allowing for identification of the bonds present .spectra were obtained on a varian 640-ir system using a room - temperature dtgs ( deuterated triglycine sulfate ) detector .the aperture of the instrument was completely open , and a nominal 4 spectral resolution was used .the strong optical absorption of the si - o - si structure makes glass substrates opaque to infrared analysis below 2000 . 
to widen the range of ftir characterization , native oxide terminated , 500 m thick , ( 111)-oriented, low - doped ( phosphorus , 74 - 86 cm resistivity ) silicon wafers ( silicon quest international ) were instead used as substrates for paraffin coating .the silicon phonon absorption occurs below 600 and therefore allows for a more thorough characterization of the various waxes .in addition , the surface of the silicon wafer , as received , contains a very thin layer of oxidized silicon , which is assumed to be similar enough to the surface of glass cells that it does not disturb the behavior of the wax .a si thickness of 500 m or greater is necessary to achieve spectra with 4 resolution without complication due to interference from ghost zero - path difference ( etaloning effect ) peaks .high - resistivity si improves signal to noise by minimizing free - carrier absorption of the ir light .uncoated si samples were first situated in the beam path with the surface plane perpendicular to the light propagation direction , and 256 background spectra of the uncoated silicon wafers were obtained in transmission mode before the waxes were rub - coated onto the warmed surface , giving a layer with thickness of hundreds of nm to several m . to minimize baseline drift ,256 scans were collected in transmission mode under the exact same sample placement as for background spectra collection .absorption and/or si phonon absorption at 600 .,width=302 ] the transmisison spectra for tetracontane , fr-130 , and pwmb are shown in fig .[ fig_ftir ] .the spectrum for tetracontane is in agreement with literature reports : ch and ch stretching modes at 2850 , 2920 , and 2954 , an hch scissor at 1474 , a ch asymmetric bending mode at 1464 , a u+w ( methyl symmetric and methylene wagging ) mode at 1370 , and a methylene rocking doublet at 719 and 729 . similar ftir spectra were obtained for the fr-130 and pwmb paraffins .the peaks contained within broken - line boxes are due to environmental changes that occurred between background and sample collection .the peaks for co at 2300 and 676 may be positive or negative depending on the extent of bench purging with n .the broad peak at 600 is due to the si bulk crystal which can be either positive or negative due to slight changes in sample temperature or angle of incidence within the bench .the periodic undulations in the pwmb spectrum are due to the thickness of the film creating an etalon effect .these results show no observable carbon - carbon double bonds within the sensitivity of the experiment ; specifically , there is no detectable c = c double bond stretching in the range of 1600 - 1700 , and there is no detectable = c - h stretching mode within the range of 3050 - 3100 .differential scanning calorimetry ( dsc ) was used to assess phase transitions associated with crystallinity of the bulk paraffins .dsc measures differences in heat flow into a sample and a reference as a function of temperature as the two are subjected to controlled temperature changes .it can be used to characterize crystallization behavior and assess sample purity by analysis of the heat flow behavior near phase transitions. 
the dsc scans were obtained on a ta q200 - 1305 advantage instrument .wax samples of 1 - 4 mg were pressed into aluminum pans and the reference was an empty aluminum pan .measurements were made in a nitrogen atmosphere .the typical scanning conditions were temperature scanning at a rate of 10 / min over a range of 10 to 90 - 140 , depending on the melting point of the wax .figure [ fig_dsc1 ] shows the observations for several waxes expected to be crystalline at typical cell operating temperatures below 60 .pure linear - chain alkanes ( including eicosane , dotriacontane , and tetracontane ) all display relatively sharp peaks indicative of the crystalline phase transitions expected for pure -alkanes .purified tetracontane displayed melting and fusion peaks at 79 , as well as a phase transition between orthorhombic and hexagonal crystal structure at 62 - 63. dotriacontane similarly showed distinct melting / fusion and phase transition peaks .the long - chain polyethylene , while not completely monodisperse , showed a relatively sharp melting point extrapolated to 122 and a fusion peak at 118 , as well as a weak endothermic process during heating at 100 with a corresponding exothermic process during cooling at 80 .-alkanes ; these waxes are not expected to be crystalline at typical operating temperatures.,width=302 ] for comparison , fig .[ fig_dsc2 ] displays the dsc scans for several waxes that apparently do not exist in a crystalline state at typical operating temperatures .the fr-130 parowax displayed a relatively sharp melting point at 48 - 51 and phase transition between 25 - 35 , implying that this wax is fairly monodisperse. in contrast , the pwmb wax and paraflint do not display sharp melting or fusion peaks , indicating a lack of homogeneous bulk crystallinity .instead , they feature drawn - out melting and fusion profiles that are consistent with films containing a mixture of various saturated -alkanes; it is possible that these waxes contain branched alkanes as well . in particular , maximum operational temperatures of pwmb wax around 60 are well within the material s phase transition .these results indicate that crystallinity in the bulk is not necessary for an effective anti - relaxation coating , consistent with observations that some working coatings are often partially melted at standard operating temperatures .atomic force microscopy ( afm ) of the surface of coated operational cells and coated silicon surfaces was performed to investigate the surface topography of the paraffins .afm has been employed in the study of organic systems , especially polymeric films on solid inorganic substrates. measurements were taken using a di dimension 3100 scanning probe microscope using silicon probes with a nominal tip radius of 7 nm .images taken of polyethylene are shown in fig .[ fig_afm1 ] .we observe that surfaces of polyethylene and tetracontane melted onto the silicon substrate showed periodic ridges indicative of crystalline surface structure , consistent with the dsc results . 
in contrast , we show an image in fig .[ fig_afm2 ] of a pwmb - coated glass surface after exposure to rubidium atoms ; this surface was part of an operational magnetometer cell , coated in the standard manner described above , that was broken open for this experiment .this coating does not display any crystalline structure , again consistent with the dsc observations , and it features structures that may indicate either the presence of rubidium clusters on the surface or artifacts of such clusters having reacted upon exposure to air .similar clusters have been observed on silane coatings and are speculated to represent regions of the film with an increased probability of causing alkali depolarization . near edge x - ray absorption fine structure ( nexafs )spectroscopy was used to characterize the molecular bonds present in the paraffin .the nexafs experiments were carried out on the u7a nist / dow materials characterization end station at the national synchrotron light source at brookhaven national laboratory .the x - ray beam was elliptically polarized ( polarization factor of 0.85 ) , with the electric field vector dominantly in the plane of the storage ring .the photon flux was 10 s at a typical storage ring current of 500 ma .a toroidal spherical grating monochromator was used to obtain monochromatic soft x - rays at an energy resolution of 0.2 ev .nexafs spectra were acquired for incident photon energy in the range 225 - 330 ev , which includes the carbon k edge .each measurement was taken on a fresh spot of the sample in order to minimize possible beam damage effects .the partial - electron - yield ( pey ) signal was collected using a channeltron electron multiplier with an adjustable entrance grid bias ( egb ) .all the data reported here are for a grid bias of -150 v. the channeltron pey detector was positioned at an angle of 45 with respect to the incoming x - ray beam and off the equatorial plane of the sample chamber . to eliminate the effect of incident beam intensity fluctuations and monochromator absorption features ,the pey signals were normalized by the incident beam intensity obtained from the photo yield of a clean gold grid located along the path of the x - ray beam .a linear pre - edge baseline was subtracted from the normalized spectra , and the edge jump was arbitrarily set to unity at 320 ev , far above the carbon k - edge , a procedure that enabled comparison of different nexafs spectra for the same number of carbon atoms .energy calibration was performed using a highly oriented pyrolytic graphite ( hopg ) reference sample .the hopg 1s to transition was assigned an energy of 285.3 ev according to the literature value .the simultaneous measurement of a carbon grid ( with a 1s to transition of 285 ev ) allowed the calibration of the photon energy with respect to the hopg sample .charge compensation was carried out by directing low - energy electrons from an electron gun onto the sample surface .spectra are shown in fig .[ fig_nexafs1](a ) for tetracontane melted onto silicon , with one sample having been slowly heated to 220 over the course of about an hour prior to the measurement , featuring characteristic peaks due to a c - h bond near 288 ev and a c c bond near 292.3 ev , in agreement with previous observations. the spectrum for polyethylene deposited on glass , shown in fig .[ fig_nexafs1](b ) , displays these features as well as a small peak near 285.3 ev that is characteristic of a c = c double bond. 
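( as a minimal sketch of the normalization steps described above -- i0 division , linear pre - edge baseline subtraction and unit edge jump at 320 ev -- the snippet below processes a single pey spectrum with numpy ; the pre - edge window and all array names are illustrative assumptions , not values taken from the beamline data . )

```python
import numpy as np

def normalize_nexafs(energy, pey, i0, pre_lo=270.0, pre_hi=280.0, post_ev=320.0):
    """Normalize a partial-electron-yield spectrum (illustrative sketch only)."""
    energy = np.asarray(energy, dtype=float)
    spec = np.asarray(pey, dtype=float) / np.asarray(i0, dtype=float)  # remove beam fluctuations

    pre = (energy >= pre_lo) & (energy <= pre_hi)        # assumed pre-edge fitting window
    slope, intercept = np.polyfit(energy[pre], spec[pre], 1)
    spec = spec - (slope * energy + intercept)           # linear pre-edge baseline subtraction

    jump = np.interp(post_ev, energy, spec)              # edge jump far above the carbon K-edge
    return spec / jump                                   # unit edge jump at `post_ev`
```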
another experiment consisted of measuring the spectra from pwmb - coated glass slides , which were initially contained within glass cells ; the standard coating procedure described above deposited pwmb on the slides in addition to the inside surface of the cells . in order to avoid exposure to air, the cells were broken open inside a glovebag containing an argon atmosphere , and the slides were then inserted into the load lock of the end station .after initial measurements were taken , the samples were exposed to air for several hours before conducting additional measurements .figure [ fig_nexafs2](a ) shows spectra of the surface of a pwmb sample that had not been exposed to alkali vapor , with the spectra appearing qualitatively similar both before and after exposure to air .in addition to the c - h and c - c peaks , there is a peak at 285.3 ev due to a c = c double bond .these same features are evident in the spectra of a pwmb sample that had been exposed to cesium vapor for extended periods of time , shown in fig .[ fig_nexafs2](b ) .the c = c peak is notably larger for the cesium - exposed sample before exposure to air , and is indeed larger than for the sample that had not been exposed to cesium ; the reason for this is unknown but may be related to the ripening process , with the decrease in height after exposure to air likely due to oxidation , and additional measurements will be necessary to determine if the effect is repeatable .the peaks at approximately 244 and 249 ev are assigned as the third harmonics of the cesium m5 and m4 edges , respectively. the c = c peak is also seen in the spectrum of the fractionated pwmb material melted directly onto glass ( not shown ) . finally , x - ray photoelectron spectroscopy ( xps ) was employed to determine the elemental composition of paraffin samples as well as to identify the chemical states of the elements present .xps is well - suited to a study of atomic vapor cells as it may be used to examine the nature of the alkali - paraffin interactions present in the cell .in addition , angle - resolved xps ( arxps ) offers a means to investigate the distribution of alkali metal in the coating as a function of depth. xp spectra were acquired on a vg scientific escalab2 spectrometer with al k radiation ( = 1486.6 ev ) and a base operating pressure of approximately 10 torr .survey scans were collected with a 1-ev step size , 100-ms dwell time , and 100-ev pass energy .higher - resolution scans collected with a 0.05-ev step size , 100-ms dwell time , and 20-ev pass energy were obtained for the c 1 , rb 3 and 3/3 regions .curve fitting of the core - level xps lines was carried out with casaxps software using a nonlinear shirley background subtraction and gaussian - lorentzian product function .tetracontane ( sigma - aldrich ; .5 purity ) was coated onto piranha - cleaned ( 1:3 = 30 h : h ; ca .80 ; 1 h ) si(100 ) substrates through immersion of 1- wafers into melted paraffin wax at approximately 80 to give a visibly thick coating ( ca .100 m ) .coated samples were placed in a cylindrical pyrex cell with a rb source at one end ( residual pressure torr ) .exposure to rb vapor was accomplished by heating the sealed pyrex cell to about 60 for 48 hours . 
after trapping excess rb vapor onto a room - temperature cold spot on the cell for 12 hours , the cell was broken open in air , and the samples were immediately transferred into the xps antechamber . light exposure was minimized throughout the process in order to prevent light - induced desorption of the rb atoms from the paraffin film . xp spectra of both unexposed tetracontane and rb - exposed tetracontane were collected . comparison of the survey scans ( fig . [ fig_xps1 ] ) clearly shows the appearance of rb 3 ( 110 ev ) and 3/3 ( 247/238 ev ) signals in the rb - exposed sample . notably , curve fitting of the c 1 ( 284.5 ev ) signal ( fig . [ fig_xps2 ] ) indicates the presence of a single carbon species and suggests that there are no rb - c bonds in this sample . rb on the surface is thus merely physisorbed and not chemisorbed . analogous to the rb - exposed tetracontane , samples of cs - exposed pwmb clearly show the appearance of cs signals in the xp spectrum , as shown in fig . [ fig_xps3 ] . however , in contrast to the tetracontane samples , the cs - exposed pwmb carbon peak exhibits an asymmetry that suggests the presence of more than one carbon species . indeed , spectral deconvolution of the carbon region ( fig . [ fig_xps4 ] ) indicates two components , with one at lower binding energy ( ca . 281.5 ev ) than the expected c - c / c - h signal at 284.5 ev . this signal is reproducible at different positions on the sample and does not appear in the unexposed pwmb sample , so it is probably not a result of surface charging effects . the lower binding energy component may be assigned to cs - bound carbon and represents an alkali - carbon interaction not present in the rb - exposed tetracontane . we speculate that this bound state arises from a reaction between cs and c = c double bonds present in pwmb ; such a pathway not only agrees with the observed c = c nexafs signal but also points to the importance of c = c double bonds in understanding the alkali - paraffin interactions . in addition to the standard analytical techniques for studying surfaces and bulk materials , such as those described above , alkali vapor cell coatings have in the past typically been studied using more specialized physical methods , such as observations of zeeman and hyperfine lifetimes and frequency shifts . such studies permit direct comparison of the suitability of different coating materials for use in clocks and magnetometers . along these lines , we describe here an experiment that allows comparison of the effect of desorbing light on alkali atoms absorbed into different types of paraffin coatings . the results of such an experiment , combined with the information provided by the other tests described above , can guide the design of coating materials for liad - loaded atomic devices . for this experiment , we use two different cell geometries . the first geometry features lockable stems to prevent ejected atoms from leaving the main cell body . the lock is implemented using a sliding glass `` bullet '' and permits large vapor density changes to be maintained after repeated exposures to desorbing light ; for more details on the experimental setup and photographs of cells with this lockable stem design , see ref .
.all such cells are spherical with diameter of cm , and the stems remain locked at all times .the second geometry features cylindrical cells , approximately 9 cm long and 2 cm in diameter , without stem locks .all cells of either geometry contain rubidium in natural abundance .for the first geometry , the atomic density is determined by monitoring the absorption of a weak ( 1 ) probe laser beam tuned to the transitions of the line . for the second geometry ,the density is determined by quickly sweeping the probe beam across all transitions of the and lines and fitting the measured transmission spectrum at each point in time . in order to induce atomic desorption , the cells are fully illuminated by off - resonant light produced by a 405 nm ( blue ) laser diode .cells are characterized by the liad yield , where is the maximum rb vapor density measured after exposure to desorbing light , and is the initial density prior to illumination . [cols="<,^,^,^",options="header " , ] results of the liad experiment are summarized in table [ table_liad ] , with an error in the determination of of approximately 15% for each cell .the preparation temperature is given at which each material was coated on the inside surface of its cell .for the data shown , desorbing light intensity of 5 mw/ was used with the cells of the first geometry , and intensity of 2 mw/ was used with cells of the second geometry .in addition to paraffin materials , several alkene materials are also considered .the materials labeled `` alkene 80 '' and `` alkene 110 '' are produced by fractionation of alpha olefin fraction c20 - 24 from chevron philips chemical company , which contains a mixture of straight - chain alkenes , with primarily between 18 and 26 carbon atoms ; the number in the material label refers to the temperature of distillation .alkene 80 is the highly effective anti - relaxation coating material described in ref . .enet4160 from gelest , inc .contains 1-triacontene and small amounts of other alkenes with between 26 and 54 carbon atoms .the liad yield is highly dependent on cell geometry and can not be directly compared between samples of different geometries , even when accounting for differing intensity of the desorbing light .samples of pwmb and deuterated polyethylene show appreciable increase in rubidium density after exposure to desorbing light for both cell geometries , as does alkene 110 for the second geometry .it is interesting to note that the nexafs spectra of both pwmb and polyethylene show evidence of unsaturated c = c bonds .although alkene 80 and enet4160 do not display any measurable liad effect , these results nevertheless suggest that covalently bound alkali atoms could act in part as a reservoir for the liad effect .tetracontane also shows a large liad yield , but only for the second experiment .for both cell geometries , deuterated polyethylene gives a larger liad yield when the material is deposited at higher temperature , and it also presents very different dynamical behavior depending on the preparation temperature , as shown in fig .[ fig_liad ] ; after the desorbing light turns off , the vapor density for the sample prepared at 360 is observed to be less than the initial density due to depletion of atoms from the coating. 
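( a rough numerical illustration of the density and yield estimates discussed above : the beer - lambert sketch below converts a transmission trace into a vapor density , assuming a known effective absorption cross section and path length , and computes a relative density increase from the resulting time series . the cross section , the yield convention ( relative increase over the initial density ) and the variable names are assumptions for illustration , not quantities taken from the cited setup . )

```python
import numpy as np

def vapor_density(transmission, sigma_eff_cm2, path_length_cm):
    # Beer-Lambert estimate: I/I0 = exp(-n * sigma * L)  =>  n = -ln(I/I0) / (sigma * L)
    return -np.log(np.asarray(transmission, dtype=float)) / (sigma_eff_cm2 * path_length_cm)

def relative_density_increase(density_vs_time, n_initial):
    # assumed yield convention: (n_max - n_0) / n_0 after the desorbing light is switched on
    return (np.max(density_vs_time) - n_initial) / n_initial

# hypothetical usage (all inputs are placeholders):
# n_t = vapor_density(transmission_trace, sigma_eff_cm2, path_length_cm)
# print(relative_density_increase(n_t, n_initial=n_t[0]))
```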
these results are preliminary and motivate a more comprehensive study of cell - to - cell variation and the effects of preparation conditions ( including particularly temperature ) , material deuteration , unsaturated bonds , and other coating parameters .= 0 s and turned off at =45 s.,width=302 ]these measurements provide important details about the properties of effective anti - relaxation coatings and demonstrate the utility of applying surface - science methods .for instance , paraffin does not need to be in crystalline form , as evidenced by the observations of the pwmb wax using dsc and afm .particularly interesting is the detection of c = c double bonds in the pwmb wax using nexafs , a surface technique , while transmission ftir , a bulk measurement technique , does not show any c = c bonds above the detection limit of approximately 5% ; the discrepancy may be due to lack of sensitivity in the ftir measurement , but it is also possible that the molecules with unsaturated bonds represent impurities expelled to the surface of the material and not extant in its bulk .it is unknown if the double bonds are present in the raw material or if they form during fractionation when heated to 220 .alkanes are known to decompose at high temperatures, potentially leading to the presence of unsaturated alkenes in the deposited pwmb surface coating , although the nexafs spectrum of tetracontane shown in fig .[ fig_nexafs1 ] does not display evidence of double - bond formation after heating to 220 .decomposition could explain the observed increase in liad yield for polyethylene materials deposited at higher temperatures .the alkene coating described in ref . also contains double bonds and allows approximately 10 bounces with the surface , significantly more than any other known material .the presence of c = c double bonds in effective anti - relaxation coatings is unexpected because unsaturated bonds increase the polarizability of the surface , and effective coatings have long been assumed to require low polarizability to enable short alkali atom residence time , although these recent results indicate that this assumption may be mistaken .the unsaturated double - bond sites may also react with alkali atoms to form alkali - carbon bonds within the coating material , as detected in the pwmb samples using xps .in fact , it is likely that passivation of the double - bond sites near the surface of the material is a necessary part of the ripening process .we may also compare the dsc observations of the thermal properties of the materials with prior measurements of the temperature dependence of the wall shift in paraffin - coated cells of the rubidium 0 - 0 hyperfine frequency , which is used for frequency reference in atomic clocks .the hyperfine frequency in a cell coated with tetracontane exhibits hysteresis: the frequency changes little until the solid tetracontane is heated to about 80 , at which point it decreases significantly as the wax melts , and the frequency does not increase again upon cooling until the liquid tetracontane reaches a temperature 1 colder than this and resolidifies .this behavior agrees well with the measured temperatures of the melting and fusion peaks of tetracontane .in addition , the temperature dependence of the hyperfine frequency in cells coated with paraflint is observed to change sign at 72, which correlates with the endothermic peak measured with dsc .our observations suggest several avenues for further research .specific surface characteristics such as roughness and 
fractional coverage can be studied in order to optimize the composition of coating materials and the deposition procedure .the absence or presence of alkali - carbon bonds has significant implications with regard to the mechanism of the ripening and liad processes , and so future work will focus on the effect of unsaturated bonds on both the anti - relaxation effectiveness and liad efficiency of materials .for example , angle - resolved xps studies can reveal depth - dependent changes in the distribution of alkali and alkali - bound species in paraffins before and after treatment with desorbing light .this work can also be extended to include additional techniques for the study of coated surfaces , such as sum frequency generation ( sfg ) spectroscopy and raman spectroscopy; the latter may be particularly useful for its ability to observe the surfaces of intact , operational vapor cells .in conclusion , using modern analytic techniques we show a systematic study of paraffin waxes which can be extended to a broader set of waxes and other anti - relaxation surface coatings to understand the fundamental properties of these coatings .combining a number of different surface science methods gives information regarding the chemical nature of the bulk coating materials as well as the thin films at the surface .this knowledge will inform the design of coatings used with atomic devices . as this research continues, it will hopefully lead to the development of more effective and robust anti - relaxation coatings for use in a variety of alkali - vapor - based technologies under a wide range of operational conditions .the authors thank daniel fischer , kristin schmidt , and ed kramer for assistance with the nexafs measurements , and joel ager , joshua wnuk , david trease , and gwendal kervern for helpful discussions and other assistance .sjs , djm , mhd , ap , and db , the advanced light source , and the dsc , ftir , and afm studies were supported by the director , office of science , office of basic energy sciences , materials sciences division and nuclear science division , of the u.s .department of energy under contract no .de - ac02 - 05ch11231 at lawrence berkeley national laboratory .other parts of this work were funded by nsf / dst grant no .phy-0425916 for u.s .- india cooperative research , by an office of naval research ( onr ) muri grant , and by onr grant n0001409wx21049 . certain commercial equipment , instruments , or materials are identified in this document. such identification does not imply recommendation or endorsement by the u.s .department of energy , lawrence berkeley national laboratory , the advanced light source , or the national institute of standards and technology , nor does it imply that the products identified are necessarily the best available for the purpose .t. karaulanov , m. t. graf , d. english , s. m. rochester , y. rosen , k. tsigutkin , d. budker , e. b. alexandrov , m. v. balabas , d. f. j. kimball , f. a. narducci , s. pustelny , and v. v. yashchuk , phys .a * 79 * , 012902 ( 2009 ) . | many technologies based on cells containing alkali - metal atomic vapor benefit from the use of anti - relaxation surface coatings in order to preserve atomic spin polarization . in particular , paraffin has been used for this purpose for several decades and has been demonstrated to allow an atom to experience up to 10,000 collisions with the walls of its container without depolarizing , but the details of its operation remain poorly understood . 
we apply modern surface and bulk techniques to the study of paraffin coatings , in order to characterize the properties that enable the effective preservation of alkali spin polarization . these methods include fourier transform infrared spectroscopy , differential scanning calorimetry , atomic force microscopy , near - edge x - ray absorption fine structure spectroscopy , and x - ray photoelectron spectroscopy . we also compare the light - induced atomic desorption yields of several different paraffin materials . experimental results include the determination that crystallinity of the coating material is unnecessary , and the detection of c = c double bonds present within a particular class of effective paraffin coatings . further study should lead to the development of more robust paraffin anti - relaxation coatings , as well as the design and synthesis of new classes of coating materials . |
correlation functions are fundamental objects for statistical analysis , and are thus ubiquitous in most kinds of scientific inquires and their applications . in physics ,correlation functions have an important role for research in areas such as quantum optics and open systems , phase transitions and condensed matter physics , and quantum field theory and nuclear and particle physics .another area in which correlation functions are omnipresent is quantum information science ( qis ) , an interdisciplinary field that extends the applicabilities of the classical theories of information , computation , and computational complexity .investigations about the quantum correlations in physical systems have been one of the main catalyzers for developments in qis .there are several guises of quantum correlations , and quantum discord stands among the most promising quantum resources for fueling the quantum advantage .when computing or witnessing quantum discord , or other kinds of correlation or quantumness quantifiers , we are frequently faced with the need for calculating coherence vectors and correlation matrices . andit is the main aim of this article to provide formulas for these functions that are amenable for more efficient numerical calculations when compared with the direct implementation of their definitions . in order to define coherence vectors and correlation matrices ,let us consider a composite bipartite system with hilbert space .hereafter the corresponding dimensions are denoted by for .in addition , let , with be a basis for the special unitary group .any density operator describing the state of the system can be written in the local basis as follows : where and is the identity operator in .one can readily verify that the components of the coherence ( or bloch s ) vectors and and of the correlation matrix are given by : it is worthwhile mentioning that the mean value of _ any observable _ in , for , can be obtained using these quantities . in https://github.com/jonasmaziero/libforq.git , we provide fortran code to compute the coherence vectors , correlation matrices , and quantum discord quantifiers we deal with here . besides these functions , there are other tools therein that may be of interest to the reader .the instructions on how to use the software are provided in the readme file .related to the content of this section , the subroutine ` bloch_vector_gellmann_unopt(d_{s } , \rho_{s } , \mathbf{s } ) ` returns the coherence vectors or and the subroutine ` corrmat_gellmann_unopt(d_{a } , d_{b } , \rho , c ) ` computes the correlation matrix .now , let us notice that if calculated directly from the equations above , for , the computational complexity ( cc ) to obtain the coherence vectors and or the correlation matrix is : the remainder of this article is structured as follows . in sec .[ coh_vec ] , we obtain formulas for , , and that are amenable for more efficient numerical computations . in sec .[ discord ] we test these formulas by applying them in the calculation of hilbert - schmidt quantum discords . 
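( as an illustrative aside , the direct -- unoptimized -- evaluation described above can be written down in a few lines of python / numpy ; the sketch below computes the coherence vectors and the correlation matrix straight from their defining traces by forming kronecker products explicitly , which is what makes the cost grow so quickly with the dimensions . it is not the fortran code of the linked repository , and any overall normalization prefactor in the definitions is omitted here since conventions vary . )

```python
import numpy as np

def bloch_vectors_direct(rho, gens_a, gens_b, da, db):
    # a_j = Tr[rho (Gamma_j (x) I_B)],  b_k = Tr[rho (I_A (x) Gamma_k)]
    a = np.array([np.trace(rho @ np.kron(g, np.eye(db))).real for g in gens_a])
    b = np.array([np.trace(rho @ np.kron(np.eye(da), g)).real for g in gens_b])
    return a, b

def corrmat_direct(rho, gens_a, gens_b):
    # C_{jk} = Tr[rho (Gamma_j (x) Gamma_k)]; one Kronecker product per entry,
    # which is what makes this brute-force approach expensive for large dimensions
    return np.array([[np.trace(rho @ np.kron(ga, gb)).real for gb in gens_b]
                     for ga in gens_a])
```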
in sec .[ conclusion ] we make some final remarks about the usefulness and possible applications of the results reported here .the partial trace function can be used in order to obtain the reduced states and and to write the components of the bloch vectors in the form : thus , when computing the coherence vectors of the parties and , we shall have to solve a similar problem ; so let s consider it separately .that is to say , we shall regard a generic density operator written as where now , and for the remainder of this article , we assume that the matrix elements of regarded density operator in the standard computational basis are given .we want to compute the _ bloch s vector _ : for the sake of doing that , a particular basis must be chosen . herewe pick the generalized gell mann matrices , which are presented below in three groups : which are named the diagonal , symmetric , and antisymmetric group , respectively .the last two groups possess generators each .any one of these matrices can be obtained by calling the subroutine ` gellmann(d_{s } , g , k , l , \gamma_{(k , l)}^{s(g ) } ) ` .for the first group , , we make and , in this case , one can set to any integer .it is straightforward seeing that , for the generators above , the corresponding components of the bloch s vector can expressed directly in terms of the matrix elements of the density operator as follows : these expressions were implemented in the fortran subroutine ` bloch_vector_gellmann(d_{s } , \rho_{s } , \mathbf{s } ) ` . with this subroutine , and the partial trace function ,we can compute the coherence vectors and . we observe that after these simple algebraic manipulations the computational complexity of the bloch s vector turns out to be basically the cc for the partial trace function .hence , from ref . we have that for , one detail we should keep in mind , when making use of the codes linked to this article , is the convention we apply for the indexes of the components of . for the first group of generators , , naturally , .we continue with the second group of generators , , by setting , , , , , . the same convention is used for the third group of generators , , but here we begin with next we address the computation of the _ correlation matrix _ , which is a matrix that we write in the form : with the sub - matrices given as shown below . 
for convenience ,we define the auxiliary variables : the matrix elements of , whose dimension is , correspond to the diagonal generators for and diagonal generators for : the matrix elements of , whose dimension is , correspond to the diagonal generators for and symmetric generators for : the matrix elements of , whose dimension is , correspond to the diagonal generators for and antisymmetric generators for : the matrix elements of , whose dimension is , correspond to the symmetric generators for and diagonal generators for : the matrix elements of , whose dimension is , correspond to the symmetric generators for and symmetric generators for : the matrix elements of , whose dimension is , correspond to the symmetric generators for and antisymmetric generators for : the matrix elements of , whose dimension is , correspond to the antisymmetric generators for and diagonal generators for : the matrix elements of , whose dimension is , correspond to the antisymmetric generators for and symmetric generators for : the matrix elements of , whose dimension is , correspond to antisymmetric generators for and antisymmetric generators for : we remark that when implementing these expressions numerically , for the sake of mapping the local to the global computational basis , we utilize , e.g. , the subroutine ` corrmat_gellmann(d_{a } , d_{b } , \rho , c ) ` returns the correlation matrix , as written in eq .( [ eq : corrmat ] ) , associated with the bipartite density operator and computed using the gell mann basis , as described in this section .the convention for the indexes of the matrix elements is defined in the same way as for the coherence vectors . the computational complexity for , computed viathe optimized expressions obtained in this section , is , for , by generating some random density matrices , we checked that the expressions and the corresponding code for the unoptimized and optimized versions of , , and agree .additional tests shall be presented in the next section , where we calculate some quantum discord quantifiers .the calculation of quantum discord functions ( qd ) usually involves hard optimization problems . in the last few years, a large amount of effort have been dedicated towards computing qd analytically , with success being obtained mostly for low dimensional quantum systems .although not meeting all the required properties for a _ bona fide _ qd quantifier , the hilbert - schmidt discord ( hsd ) , is drawing much attention due to its amenability for analytical computations , when compared with most other qd measures . in the last equation , the minimization is performed over the classical - quantum states with being a probability distribution , an orthonormal basis for , generic density operators defined in , and is the hilbert - schmidt norm of the linear operator , with being the transpose conjugate of . in this article , as a basic test for the fortran code provided to obtain coherence vectors and correlation matrices , we shall compute the following lower bound for the hsd : where are the eigenvalues , sorted in non - increasing order , of the matrix : in the equation above stands for the transpose of a vector or matrix .we observe that the other version of the hsd , , can be obtained from the equations above simply by exchanging and and using instead of .it is interesting regarding that , as was proved in ref . 
, a bipartite state , with polarization vectors and and correlation matrix , is classical - quantum if and only if there exists a -dimensional projector in the space such that : based on this fact , an ameliorated version for the hilbert - schmidt quantum discord ( ahsd ) was proposed : with the matrix defined as where and are arbitrary functions of .then , by setting and using the purity , to address the problem of non - contractivity of the hilbert - schmidt distance , the following analytical formula was presented : thus both discord quantifiers and are , in the end of the day , obtained from the eigenvalues .and the computation of these eigenvalues requires the knowledge of the coherence vector ( or ) and of the correlation matrix .these qd measures were implemented in the fortran functions ` discord_hs(ssys , d_{a } , d_{b } , \rho ) ` and ` discord_hsa(ssys , d_{a } , d_{b } , \rho ) ` , where ` ssys = s ` , with , specifies which version of the quantum discord is to be computed . as an example , let us use the formulas provided in this article and the associated code to compute the hsd and ahsd of werner states in ( with : where $ ] and the reduced states of are , whose purity is .the results for the hsd and ahsd of are presented in fig .[ werner ] . ) .the lines are the corresponding values of the ahsd plotted via the analytical formula : . due to the symmetry of , here in the insetis shown the difference between the times taken by the two methods to compute the ahsd for a fixed value of .we see clearly here that our optimized algorithm gives a exponential speedup against the brute force calculation of the bloch s vectors and correlation matrix . ]in this article , we addressed the problem of computing coherence vectors and correlations matrices .we obtained formulas for these functions that make possible a considerably more efficient numerical implementation when compared with the direct use of their definitions .we provided fortran code to calculate all the quantities regarded in this paper . as a test for our formulas and code , we computed hilbert - schmidt quantum discords of werner states .it is important observing that , although our focus here was in quantum information science , the tools provided can find application in other areas , such as e.g. in the calculation of order parameters and correlations functions for the study of phase transitions in discrete quantum or classical systems .the author declares that the funding mentioned in the acknowledgments section do not lead to any conflict of interest .additionally , the author declares that there is no conflict of interest regarding the publication of this manuscript .this work was supported by the brazilian funding agencies : conselho nacional de desenvolvimento cientfico e tecnolgico ( cnpq ) , processes 441875/2014 - 9 and 303496/2014 - 2 , instituto nacional de cincia e tecnologia de informao quntica ( inct - iq ) , process 2008/57856 - 6 , and coordenao de desenvolvimento de pessoal de nvel superior ( capes ) , process6531/2014 - 08 .i gratefully acknowledges the hospitality of the physics institute and laser spectroscopy group at the universidad de la repblica , uruguay .j. hutchinson , j. p. keating , and f. mezzadri , on relations between one - dimensional quantum and two - dimensional classical spin systems , http://dx.doi.org/10.1155/2015/652026[adv .2015 , 652026 ( 2015 ) ] .m. d. reid , p. d. drummond , w. p. bowen , e. g. cavalcanti , p. k. lam , h. a. bachor , u. l. andersen , and g. 
leuchs , the einstein - podolsky - rosen paradox : from concepts to applications , http://dx.doi.org/10.1103/revmodphys.81.1727[rev .81 , 1727 ( 2009 ) ] .l. c. cleri , j. maziero , and r. m. serra , theoretical and experimental aspects of quantum discord and related measures , http://dx.doi.org/10.1142/s0219749911008374[int .j. quantum inf .9 , 1837 ( 2011 ) ] .k. modi , a. brodutch , h. cable , t. paterek , and v. vedral , the classical - quantum boundary for correlations : discord and related measures , http://dx.doi.org/10.1103/revmodphys.84.1655[rev .84 , 1655 ( 2012 ) ] .d. o. soares - pinto , r. auccaise , j. maziero , a. gavini - viana , r. m. serra , and l. c. cleri , on the quantumness of correlations in nuclear magnetic resonance , http://dx.doi.org/10.1098/rsta.2011.0364 [ phil .r. soc . a 370 , 4821 ( 2012 ) ] .m. vila , g. h. sun , and a. l. salas - brito , scales of time where the quantum discord allows an efficient execution of the dqc1 algorithm , http://dx.doi.org/10.1155/2014/367905[adv .2014 , 367905 ( 2014 ) ] .m piani , v narasimhachar , and j calsamiglia , quantumness of correlations , quantumness of ensembles and quantum data hiding , http://dx.doi.org/10.1088/1367-2630/16/11/113001[new j. phys .16 , 113001 ( 2014 ) ] .b. dakic , y. o. lipp , x. ma , m. ringbauer , s. kropatschek , s. barz , t. paterek , v. vedral , a. zeilinger , .brukner , and p. walther , quantum discord as resource for remote state preparation , http://dx.doi.org/10.1038/nphys2377[nat .8 , 666 ( 2012 ) ] .d. girolami , a. m. souza , v. giovannetti , t. tufarelli , j. g. filgueiras , r. s. sarthour , d. o. soares - pinto , i. s. oliveira , and g. adesso , quantum discord determines the interferometric power of quantum states , http://dx.doi.org/10.1103/physrevlett.112.210401[phys .112 , 210401 ( 2014 ) ] .t. k. chuan , j. maillard , k. modi , t. paterek , m. paternostro , and m. piani , quantum discord bounds the amount of distributed entanglement , http://dx.doi.org/10.1103/physrevlett.109.070501[phys .109 , 070501 ( 2012 ) ] .r. auccaise , j. maziero , l. c. celeri , d. o. soares - pinto , e. r. deazevedo , t. j. bonagamba , r. s. sarthour , i. s. oliveira , and r. m. serra , experimentally witnessing the quantumness of correlations , http://dx.doi.org/10.1103/physrevlett.107.070501[phys .107 , 070501 ( 2011 ) ] .f. m. paula , i. a. silva , j. d. montealegre , a. m. souza , e. r. deazevedo , r. s. sarthour , a. saguia , i. s. oliveira , d. o. soares - pinto , g. adesso , and m. s. sarandy , observation of environment - induced double sudden transitions in geometric quantum correlations , http://dx.doi.org/10.1103/physrevlett.111.250401[phys .111 , 250401 ( 2013 ) ] .i. a. silva , d. girolami , r. auccaise , r. s. sarthour , i. s. oliveira , t. j. bonagamba , e. r. deazevedo , d. o. soares - pinto , and g. adesso , measuring bipartite quantum correlations of an unknown state , http://dx.doi.org/10.1103/physrevlett.110.140501[phys .110 , 140501 ( 2013 ) ] .g. h. aguilar , o. jimnez faras , j. maziero , r. m. serra , p. h. souto ribeiro , and s. p. walborn , experimental estimate of a classicality witness via a single measurement , http://dx.doi.org/10.1103/physrevlett.108.063601[phys .108 , 063601 ( 2012 ) ] .m. cianciaruso , t. r. bromley , w. roga , r. lo franco , and g. adesso , universal freezing of quantum correlations within the geometric approach , http://dx.doi.org/10.1038/srep10177[sci .rep . 5 , 10177 ( 2015 ) ] .t. r. bromley , m. cianciaruso , r. lo franco , and g. 
adesso , unifying approach to the quantification of bipartite correlations by bures distance , http://dx.doi.org/10.1088/1751-8113/47/40/405302[j .phys . a : math .47 , 405302 ( 2014 ) ] .b. aaronson , r. lo franco , and g. adesso , comparative investigation of the freezing phenomena for quantum correlations under nondissipative decoherence , http://dx.doi.org/10.1103/physreva.88.012120[phys .a 88 , 012120 ( 2013 ) ] .b. bellomo , g. l. giorgi , f. galve , r. lo franco , g. compagno , and r. zambrini , unified view of correlations using the square norm distance , http://dx.doi.org/10.1103/physreva.85.032104[phys .a 85 , 032104 ( 2012 ) ] .b. bellomo , r. lo franco , and g. compagno , dynamics of geometric and entropic quantifiers of correlations in open quantum systems , http://dx.doi.org/10.1103/physreva.86.012312[phys . rev . a 86 , 012312 ( 2012 ) ] .r. lo franco , b. bellomo , e. andersson , and g. compagno , revival of quantum correlations without system - environment back - action , http://dx.doi.org/10.1103/physreva.85.032318[phys .a 85 , 032318 ( 2012 ) ] .xu , k. sun , c .- f .li , x .- y .xu , g .- c .guo , e. andersson , r. lo franco , and g. compagno , experimental recovery of quantum correlations in absence of system - environment back - action , http://dx.doi.org/10.1038/ncomms3851[nat .commun . 4 , 2851 ( 2013 ) ] .r. lo franco , b. bellomo , s. maniscalco , and g. compagno , dynamics of quantum correlations in two - qubit systems within non - markovian environments , http://dx.doi.org/10.1142/s0217979213450537[int . j. modb 27 , 1345053 ( 2013 ) ] .t. chanda , t. das , d. sadhukhan , a. k. pal , a. sen de , and u. sen , reducing computational complexity of quantum correlations , http://dx.doi.org/10.1103/physreva.92.062301[phys .a 92 , 062301 ( 2015 ) ] .a. beggi , f. buscemi , and p. bordone , analytical expression of genuine tripartite quantum discord for symmetrical x - states , http://dx.doi.org/10.1007/s11128-014-0882-z[quantum inf . process .14 , 573 ( 2015 ) ] .g. li , y. liu , h. tang , x. yin , and z. zhang , analytic expression of quantum correlations in qutrit werner states undergoing local and nonlocal unitary operations , http://dx.doi.org/10.1007/s11128-014-0888-6[quantum inf . process .14 , 559 ( 2015 ) ] .p. c. obando , f. m. paula , and m. s. sarandy , trace - distance correlations for x states and the emergence of the pointer basis in markovian and non - markovian regimes , http://dx.doi.org/10.1103/physreva.92.032307[phys .a 92 , 032307 ( 2015 ) ] . | abstract + coherence vectors and correlation matrices are important functions frequently used in physics . the numerical calculation of these functions directly from their definitions , which involves kronecker products and matrix multiplications , may seem to be a reasonable option . notwithstanding , as we demonstrate in this article , some algebraic manipulations before programming can reduce considerably their computational complexity . besides , we provide fortran code to generate generalized gell mann matrices and to compute the optimized and unoptimized versions of the associated bloch s vectors and correlation matrix , in the case of bipartite quantum systems . as a code test and application example , we consider the calculation of hilbert - schmidt quantum discords . |
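( as an illustrative python / numpy companion to the fortran routines described in the preceding article -- it is not that code -- the sketch below builds the generalized gell - mann basis , reads the bloch - vector components directly from the density - matrix elements , and obtains the correlation matrix by contracting the reshaped density matrix , so that no kronecker products are ever formed . normalization prefactors and the exact index ordering are conventions assumed here and may differ from the article s . )

```python
import numpy as np

def gell_mann_basis(d):
    """Generalized Gell-Mann matrices: diagonal, symmetric and antisymmetric groups."""
    diag, sym, asym = [], [], []
    for l in range(1, d):                               # d-1 diagonal generators
        g = np.zeros((d, d), dtype=complex)
        g[np.arange(l), np.arange(l)] = 1.0
        g[l, l] = -float(l)
        diag.append(np.sqrt(2.0 / (l * (l + 1))) * g)
    for k in range(d - 1):                              # d(d-1)/2 generators in each group
        for l in range(k + 1, d):
            s = np.zeros((d, d), dtype=complex); s[k, l] = s[l, k] = 1.0
            a = np.zeros((d, d), dtype=complex); a[k, l] = -1j; a[l, k] = 1j
            sym.append(s); asym.append(a)
    return diag, sym, asym

def bloch_components(rho):
    """Tr[rho Gamma] for every generator, written directly in rho's matrix elements."""
    d = rho.shape[0]
    diag = [np.sqrt(2.0 / (l * (l + 1))) * (np.trace(rho[:l, :l]) - l * rho[l, l]).real
            for l in range(1, d)]
    sym = [2.0 * rho[k, l].real for k in range(d - 1) for l in range(k + 1, d)]
    # the sign follows the convention Gamma^a = -i|k><l| + i|l><k| used above
    asym = [-2.0 * rho[k, l].imag for k in range(d - 1) for l in range(k + 1, d)]
    return np.array(diag + sym + asym)

def corrmat_contract(rho, gens_a, gens_b, da, db):
    """C_{jk} = Tr[rho (Ga_j (x) Gb_k)] via a tensor contraction (no Kronecker products):
    with rho reshaped to rho[a,b,c,d], the trace is sum_{abcd} rho[a,b,c,d] Ga[c,a] Gb[d,b]."""
    r = np.asarray(rho).reshape(da, db, da, db)
    ga, gb = np.stack(gens_a), np.stack(gens_b)
    return np.einsum('abcd,jca,kdb->jk', r, ga, gb).real

# quick check of the contraction against the brute-force definition on a random two-qubit state
da = db = 2
m = np.random.randn(da * db, da * db) + 1j * np.random.randn(da * db, da * db)
rho = m @ m.conj().T
rho /= np.trace(rho)
gens = [g for grp in gell_mann_basis(2) for g in grp]   # Pauli matrices up to ordering
c_fast = corrmat_contract(rho, gens, gens, da, db)
c_slow = np.array([[np.trace(rho @ np.kron(a, b)).real for b in gens] for a in gens])
assert np.allclose(c_fast, c_slow)
```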
the development of high - quality and real - time image processing algorithms is a key topic in today s context , where humans demand more capabilities and control over their rapidly increasing number of advanced imaging devices such as cameras , smart phones , tablets , etc . many efforts are being made toward satisfying those needs by exploiting dedicated hardware , such as graphics processing units ( gpus ) . some areas where gpus are emerging to improve human computer interaction ( hci ) include cybernetics ( for example , in facial or object recognition , classification using support vector machines and genetic algorithms for clustering ) , and image processing ( e.g. , unsupervised image segmentation , optical coherence tomography systems , efficient surface reconstruction from noisy data , remote sensing , real - time background subtraction , etc . ) . in such hardware - oriented algorithm designs , the computational efficiency of processing tasks is significantly improved by parallelizing the operations . [ fig : applications : high level tasks such as ( a ) object / person - tracking or ( b ) video - based gestural interfaces often require a foreground segmentation method as a base building block . ] automated video analysis applications such as person tracking or video - based gestural interfaces ( see [ fig : applications ] ) rely on lower - level building blocks like foreground segmentation , where the design of efficient computational methods for dedicated hardware has a significant impact . multimodal nonparametric segmentation strategies have drawn a lot of attention since they are able to provide high - quality results even in complex scenarios ( dynamic background , illumination changes , etc . ) ; however , their main drawback is their extremely high computational cost ( requiring the evaluation of billions of multidimensional gaussian kernels per second ) , which makes them difficult to integrate in the latest generation of image processing applications . to overcome this important drawback and achieve real - time performance , the use of parallel hardware such as gpus helps but may not be enough by itself , depending on the required resolution , hence the need for algorithms capable of evaluating non - linear ( e.g. , gaussian ) functions at high speed within a required error tolerance . gpu vendors are aware of this recurrent computing problem and provide hardware implementations of common transcendental functions in special function units ( sfus ) ; indeed , we can find in the literature examples of successful use of these hardware facilities . however , their ease of use comes at the price of non - customizable reduced numerical precision . we propose a novel , fast , and practical method to evaluate any continuous mathematical function within a known interval . our contributions include , based on error equalization , a nearly optimal design of two types of piecewise linear approximations ( linear interpolant and orthogonal projection ) in the norm under the constraint of a large budget of evaluation subintervals .
moreover , we provide asymptotically tight bounds for the approximation errors of both piecewise linear representations , improving upon existing ones .specifically , in addition to the convergence rate typical of these approximations , we quantify their interpolation constants : for the linear interpolant and a further improvement factor in case of the orthogonal projection .the obtained error relations are parameterized by the number of segments used to represent the complex ( nonlinear ) function , hence our approach allows the user to estimate the errors given or , conversely , estimate the required to achieve a target approximation error .we also propose an efficient implementation of the technique in a modern gpu by exploiting the fixed - function interpolation routines present in its texture units to accelerate the computation while leaving the rest of the gpu ( and , of course , the cpu ) free to perform other tasks ; this technique is even faster than using sfus .although the initial motivation of the developed method was to improve the efficiency of nonparametric foreground segmentation strategies , it must be noted that it can be also used in many other scientific fields such as computer vision or audio signal processing , where the evaluation of continuous mathematical functions constitutes a significant computational burden .section [ sec : piecewise - linear - approximations ] summarizes the basic facts about piecewise linear approximation of real - valued univariate functions and reviews related work in this topic ; section [ sec : finding - a - non - uniform ] derives a suboptimal partition of the domain of the approximating function in order to minimize the distance to the original function ( proofs of results are given in the appendixes ) ; section [ sec : computational - analysis - and ] analyzes the algorithmic complexity of the proposed approximation strategies and section [ sec : implementation - in - a - gpu ] gives details about their implementation on modern gpus .section [ sec : results ] presents the experimental results of the proposed approximation on several functions ( gaussian , lorentzian and bessel s ) , both in terms of quality and computational times , and its use is demonstrated in an image processing application .finally , section [ sec : conclusions ] concludes the paper .in many applications , rigorous evaluation of complex mathematical functions is not practical because it takes too much computational power .consequently , this evaluation is carried out approximating them by simpler functions such as ( piecewise-)polynomial ones .piecewise linearization has been used as an attractive simplified representation of various complex nonlinear systems .the resulting models fit into well established tools for linear systems and reduce the complexity of finding the inverse of nonlinear functions .they can also be used to obtain approximate solutions in complex nonlinear systems , for example , in mixed - integer linear programming ( milp ) models .some efforts have been devoted as well to the search for canonical representations in one and multiple dimensions with different goals such as black box system identification , approximation or model reduction .previous attempts to address the problem considered here ( optimal function piecewise linearization ) include , the latter two in the context of nonlinear dynamical systems . 
in an iterative multi - stage procedure based on dynamical programming is given to provide a solution to the problem on sequences of progressively finer 2-d grids . in the piecewise linear approximation is obtained by using evolutionary computation approaches such as genetic algorithms and evolution strategies ; the resulting model is obtained by minimization of a sampled version of the mean squared error and it may not be continuous . in the problem is addressed using a hybrid approach based on curve fitting , clustering and genetic algorithms . here we address the problem from a less heuristic point of view , using differential calculus to derive a more principled approach . our method solely relies on standard numerical integration techniques , which take a few seconds to compute , as opposed to recursive partitioning techniques such as , which take significantly longer . in this section we summarize the basic theory behind piecewise linear functions and two such approximations to real - valued functions : interpolation and projection ( sections [ sub : linear - interpolation ] and [ sub : the - l2-projection ] , respectively ) , pictured in [ fig : pipvsproj ] along with their absolute approximation error with respect to . top : fifth - degree polynomial and two continuous piecewise linear ( cpwl ) approximations , the orthogonal projection and the linear interpolant ; bottom : corresponding absolute approximation errors ( magnified by a factor ) . hat functions constitute a basis of . since all the basis functions are zero outside ] is a partition of into a set of subintervals , where , and a set of functions , one for each subinterval . in particular we are interested in continuous piecewise linear ( cpwl ) functions , which means that all the are linear and . cpwl functions of a given partition are elements of a vector space : the addition of such functions and/or multiplication by a scalar yields another cpwl function defined over the same subintervals . a useful basis for the vector space is formed by the set of _ hat functions _ or _ nodal basis functions _ , pictured in [ fig : hat - functions - constitute ] and defined in general by the formula \phi_{i}(x)=\begin{cases}( x - x_{i-1})/(x_{i}-x_{i-1 } ) , & x\in\left[x_{i-1},x_{i}\right]\\ ( x - x_{i+1})/(x_{i}-x_{i+1 } ) , & x\in\left[x_{i},x_{i+1}\right]\\ 0 , & x\notin\left[x_{i-1},x_{i+1}\right ] .\end{cases} the basis functions and associated to the boundary nodes and are only _ half hats _ . these basis functions are convenient since they can represent any function in by just requiring the values of at its nodal points , , in the form the piecewise linear interpolant of a continuous function over the interval can be defined in terms of the basis just introduced : while this cpwl approximation is trivial to construct , and may be suitable for some uses , it is by no means the best possible one . crucially , for any function that is convex in . depending on the application in which this approximation is used , this property could skew the results . however , as we will see in section [ sec : finding - a - non - uniform ] , the linear interpolant is useful to analyze other possible approximations . it is also at the heart of the trapezoidal rule for numerical integration . let the usual inner product between two square - integrable ( ) functions in the interval be given by then , the vector space can be endowed with the above inner product , yielding an inner product space .
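( the nodal construction just described is straightforward to reproduce numerically : sampling f at the knots and evaluating the cpwl interpolant is exactly what ` np.interp ` does . the python sketch below verifies this against an explicit hat - function expansion ; the gaussian test function , the interval and the number of knots are arbitrary illustrative choices . )

```python
import numpy as np

def hat(x, knots, i):
    """i-th nodal basis ("hat") function on an arbitrary partition."""
    phi = np.zeros_like(x, dtype=float)
    if i > 0:                       # rising edge on [x_{i-1}, x_i]
        m = (x >= knots[i - 1]) & (x <= knots[i])
        phi[m] = (x[m] - knots[i - 1]) / (knots[i] - knots[i - 1])
    if i < len(knots) - 1:          # falling edge on [x_i, x_{i+1}]
        m = (x >= knots[i]) & (x <= knots[i + 1])
        phi[m] = (knots[i + 1] - x[m]) / (knots[i + 1] - knots[i])
    return phi

f = lambda t: np.exp(-t**2 / 2.0)           # example function (a Gaussian kernel)
knots = np.linspace(0.0, 4.0, 17)           # N = 16 uniform subintervals
x = np.linspace(0.0, 4.0, 2001)

interp_basis = sum(f(xi) * hat(x, knots, i) for i, xi in enumerate(knots))
interp_np = np.interp(x, knots, f(knots))   # the same CPWL interpolant
assert np.allclose(interp_basis, interp_np)
```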
as usual , let be the norm induced by the inner product , and let be the distance between two functions .the orthogonal projection of the function onto is the function such that since , it can be expressed in the nodal basis by where the coefficients solve the linear system of equations .the matrix has entries , and vector has entries given by .the gramian matrix is tridiagonal and strictly diagonally dominant .therefore , the system has exactly one solution , which can be obtained efficiently using the thomas algorithm ( a particular case of gaussian elimination ) . is the element in that is closest to in the sense given by the aforementioned distance , as we recall next . for any , where we used property that the vector . next , applying the cauchy - schwarz inequality , and so , i.e. , with equality if .this makes most suitable as an approximation of under the norm .as just established , for a given vector space of cpwl functions , is the function in whose distance to is minimal .however , the approximation error does not take the same value for every possible ; therefore , we would like to find the optimal partition ( or a reasonable approximation thereof ) corresponding to the space in which the approximation error is minimum , this is a difficult optimization problem : in order to properly specify in and measure its distance to , we need to solve the linear system of equations , whose coefficients depend on the partition itself , which makes the problem symbolically intractable .numerical algorithms could be used but it is still a challenging problem .let us examine the analogous problem for the interpolant function defined in section [ sub : linear - interpolation ] .again , we would like to find the optimal partition corresponding to the space in which the approximation error is minimum , albeit not as difficult as the previous problem , because is more straightforward to define , this is still a challenging non - linear optimization problem .fortunately , as it is shown next , a good approximation can be easily found under some asymptotic analysis . in the rest of this sectionwe investigate in detail the error incurred when approximating a function by the interpolant and the orthogonal projection defined in section [ sec : piecewise - linear - approximations ] ( see [ fig : pipvsproj ] ) .then we derive an approximation to the optimal partition ( section [ sub : approximation - to - the - optimal - partition ] ) that serves equally well for and because , as we will show ( section [ sub : error - in - a - single - interval ] ) , their approximation errors are roughly proportional under the assumption of a sufficiently large number of intervals .first , let us consider the error incurred when approximating a function , twice continuously differentiable , by its linear interpolant in an interval ( [ fig : errorzone ] ) .[ thm : lininterpolanterrbound ] the error between a given function and its linear interpolant with , in the interval ] and two linear approximations . on the left , the linear interpolant given by ; on the right , a general linear segment given by is a signed distance with respect to .,width=317 ] let us now characterize the error of the orthogonal projection .we do so in two steps : first we compute the minimum error of a line segment in the interval , and then we use an asymptotic analysis to approximate the error of the orthogonal projection . 
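( before turning to the error analysis , note that the projection itself is cheap to compute , exactly as described : assemble the tridiagonal gramian of the hat basis , build the load vector by quadrature , and solve the banded system . the sketch below uses the standard closed - form inner products of hat functions on a non - uniform partition and ` scipy.linalg.solve_banded ` in place of a hand - written thomas solver ; it is an illustration , not the paper s implementation . )

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import solve_banded

def l2_projection_coeffs(f, knots):
    """Nodal coefficients of the L2-orthogonal CPWL projection of f."""
    n = len(knots)
    h = np.diff(knots)

    # tridiagonal Gramian of the hat basis: <phi_i, phi_i> and <phi_i, phi_{i+1}>
    main = np.zeros(n)
    main[:-1] += h / 3.0
    main[1:] += h / 3.0
    off = h / 6.0

    # load vector b_i = integral of f * phi_i over its support (element-wise quadrature)
    b = np.zeros(n)
    for i in range(n - 1):
        xl, xr = knots[i], knots[i + 1]
        b[i] += quad(lambda t: f(t) * (xr - t) / (xr - xl), xl, xr)[0]
        b[i + 1] += quad(lambda t: f(t) * (t - xl) / (xr - xl), xl, xr)[0]

    # banded (Thomas-like) solve of the tridiagonal system
    ab = np.zeros((3, n))
    ab[0, 1:] = off          # super-diagonal
    ab[1, :] = main          # main diagonal
    ab[2, :-1] = off         # sub-diagonal
    return solve_banded((1, 1), ab, b)

f = lambda t: np.exp(-t**2 / 2.0)
knots = np.linspace(0.0, 4.0, 17)
coeffs = l2_projection_coeffs(f, knots)     # evaluate with np.interp(x, knots, coeffs)
```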
stemming from , we can write any ( linear ) segment in as where and are extra degrees of freedom ( pictured in [ fig : errorzone ] ) with respect the interpolant that allow the line segment to better approximate the function in . by computing the optimal values of and we obtain the following result. [ thm : minerrlinesegment ] the minimum squared error between a given function and a line segment in an interval ] , of length , is where is the approximately constant value that takes within the interval . see appendix [ app : convergence ] . in the same asymptotic situation, the linear interpolant , ( see ) gives a bigger squared error by a factor of six , now we give a procedure to compute a suboptimal partition of the target interval and then derive error estimates for both and on such a partition and the uniform one .let us consider a partition of ] .a suboptimal partition for a given is one in which every subinterval has approximately equal contribution to the total approximation error , which implies that regions of with higher convexity are approximated using more segments than regions with lower convexity .let us assume is large enough so that is approximately constant in each subinterval and therefore the bound is tight .consequently , for some constant , and the lengths of the subintervals ( local knot spacing ) should be chosen , i.e. , smaller intervals as increases .hence , the local knot distribution or density is so that more knots of the partition are placed in the regions with larger magnitude of the second derivative .then , the proposed approximation to the optimal partition is as follows : , , and take knots given by where the monotonically increasing function \to[0,1] ] is the total squared error for any partition is the sum of squared errors over all subintervals , and by , which , under the condition of error equalization for the proposed partition , becomes to compute , let us sum over all intervals and approximate the result using the riemann integral : whose right hand side is independent of . substituting in gives the desired approximate bound for the error in the interval ] is both cpwl approximations ( and ) converge to the true function at a rate of at least ( and ) .we use a similar procedure to derive an estimate error bound for the uniform partition that can be compared to that of the optimized one .[ thm : erroruniformpart]the approximation error of the linear interpolant in the uniform partition of the interval ] to achieve the required quality . distance for different approximations to the gaussian function over the interval ].,width=317 ] [ fig : graphicalerrorscauchy ] reports the distances between and the cpwl approximations described in previous sections , in the interval ] . for measured errors agree well with the predicted approximate error bounds , whereas for the measured errors slightly differ from the predicted ones ( specifically in the optimized partition ) because in these cases the scarce number of linear segments does not properly represent the oscillations . 
a sample optimized partition and the corresponding cpwl interpolant is also represented in [ fig : besselj0 ] .the knots of the partition are distributed according to the local knot density ( see [ fig : besselj0 ] , top ) , accumulating in the regions of high oscillations ( excluding the places around the zeros of the ) .this function is more complex to evaluate in general because it does not have a closed form .however , our approximation algorithm works equally well on it .we have measured the mean per - evaluation execution times in the cpu ( 39 ns using the posix extensions of the gnu c library and 130 ns only double - precision provided using the gnu scientific library ) and in the gpu ( 78 ps using the cuda standard library ) .we have also measured the execution time of the first term in the asymptotic expansion ( * ? ? ?* eq . 9.57a ) , valid for , . ] .top : local knot density ( ) corresponding to ; bottom : cpwl interpolant with ( knots ) overlaid on function .,width=317 ] distance for different cpwl approximations to the bessel function of the first kind in the interval ] , andlet be a polynomial of degree or less that interpolates the function at distinct points ] there exists a point ] is a continuous function and is an integrable function that does not change sign on the interval , then there exists a number such that since and for all , let us apply to compute for some .finally , if , it is straightforward to derive the error bound from the square root of .the approximation error corresponding to the line segment is where , by analogy with the form of we defined .the proof proceeds by computing the optimal values of and that minimize the squared error over the interval : the first term is given in .the second term is and the third term is , applying and the change of variables to evaluate the resulting integrals , for some and in . substituting previous results in , we may now find the line segment that minimizes the distance to by taking partial derivatives with respect to and , setting them to zero and solving the corresponding system of equations .indeed , the previous error is quadratic in and , and attains its minimum at the resulting minimum squared distance is .by the triangle inequality , the jump discontinuity at between two adjacent independently - optimized segments is where and are the offsets with respect to of the optimized segments at the left and right of , respectively ; and lie in the interval .since we are dealing with functions twice continuously differentiable in a closed interval , the absolute value terms in are bounded , according to the extreme value theorem ; therefore , if and decrease ( finer partition ) , the discontinuity jumps at the knots of the partition also decrease . in the limit , as , , i.e., continuity is satisfied .therefore the union of the independently - optimized segments , which is the unique piecewise linear function satisfying both continuity ( ) and minimization of the error .consequently , if is large we may approximate ; moreover , if is sufficiently small so that is approximately constant within it , , then we use corollary [ thm : minerrsmalllinesegment ] to get .v. borges , m. batista , and c. barcelos , `` a soft unsupervised two - phase image segmentation model based on global probability density functions , '' in _ ieee int .systems , man and cybernetics ( smc ) _ , oct 2011 , pp . 16871692 .k. kapinchev , f. barnes , a. bradu , and a. 
podoleanu , `` approaches to general purpose gpu acceleration of digital signal processing in optical coherence tomography systems , '' in _ ieee int .systems , man and cybernetics ( smc ) _ , oct 2013 , pp .25762580 .j. kim , e. park , x. cui , h. kim , and w. gruver , `` a fast feature extraction in object recognition using parallel processing on cpu and gpu , '' in _ ieee int .systems , man and cybernetics ( smc ) _ , oct 2009 , pp . 38423847 .s. balla - arabe , x. gao , and b. wang , `` gpu accelerated edge - region based level set evolution constrained by 2d gray - scale histogram , '' _ ieee trans . image process ._ , vol . 22 , no . 7 , pp .26882698 , 2013 .s. balla - arabe , x. gao , b. wang , f. yang , and v. brost , `` multi - kernel implicit curve evolution for selected texture region segmentation in vhr satellite images , '' _ ieee trans .remote sens ._ , vol .52 , no . 8 , pp .51835192 , aug 2014 .l. sigal , m. isard , h. haussecker , and m. black , `` loose - limbed people : estimating 3d human pose and motion using non - parametric belief propagation , '' _ int .j. comput . vision _ ,98 , no . 1 ,pp . 1548 , 2012 .c. cuevas , r. mohedano , and n. garca , `` adaptable bayesian classifier for spatiotemporal nonparametric moving object detection strategies . ''_ optics letters _ , vol .37 , no .15 , pp . 31593161 , aug .2012 .d. berjn , c. cuevas , f. morn , and n. garca , `` gpu - based implementation of an optimized nonparametric background modeling for real - time moving object detection , '' _ ieee trans ._ , vol .59 , no . 2 , pp .361369 , may 2013 .m. sylwestrzak , d. szlag , m. szkulmowski , i. gorczyska , d. bukowska , m. wojtkowski , and p. targowski , `` real time 3d structural and doppler oct imaging on graphics processing units , '' in _ proc .85710y.1em plus 0.5em minus 0.4emint . soc .optics and photonics , mar. 2013 .m. sehili , d. istrate , b. dorizzi , and j. boudy , `` daily sound recognition using a combination of gmm and svm for home automation , '' in _ proc .20th european signal process ._ , bucharest , 2012 , pp .16731677 .r. tanjad and s. wongsa , `` model structure selection strategy for wiener model identification with piecewise linearisation , '' in _ int ., telecommun . and( ecti - con ) _ , 2011 , pp . 553556. s. ghosh , a. ray , d. yadav , and b. m. karan , `` a genetic algorithm based clustering approach for piecewise linearization of nonlinear functions , '' in _ int .devices and communications _ , 2011 , ppg. gallego , d. berjn , and n. garca , `` optimal polygonal linearization and fast interpolation of nonlinear systems , '' _ ieee trans .circuits syst .i _ , vol .61 , no .11 , pp . 32253234 , nov 2014 .v. k. pallipuram , n. raut , x. ren , m. c. smith , and s. naik , `` a multi - node gpgpu implementation of non - linear anisotropic diffusion filter , '' in _ symp .application accelerators in high performance computing _ , jul .2012 , pp . 1118 . c. cuevas , n. garca , and l. salgado , `` a new strategy based on adaptive mixture of gaussians for real - time moving objects segmentation , '' in _ proc6811.1em plus 0.5em minus 0.4emint . soc .optics and photonics , 2008 . | many computer vision and human - computer interaction applications developed in recent years need evaluating complex and continuous mathematical functions as an essential step toward proper operation . however , rigorous evaluation of this kind of functions often implies a very high computational cost , unacceptable in real - time applications . 
to alleviate this problem , functions are commonly approximated by simpler piecewise - polynomial representations . following this idea , we propose a novel , efficient , and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals . to this end , we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations . it provides an improvement upon previous error estimates and allows the user to control the trade - off between the approximation error and the number of evaluation subintervals . to guarantee real - time operation , the method is suitable for , but not limited to , an efficient implementation in modern graphics processing units ( gpus ) , where it outperforms previous alternative approaches by exploiting the fixed - function interpolation routines present in their texture units . the proposed technique is a perfect match for any application requiring the evaluation of continuous functions ; we have measured in detail its quality and efficiency on several functions , and , in particular , the gaussian function because it is extensively used in many areas of computer vision and cybernetics , and it is expensive to evaluate . v # 1#2#1,#2 # 1y_#1 # 1y_#1 computer vision , image processing , numerical approximation and analysis , parallel processing , piecewise linearization , gaussian , lorentzian , bessel . |
we consider the model where the random vector and the random matrix are observed , the matrix is unknown , is an random noise matrix , is a random noise vector , is a vector of unknown parameters to be estimated , and is a given subset of .we consider the problem of estimating an -sparse vector ( i.e. , a vector having only non zero components ) , with possibly much larger than . if the matrix in is observed without error ( ) , this problem has been recently studied in numerous papers .the proposed estimators mainly rely on minimization techniques .in particular , this is the case for the widely used lasso and dantzig selector , see among others cands and tao ( 2007 ) , bunea et al .( 2007a , b ) , bickel et al .( 2009 ) , koltchinskii ( 2009 ) , the book by bhlmann and van de geer ( 2011 ) , the lecture notes by koltchinskii ( 2011 ) , belloni and chernozhukov ( 2011 ) and the references cited therein . however , it is shown in rosenbaum and tsybakov ( 2010 ) that dealing with a noisy observation of the regression matrix has severe consequences . in particular, the lasso and dantzig selector become very unstable in this context .an alternative procedure , called the matrix uncertainty selector ( mu selector for short ) is proposed in rosenbaum and tsybakov ( 2010 ) in order to account for the presence of noise .the mu selector is defined as a solution of the minimization problem where denotes the -norm , , is a given subset of characterizing the prior knowledge about , and the constants and depend on the level of the noises and respectively .if the noise terms and are deterministic , it is suggested in rosenbaum and tsybakov ( 2010 ) to choose such that and to take with such that where , for a matrix , we denote by its componentwise -norm . in this paper, we propose a modification of the mu selector for the model where is a random matrix with independent and zero mean entries such that the sums of expectations are finite and admit data - driven estimators .our main example where such estimators exist is the model with data missing at random ( see below ) .the idea underlying the new estimator is the following .in the ideal setting where there is no noise , the estimation strategy for is based on the matrix . when there is noise this is impossible since is not observed and so we have no other choice than using instead of .however , it is not hard to see that under the above assumptions on , the matrix appearing in contains a bias induced by the diagonal entries of the matrix whose expectations do not vanish .if can be estimated from the data , it is natural to make a bias correction .this leads to a new estimator defined as a solution of the minimization problem where is the diagonal matrix with entries , which are estimators of , and and are constants that will be specified later .this estimator will be called the compensated mu selector . in this paper , we show both theoretically and numerically that the estimator achieves better performance than the original mu selector . 
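to make the estimator concrete , the following python sketch casts the compensated mu selector as a linear program by splitting the parameter vector into its positive and negative parts , in the spirit of remark [ rem:1 ] below ; the function names , the use of scipy's linprog and the way the l1 term on the right - hand side of the constraint is handled are our own choices and not taken from the paper .

```python
# minimal sketch: compensated MU selector via linear programming.
# theta = u - v with u, v >= 0; ||theta||_1 is handled through sum(u + v).
import numpy as np
from scipy.optimize import linprog

def compensated_mu_selector(Z, y, D_hat, mu, tau):
    n, p = Z.shape
    Psi = Z.T @ Z / n - np.diag(D_hat)     # bias-corrected empirical Gram matrix
    c_hat = Z.T @ y / n
    J = np.ones((p, p))

    # constraints:  |c_hat - Psi (u - v)|_j <= mu * sum(u + v) + tau  for all j
    A_ub = np.vstack([
        np.hstack([-Psi - mu * J,  Psi - mu * J]),   # upper side
        np.hstack([ Psi - mu * J, -Psi - mu * J]),   # lower side
    ])
    b_ub = np.concatenate([tau - c_hat, tau + c_hat])

    res = linprog(c=np.ones(2 * p), A_ub=A_ub, b_ub=b_ub,
                  bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    u, v = res.x[:p], res.x[p:]
    return u - v
```

setting the bias correction to zero recovers the original mu selector , and setting in addition the l1 coefficient of the constraint to zero gives a dantzig - selector - type estimator based on the noisy design .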
in particular , under natural conditions given below , the bounds on the error of the compensated mu selector decrease as up to logarithmic factors as , whereas for the original mu selector the corresponding bounds do not decrease with and can be only small if the noise is small .[ rem:1 ] the problem ( [ cmu ] ) is equivalent to where with the same and as in ( [ cmu ] ) ( see the proof in section [ sec : preuve ] ) .this simplifies in some cases the computation of the solution .an important example where the values can be estimated is given by the model with missing data .assume that the elements of the matrix are unobservable , and we can only observe where for each fixed , the factors are i.i.d .bernoulli random variables taking value 1 with probability and 0 with probability , .the data is missing if , which happens with probability . we can rewrite ( [ ex1 ] ) in the form where , .thus , we can reduce the model with missing data ( [ ex1 ] ) to the form ( [ 00 ] ) with a matrix whose elements have zero mean and variance .so , in section [ sec : stoch ] below , we show that when the are known , the admit good data - driven estimators .if the are unknown , they can be readily estimated by the empirical frequencies of 0 that we further denote by .then the appearing in ( [ ex2 ] ) are not available and should be replaced by .this slightly changes the model and implies a minor modification of the estimator ( cf .section [ sec : stoch]).consider the following random matrices where is the diagonal matrix with diagonal elements , , and for a square matrix , we denote by the matrix with the same dimensions as , the same diagonal elements as and all off - diagonal elements equal to zero . under conditions that will be specified below , the entries of the matrices are small with probability close to 1 .bounds on the -norms of the matrices characterize the stochastic error of the estimation .the accuracy of the estimators is determined by these bounds and by the properties of the gram matrix for a vector , we denote by the vector in that has the same coordinates as on the set of indices and zero coordinates on its complement .we denote by the cardinality of . to state our results in a general form , we follow gautier and tsybakov ( 2011 ) and introduce the sensitivity characteristics related to the action of the matrix on the cone .for ] , we define the _ sensitivity _ as follows : we will also consider the _ coordinate - wise sensitivities _ where is the coordinate of , . to get meaningful bounds for various types of estimation errors , we will need the positivity of or .as shown in gautier and tsybakov ( 2011 ) , this requirement is weaker than the usual assumptions related to the structure of the gram matrix , such as the restricted eigenvalue assumption and the coherence assumption . for completeness , we recall these two assumptions .* assumption re( ) . *_ let .there exists a constant such that for all subsets of of cardinality . _* assumption c. * _ all the diagonal elements of are equal to 1 and all its off - diagonal elements of satisfy the coherence condition : for some . _+ note that assumption c with implies assumption re( ) with , see bickel et al .( 2009 ) or lemma 2 in lounici ( 2008 ) . 
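the coherence condition above is easy to inspect numerically . the short sketch below ( illustrative values only ) computes the coherence of a column - normalized gaussian design and compares it with the sqrt(log p / n ) scale ; conditions of this type are only useful when the coherence is small compared with the inverse sparsity , which is why they effectively restrict how large the sparsity can be relative to n and p .

```python
# minimal sketch: mutual coherence of a column-normalized random design.
import numpy as np

rng = np.random.default_rng(3)
n, p = 500, 100
X = rng.standard_normal((n, p))
X /= np.sqrt(np.mean(X ** 2, axis=0))      # unit diagonal of (1/n) X'X

Psi = X.T @ X / n
coherence = np.max(np.abs(Psi - np.diag(np.diag(Psi))))
print(coherence, np.sqrt(np.log(p) / n))   # comparable orders of magnitude
```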
from proposition 4.2 of gautier and tsybakov ( 2011 )we get that , under assumption c with , which yields the control of the sensitivities for all since by proposition 4.1 of gautier and tsybakov ( 2011 ) .furthermore , proposition 9.2 of gautier and tsybakov ( 2011 ) implies that , under assumption re( ) , and by proposition 9.3 of that paper , under assumption re( ) for any and any , we have where .in this section , we give bounds on the estimation and prediction errors of the compensated mu selector . for , we consider the thresholds and , , such that and define and where and is a given subset of . for ,the compensated mu selector is defined as a solution of the minimization problem we have the following result .[ t1 ] assume that model ( [ 0])([00 ] ) is valid with an -sparse vector of parameters , where is a given subset of .for , set then , with probability at least , the set is not empty and for any solution of we have the proof of this theorem is given in section [ sec : preuve ] . note that ( [ t1:3 ] ) contains a bound on the prediction error under no assumption on : the other bounds in theorem [ t1 ] depend on the sensitivities . using ( [ k1 ] ) ( [ k4 ] ) we obtain the following corollary of theorem [ t1 ] .[ t2 ] let the assumptions of theorem [ t1 ] be satisfied .then , with probability at least , for any solution of we have the following inequalities .\(i ) under assumption re( ) : ( ii ) under assumption re( ) , : ( iii ) under assumption c with : where we set .if the components of and are subgaussian , the values are of order up to logarithmic factors , and the value is of the same order in the model with missing data ( see section [ sec : stoch ] ) .then , the bounds for the compensated mu selector in theorem [ t2 ] are decreasing with rate as .this is an advantage of the compensated mu selector as compared to the original mu selector , for which the corresponding bounds do not decrease with and can be small only if the noise is small ( cf .rosenbaum and tsybakov ( 2010 ) ) . if the matrix is observed without error ( ) , then , , and the compensated mu selector coincides with the dantzig selector . in this particular case , the results ( ii ) and ( iii ) of theorem [ t2 ] improve , in terms of the constants or the range of validity , upon the corresponding bounds in bickel et al .( 2009 ) and lounici ( 2008 ) .theorems [ t1 ] and [ t2 ] are stated with general thresholds and , and can be used both for random or deterministic noises ( in the latter case , ) and random or deterministic . in this section ,considering we first derive the values for random and with subgaussian entries , and then we specify and the matrix for the model with missing data .note that , for random and , the values and characterize the stochastic error of the estimator .recall that a zero - mean random variable is said to be -subgaussian ( ) if , for all , \leq \text{exp}(\gamma^2t^2/2).\ ] ] in particular , if is a zero - mean gaussian or bounded random variable , it is subgaussian .a zero - mean random variable will be called -subexponential if there exist and such that \leq \text{exp}(\gamma^2t^2/2 ) , \quad \forall \ satisfy the following assumption .* assumption n. * _ let , .the entries , of the matrix are zero - mean -subgaussian random variables , the rows of are independent , and for , . the components of the vector are independent zero - mean -subgaussian random variables satisfying , . _assumption n implies that the random variables , are subexponential . 
indeed , if two random variables and are subgaussian , then for some we have , which implies that ( [ subexp ] ) holds for with some whenever , cf . , e.g. , petrov ( 1995 ) , page 56 .next , is a zero - mean subexponential random variable with variance .it is easy to check that ( [ subexp ] ) holds for with and . to simplify the notation, we will use a rougher evaluation valid under assumption n , namely that all , are -subexponential with the same and , and all are -subexponential . here the constants and depend only on and . for and an integer , set [ lem : subexp ]let assumption n be satisfied , and let be a deterministic matrix with .then for any the bound ( [ pr2 ] ) holds with use the union bound and the facts that for a -subgaussian , and for a sum of independent -subexponential .consider now the model with missing data ( [ ex1 ] ) and assume that is non - random .then we have , which implies : = x_{ij}^2(1-\pi_j)\ , , \quad j=1,\dots , p.\ ] ] hence , is an unbiased estimator of .then defined in ( [ ex3 ] ) is naturally estimated by the matrix is then defined as a diagonal matrix with diagonal entries .it is not hard to prove that approximates in probability with rate up to a logarithmic factor .for example , let the probability that the data is missing be the same for all : .then where we have used the fact that , hoeffding s inequality and the notation .this proves ( [ pr1 ] ) with if is unknown , we replace it by the estimator , where denotes the indicator function .another difference is that appearing in ( [ ex2 ] ) are not available when s are unknown .therefore , we slightly modify the estimator using instead of ; we define as a solution of with where and are suitably chosen constants , is the matrix with entries , and is a diagonal matrix with entries this modification introduces in the bounds an additional term proportional to , which is of the order in probability and hence is negligible as compared to the error bound for the compensated mu selector . in this section ,we have considered non - random . using the same argument, it is easy to derive analogous expressions for and when is a random matrix with independent sub - gaussian entries , and , are independent from .the bounds of theorems [ t1 ] and [ t2 ] depend on the unknown matrix via the sensitivities , and therefore can not be used to provide confidence intervals . in this section ,we show how to address the issue of confidence intervals by deriving other type of bounds based on the empirical sensitivities .note first that the matrix is a natural estimator of the unknown gram matrix .it is -consistent in -norm under the conditions of the previous section .therefore , it makes sense to define the empirical counterparts of and by the relations : that we will call the _ empirical sensitivities _ can be efficiently computed for small or , alternatively , one can compute data - driven lower bounds on them for any using linear programming , cf . 
gautier and tsybakov ( 2011 ) .the following theorem establishes confidence intervals for -sparse vector based on the empirical sensitivities .[ t3 ] assume that model ( [ 0])([00 ] ) is valid with an -sparse vector of parameters , where is a given subset of .then , with probability at least , for any solution of we have where , and we set .set , and write for brevity using lemma [ lem3 ] in section [ sec : preuve ] , the fact that where is the set of non - zero components of ( cf .lemma 1 in rosenbaum and tsybakov ( 2010 ) ) and the definition of the empirical sensitivity , we find this and the definition of yield ( [ t3:1 ] ) .the proof of ( [ t3:2 ] ) is analogous , with used instead of .note that the bounds ( [ t3:1])([t3:2 ] ) remain valid for .therefore , if one gets an estimator of such that with high probability , it can be plugged in into the bounds in order to get completely feasible confidence intervals .we consider here the model with missing data ( [ ex1 ] ) .simulations in rosenbaum and tsybakov ( 2010 ) indicate that in this model the mu selector achieves better numerical performance than the lasso or the dantzig selector .here we compare the mu selector with the compensated mu selector .we design the numerical experiment the following way . we take a matrix of size ( ) which is the normalized version ( centered and then normalized so that all the diagonal elements of the associated gram matrix are equal to 1 ) of a matrix with i.i.d .standard gaussian entries .+ for a given integer , we randomly ( uniformly ) choose non - zero elements in a vector of size .the associated coefficients are set to , and all other coefficients are set to 0 .we take .+ we set , where a vector with i.i.d .zero mean and variance normal components , .+ we compute the values with as in , . ] and for all .( the value rather than its empirical counterpart , which is very close to , is used in the algorithm to simplify the computations ) .+ we run a linear programming algorithm to compute the solutions of and where we optimize over . to simplify the comparison with rosenbaum and tsybakov ( 2010 ), we write in the form with .in particular , corresponds to the dantzig selector based on the noisy matrix . in practice, one can use an empirical procedure of the choice of described in rosenbaum and tsybakov ( 2010 ) .the choice of is not crucial and influences only slightly the output of the algorithm .the results presented below correspond to chosen in the same way as in the numerical study in rosenbaum and tsybakov ( 2010 ) .+ we compute the error measures .+ for each value of we run monte carlo simulations. tables 15 present the empirical averages and standard deviations ( in brackets ) of , , of the number of non - zero coefficients in ( ) and of the number of non - zero coefficients in belonging to the true sparsity pattern ( ) .we also present the total number of simulations where the sparsity pattern is exactly retrieved ( exact ) .the lines with " for correspond to the mu selector and those with " to the compensated mu selector . 
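the protocol just described can be sketched in a few lines of python ( the tables below report the actual results ) ; the sketch assumes the compensated_mu_selector function from the earlier snippet is in scope , uses the same bias - correction formula as in the missing - data discussion , and all numerical settings ( dimensions , sparsity , noise level , missingness , tuning constants ) are illustrative placeholders rather than the values used in the experiments .

```python
# minimal sketch of the monte carlo comparison; requires the
# compensated_mu_selector function defined in the earlier snippet.
import numpy as np

def run_once(rng, n=100, p=200, s=3, sigma=0.3, pi=0.9, mu=0.15, tau=0.3):
    X = rng.standard_normal((n, p))
    X -= X.mean(axis=0)
    X /= np.sqrt(np.mean(X ** 2, axis=0))      # unit diagonal of the Gram matrix
    theta = np.zeros(p)
    theta[rng.choice(p, size=s, replace=False)] = 1.0
    y = X @ theta + sigma * rng.standard_normal(n)

    eta = rng.random((n, p)) < pi              # missing-at-random mask
    Z = X * eta / pi                           # rescaled observed design
    D_hat = (1.0 - pi) * np.mean(Z ** 2, axis=0)   # data-driven bias correction

    th_mu = compensated_mu_selector(Z, y, np.zeros(p), mu, tau)   # plain MU selector
    th_cmu = compensated_mu_selector(Z, y, D_hat, mu, tau)        # compensated version
    return (np.sum(np.abs(th_mu - theta)), np.sum(np.abs(th_cmu - theta)))

rng = np.random.default_rng(2)
errs = np.array([run_once(rng) for _ in range(20)])
print(errs.mean(axis=0))   # average l1 errors: [MU selector, compensated MU selector]
```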
+ [ cols="^,^,^,^,^,^",options="header " , ] the results of the simulations are quite convincing .indeed , the compensated mu selector improves upon the mu selector with respect to all the considered criteria , in particular when is very sparse ( ) .the order of magnitude of the improvement is such that , for the best , the errors and are divided by .the improvement is not so significant for larger , especially for when the model starts to be not very sparse .for all the values of , the non - zero coefficients of are systematically in the sparsity pattern both of the mu selector and of the compensated mu selector .the total number of non - zero coefficients is always smaller ( i.e. , closer to the correct one ) for the compensated mu selector .finally , note that the best results for the error measures and are obtained with , while the sparsity pattern is better retrieved for .this reflects a trade - off between estimation and selection .* proof of remark [ rem:1 ] . * it is enough to show that where let first . using the triangle inequality , we easily get that .now take .we set and consider defined by for , where and are the components of and respectively .it is easy to check that , which concludes the proof .we first write that is equal to by definition of the and , with probability at least we have therefore with probability at least . throughout the proof, we assume that we are on event of probability at least where inequalities hold and .we have consequently , using that , we easily get that is not greater than now remark that finally , using that together with the fact that , we obtain the result .we now proceed to the proof of theorem [ t1 ] .the bounds ( [ t1:1 ] ) and ( [ t1:2 ] ) follow from lemma [ lem4 ] , the fact that where is the set of non - zero components of ( cf . lemma 1 in rosenbaum and tsybakov ( 2010 ) ) and the definition of the sensitivities , . to prove ( [ t1:3 ] ) , first note that and use ( [ t1:1 ] ) with and lemma [ lem4 ] .this yields the first term under the minimum on the right hand side of ( [ t1:3 ] ) .the second term is obtained again from ( [ p1 ] ) , lemma [ lem4 ] and the inequality .* the bounds ( [ t2:1 ] ) and ( [ t2:4 ] ) follow by combining ( [ t1:1 ] ) with ( [ k3 ] ) and with ( [ k1 ] ) ( [ k2 ] ) respectively .next , ( [ t2:2 ] ) follows from ( [ t1:3 ] ) and ( [ k3 ] ) . also , as an easy consequence of ( [ t1:1 ] ) and ( [ k4 ] ) with we get finally , ( [ t2:3 ] ) follows from this inequality and ( [ t2:1 ] ) using the interpolation formula , and the fact that .belloni , a. , and chernozhukov , v. ( 2011 ) .high dimensional sparse econometric models : an introduction . in : _ inverse problems and high dimensional estimation , stats in the chteau 2009 _ , alquier , p. , e. gautier , and g. stoltz , eds . , _ lecture notes in statistics _ , * 203 * 127162 , springer , berlin . | we consider the regression model with observation error in the design : here the random vector and the random matrix are observed , the matrix is unknown , is an random noise matrix , is a random noise vector , and is a vector of unknown parameters to be estimated . we consider the setting where the dimension can be much larger than the sample size and is sparse . because of the presence of the noise matrix , the commonly used lasso and dantzig selector are unstable . an alternative procedure called the matrix uncertainty ( mu ) selector has been proposed in rosenbaum and tsybakov ( 2010 ) in order to account for the noise . 
the properties of the mu selector have been studied in rosenbaum and tsybakov ( 2010 ) for sparse parameter vectors under the assumption that the noise matrix is deterministic and its values are small . in this paper , we propose a modification of the mu selector for the case where the noise matrix is random with zero - mean entries whose variances can be estimated . this is , for example , the case in the model where the entries of the design matrix are missing at random . we show both theoretically and numerically that , under these conditions , the new estimator , called the compensated mu selector , achieves better accuracy of estimation than the original mu selector . |
* the length is sufficient to characterize the exponential growth of each cell*. various techniques have been put forth to analyze cell morphology gathered from single cell images .recent work on image analysis of single cells has attempted to optimize two problems : separation of distinct ( but potentially overlapping ) cells and accurate determination of the edge of each cell . because crowding is not an issue in our setup, we could focus solely on constructing an algorithm to delineate each cell contour accurately and precisely . as shown in fig .1a , b and described in the methods section , we first segment each cell using pixel - based edge detection similar to , then perform spline interpolation to determine the cell contour at sub - pixel resolution .the sequence of such images for each single cell constitutes a trajectory in time that serves as the basis for quantitative analysis .division events are then detected in an automated fashion using custom python code , and used to divide time trajectories for each cell into individual generations .all data shown here were obtained by observing 260 single _ c. crescentus _ cells perfused in complex medium ( peptone - yeast extract ; pye ) at 31 over the course of 2 days ( corresponding to 9672 separate generations ) . under these conditions ,the mean population growth rate and division time remain constant , so we treat the trajectories of individual generations as members of a single ensemble . in other words, we segment each cell trajectory by generation and take the resulting initial frame ( i.e. , immediately following division ) as minutes . in order to average over the ensemble , we then bin quantitative information according to time since division , , normalized by the respective division time .the normalized time , , serves as a cell - cycle phase variable . for our quantitative analysis , we focus on a set of three intuitive and independent parameters that characterize cell shape at each stage of growth : length , width , and radius of curvature ( fig . 1c ) .they are calculated directly from each splined contour as follows ( see also supplementary fig .1 ) : * we define the length , , as the pole - to - pole distance along the contour of the cell medial axis at the normalized time ( fig . 2a ) . * we assign a single radius of curvature , , to each cell based upon the best - fit circle to the medial axis ( fig .although stalked ( ) and swarmer ( ) portions may be described by different radii of curvature toward the end of the cell cycle , the average radius obtained by averaging the contributions of each portion yields the same value , i.e. , ( see supplementary fig . 2c ) .* we define the width , as the length of the perpendicular segment spanning from one side of the cell contour to the other at each position along the medial axis , which runs from at the stalked pole to at the swarmer pole .furthermore , we spatially averaged the width over positions along the medial axis , , to obtain a characteristic width at each time point ( fig . 
2c ) .the mean division time is min , where indicates a population average .we find that increases exponentially with time constant min , essentially the same time constant that we previously observed for the cross - sectional area , while and remain approximately constant for and each shows a dip for when cell constriction becomes prominent .the sharp rise in seen for results from independent alignment of the stalked and swarmer portions with the microfluidic flow as they become able to move independently ( i.e. , fluctuate easily about the plane of constriction ) .these observations confirm the assumptions in that the length is sufficient to describe the growth of cell size .moreover , we can track the dynamics of the spanning angle , , using the relation . *mechanical model for cell shape and growth*. there are many details of cell growth and shape that require interpretation . for example , it is not obvious _ a priori _ that growth should be almost exclusively longitudinal .therefore , we have developed a minimal mechanical model that can explain these observations .we parametrize the geometry of the cell wall by a collection of shape variables , where , , and are the parameters introduced above ( fig .as the cell grows in overall size , we postulate that the rate of growth in the shape parameter is proportional to the net decrease in cell wall energy , , per unit change in .assuming linear response , the configurational rate of strain , , is proportional to the corresponding driving force , in analogy with the constitutive law of newtonian flow : where the constant describes the rate of irreversible flow corresponding to the variable . according to equation , exponential growth occurs if is constant , whereas reaches a steady - state value if along with the condition .it thus remains to specify the form of . for a _c. crescentus _ cell of total volume and surface area , our model for the total energy in the cell wall is given by where is a constant pressure driving cell wall expansion ; is the tension on the surface of the cell wall ; is the energy required to maintain the cell width ; represents the mechanical energy required to maintain the crescent cell shape ; is the energy driving cell wall constriction .traditionally was taken to be the turgor pressure ; while the importance of the turgor pressure has recently been questioned , an effective pressure must still arise from the synthesis and insertion of peptidoglycan strands that constitute the cell wall .we note that a purely elastic description of cell wall mechanics would lead to a curvature - dependent surface tension .however , if growth is similar to plastic deformation , the tension is uniform .the effective tension in our model depends on the local surface curvatures through the energy terms and , that describe harmonic wells around preferred values of surface curvatures .the mechanical energy for maintaining width is given by where the constant is the preferred radius of curvature , is the bending rigidity and is a differential area element .contributions to can come from the peptidoglycan cell wall as well as membrane - associated cytoskeletal proteins like mreb , mrec , rodz , etc . , which are known to control cell width .in addition to maintaining a constant average width , _ c. 
crescentus _ cells exhibit a characteristic crescent shape , which relies on expression of the intermediate filament - like protein crescentin .although the mechanism by which crescentin acts is not known , various models have been proposed , including modulation of elongation rates across the cell wall and bundling with a preferred curvature .we assume the latter and write the energy for maintaining the crescent shape as where is the arc - length parameter along the crescentin bundle attached to the cell wall , is the local curvature , is the preferred radius of curvature , is the contour length , and is the linear bending rigidity . equation accounts for the compressive stresses generated by the crescentin bundle on one side of the cell wall , leading to a reduced rate of cell growth , according to equation . as a result, the cell wall grows differentially and maintains a non - zero curvature of the centerline . in the absence of crescentin ( ), our model predicts an exponential decay in the cell curvature that leads to a straight morphology , consistent with previous observations . finally , one must also account for the energy driving cell wall constriction .constriction proceeds via insertion of new peptidoglycan material at the constriction site .this process leads to the formation of daughter pole caps .we take constriction to be governed by an energy of the form , where is the surface area of the septal cell wall , and is the energy per unit area released during peptidoglycan insertion .* there exists an optimal cell geometry for a given mechanical energy*. to apply the model introduced above ( equations and ) to interpreting the data in fig . 2, we assume a minimal cell geometry given by a toroidal segment with uniform radius of curvature , uniform cross - sectional width and the spanning angle . to this end, we estimate as many mechanical parameters as we can from the literature and then determine the rest by fitting our experimentally measured values .turgor pressure in gram - negative bacteria has been measured to be in the range mpa .we use a value for the effective internal pressure close to the higher end of the measured values for turgor pressure , mpa , in order to account for peptidoglycan insertion .we estimate the surface tension as nn m ( see supplementary model section ) and multiply it by the cell surface area to obtain the cell wall surface energy .first , we neglect cell constriction ( setting ) and assume that the crescentin structure spans the length of the cell wall ( excluding the endcaps ) , with a contour length . the mechanical properties of mreb and crescentin are likely similar to those of f - actin and intermediate filaments , respectively .however , due to a lack of direct measurements , we obtain the mechanical parameters and by fitting the model to the experimental data .as desired , we find that the total energy has a stable absolute minimum at particular values of the cross - section diameter and the centerline radius , given by solution of ( see supplementary fig . 
4 ) .the measured values are m and m ( ) , and , as indicated by the red solid curves in fig .2b , c , the model reproduces them with nn m and nn .while the fitted value for is numerically close to the estimate based on the known mechanical properties of intermediate filaments ( nn ) , the value for is much higher than the bending rigidity of mreb bundles ( see supplementary information ) .this indicates that is only determined in part by mreb and can have contributions from the cell wall .given stable values for and , growth is completely described by the dynamics of the angle variable .consequently , we write the total energy in the scaling form , with the energy density along the longitudinal direction .the condition for growth then becomes , such that the energy is minimized for increasing values of . from our experimental data ,the angle spanned by the cell centerline increases by an amount during the entire cycle . using our parameter estimates and fitting the data in fig . 2, we obtain a numerical value for the energy density nn m .we relate the angle dynamics to the length by where ( ) is the rate of longitudinal growth , which can be interpreted as resulting from remodeling of peptidoglycan subunits with a mean current , across the cell surface area . from an exponential fit to the data for cell length ( fig .2a ) , we obtain ( nn m min) , which gives us an estimate of the friction coefficient , nn min , associated with longitudinal growth ; e.g. , mreb motion that is known to correlate strongly with the insertion of peptidoglycan strands .our results are consistent with previous observations of _ c. crescentus _ cells with arrested division but continued growth .* constriction begins early and proceeds with the same time constant as exponential growth*. having characterized the dynamics of growth , we now turn to constriction at the division plane .as mentioned above , we obtain the experimental width at each point along each cell s medial axis .the typical width profile is non - uniform along its length , exhibiting a pronounced invagination near the cell center ( with width ; fig .this invagination , which ultimately becomes the division plane , is readily identifiable early in the cell cycle , even before noticeable constriction occurs .we discuss the kinetics of constriction in this section , and focus on its location later in the manuscript . as shown in fig .3a ( black points ) , progressively decreases towards zero until pinching off at . due to the limited spatial resolution of our imaging ( phase contrast microscopy ) , the pinch - off process occurring for could not be captured , but at earlier times ( i.e. , ) is precisely determined as a function of . to model the dynamics of constriction , we assume as in ref . ( fig .3a , inset ) : ( i ) the shape of the zone of constriction is given by two intersecting and partially formed hemispheres with radii ; and ( ii ) constriction proceeds by completing the missing parts of the hemisphere such that the newly formed cell wall surface maintains the curvature of the pre - formed spherical segments .as a result , a simple geometric formula is obtained that relates the width of the constriction zone , , to the surface area of the newly formed cell wall , where is the maximum surface area achieved by the caps as the constriction process is completed , i.e. 
, when .we assume that the addition of new cell wall near the division plane initiates with a rate , , and thereafter grows exponentially with a rate , , according to , subject to the initial condition . the first term on the right - hand side of equation follows from equation , using as the shape variable , after incorporating the constriction energy .the rate of septal peptidoglycan synthesis , , is thus directly proportional to the energy per unit area released during constriction , .the solution , , can then be substituted into equation to derive the time - dependence of , whose dynamics is controlled by two time scales : and .fitting equation with the data for , we obtain min and min .the fitted values for the time constants controlling constriction dynamics ( and ) are remarkably similar to that of exponential cell elongation ( min ) .this shows that septal growth proceeds at a rate comparable to longitudinal growth .therefore , one of the main conclusions that we draw is that cell wall constriction ( fig .3a ) is controlled by the same time constant as exponential longitudinal growth ( fig .2a ) . having determined the dynamics of , we compute the average width across the entire cell using the simplified shape of the constriction zone as shown in fig . 3a ( inset ) .the resultant prediction ( blue solid curve in fig .2c ) is in excellent agreement with the experimental data and captures the dip in seen for .constriction also leads to a drop in the average radius of curvature of the centerline , as shown by the experimental data in fig .in the supplementary material we derive a relation between the centerline radius of curvature and the minimum width , given by , predicting that cell curvature increases at the same rate as drops . using this relation , we are able to quantitatively capture the dip in seen for ( solid blue curve in fig .2b ) without invoking any additional fitting parameters . *origin of the asymmetric location of the primary invagination*. we now consider the position of the division plane and its interplay with cell shape . as shown in fig .3b , the distance of the width minimum from the stalked pole ( ) increases through the cell cycle at the same rate as the full length of the growing cell ( ) , such that their ratio remains constant with time - averaged mean .the presence of the primary invagination early in the cell cycle is reiterated in fig .3c , which shows the width profile constructed by ensemble - averaging over each cell at the timepoint immediately following division .in addition to the width minimum , there are two characteristic maxima near either pole , and , respectively ( fig . 3c , inset ) . as evident in fig . 3c, the stalked pole diameter is on average larger than its swarmer counterpart ( also see supplementary fig .we show that the asymmetric location of the invagination ( and the asymmetric width profile ) can originate from the distinct mechanical properties inherent to the pole caps in _ c. crescentus_. the shapes of the cell poles can be explained by laplace s law that relates the pressure difference , , across the cell wall to the surface tensions in the stalked or the swarmer pole , .the radii of curvature of the poles then follow from laplace s law where the superscript ( ) denotes the stalked or the swarmer pole .thus a larger radius of curvature in the poles has to be compensated by a higher surface tension to maintain a constant pressure difference . 
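a quick numerical illustration of this trade - off , assuming hemispherical pole caps so that the tension obeys gamma = delta_p * r / 2 ; the pressure and pole diameters used below are placeholders of the right order of magnitude , not the measured values reported in the supplementary figures .

```python
# minimal sketch: Laplace's law for hemispherical pole caps, gamma = dp * r / 2.
delta_p = 0.3e6                            # effective pressure difference, Pa
w_stalked, w_swarmer = 0.78e-6, 0.72e-6    # hypothetical pole diameters, m

gamma_stalked = delta_p * (w_stalked / 2.0) / 2.0   # N/m
gamma_swarmer = delta_p * (w_swarmer / 2.0) / 2.0

# at fixed pressure the tension ratio equals the radius ratio, so the wider
# (stalked) pole must carry proportionally higher tension, i.e. it is stiffer.
print(gamma_stalked / gamma_swarmer, w_stalked / w_swarmer)
```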
assuming that the poles form hemispheres , we have .our data indicate that the early time ratio for ( ) shows a strong positive correlation with the ratio , with an average value ( see supplementary fig 3a ) .laplace s law then requires that the stalked pole be mechanically stiffer than the swarmer pole ; .this observation suggests that the asymmetry in the lengths of the stalked and swarmer parts of the cell depends upon different mechanical properties of the respective poles .to quantitatively support this claim , we investigate an effective contour model for the cell shape . to this end , we assume that the fluctuations in cell shape relax more rapidly than the time scale of growth .this separation of timescales allows us to derive the equation governing the cell contour by minimizing the total mechanical energy ( equation ) . from the solution we compute the resultant width profile for the entire cell ( see supplementary model section ) . as shown in fig .3d , the model with asymmetric surface tensions of the poles causes the primary invagination to occur away from the cell mid - plane .the spatial location of the invagination relative to the cell length depends linearly on the ratio .symmetry is restored for , as shown in fig .3d ( blue dashed curve ) .we note that a gradient in along the cell body would imply differences in longitudinal growth rates between the stalked and the swarmer portions of the cell ( eq . ) .our data exclude this possibility since both and grow at the same rate , as evidenced by the constancy of their ratio ( fig .3b and supplementary fig .because _ c. crescentus _ does not exhibit polar growth , the _ polar stiffness model _ is consistent with the observed uniformity in longitudinal growth rate .in addition , the non - uniformity in cell width comes from the differences in mechanical response in the cell wall due to preferential attachment of crescentin along the concave sidewall . for a cres mutant cell ( where ) , our model predicts a uniform width profile before the onset of constriction .* cell shape evolution during wall constriction*. the experimental width profiles show that the growing and constricting cells typically develop a second minimum in width ( fig .these secondary invaginations are observed in both the stalked and swarmer portions of single cells in the predivisional stage ( ) , although they are more common in the stalked portions ( fig .we show here that these secondary minima become the primary minima in each of the daughter cells . to study the dynamics of the development of the secondary minimum we introduce a new quantity , , defined as the distance from the stalked pole to the secondary minimum in the stalked part ( see fig . 5c , inset )we find that the ratio has a mean value of 0.55 at later points in the cell cycle ( fig .5b ) , equal to the constant ratio maintained by the distance from the stalked pole to the primary minimum , .in fact the kymograph of width profiles ( shown over 2 generations for a representative single cell ) in fig .5c demonstrates that the predivisional secondary invaginations are inherited as primary invaginations after division .this mechanism provides continuity and inheritance of the invaginations across generations and is an intrinsic element of the mechanism for cell division in _ c. crescentus_. 
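a minimal sketch ( not the published analysis code ) of how such primary and secondary invaginations can be located on a measured width profile is given below ; it applies scipy's peak finder to the negated profile , takes the deepest minimum as the division - plane invagination and the deepest remaining minimum on the stalked side as the secondary one , and the synthetic profile in the example is purely illustrative .

```python
# minimal sketch: locate primary and secondary width-profile minima.
import numpy as np
from scipy.signal import find_peaks

def locate_invaginations(s, w, min_separation=0.05):
    """s, w: 1-d arrays of normalized arc-length positions (stalked pole at 0) and widths."""
    dist = max(1, int(min_separation * len(s)))    # suppress spurious neighbouring minima
    idx, _ = find_peaks(-w, distance=dist)
    if idx.size == 0:
        return None, None
    primary = idx[np.argmin(w[idx])]               # deepest minimum: division plane
    stalked_side = idx[idx < primary]              # candidates between stalked pole and primary
    secondary = stalked_side[np.argmin(w[stalked_side])] if stalked_side.size else None
    return s[primary], (s[secondary] if secondary is not None else None)

# example on a synthetic predivisional profile with two constrictions.
s = np.linspace(0.0, 1.0, 400)
w = (0.75 - 0.25 * np.exp(-((s - 0.58) / 0.04) ** 2)
          - 0.08 * np.exp(-((s - 0.32) / 0.04) ** 2))
print(locate_invaginations(s, w))   # ~(0.58, 0.32): primary and secondary positions as fractions of L
```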
to quantitatively explain the experimental width profiles during constriction , we use our mechanical model to determine the instantaneous cell shape by minimizing the total energy ( equation ) at the specified time points ( see supplementary model section ) . to take constriction into account, we impose the constraint that , where is determined by equations and .in addition , we assume non - uniform materials properties in the cell wall by taking the tension in the cell poles ( ) and the septal region to be higher than the rest of the cell . as constriction proceeds and decreases , we compute the shape of the cell contours ( fig .4c ) and the corresponding width profiles ( fig .the computed width profiles faithfully reproduce the secondary invaginations , which become more pronounced as the daughter pole caps become prominent .an example of the experimental width profiles is shown in fig .4b at evenly - spaced intervals in time for a single generation , and the corresponding model width profiles are shown in fig .we note that the experimental cell contours in the predivisional stage ( ) bend away from the initial midline axis and develop an alternate growth direction ( fig . 4a , blue contour ) .these bend deformations are induced by the microfluidic flow about the pinch - off plane ; the cells become increasingly `` floppy '' as the constriction proceeds .the consistent propagation of a specific shape through the processes of growth and division relies upon an intricate interplay between the controlled spatiotemporal expression and localization of proteins , and cytoskeletal structural elements .the high statistical precision of our measurements allows us to gain new insights into cell morphology . from precise determination of cell contours over time , we observe that a typical cell width profile is non - uniform at all times with a pronounced primary invagination appearing during the earliest stages of the cell cycle . during cell constriction ,the decrease in the minimum width is governed by the same time constant as exponential axial growth ( fig .furthermore , the location of the primary invagination divides the cell contour into its stalked and swarmer compartments , such that the ratio of the length of the stalked part to the total pole - to - pole length remains constant during the cycle with a mean value ( fig .these observations and our mechanical model lead to two important conclusions : first , _ the dynamics of cell wall constriction and septal growth occur concomitantly _ , and second , _ the asymmetric location of the primary invagination can be explained by the differences in mechanical properties in the stalked and swarmer poles_. a corollary of the first conclusion is that the size ratio threshold at division occurs naturally without requiring a complex timing mechanism .in addition to the primary septal invagination , the cell contours exhibit a pronounced secondary invagination during the predivisional stages ( fig .remarkably , the secondary invaginations develop at a precise location relative to the total length of the stalked compartments , ( fig .the data thus allow a third conclusion : _ these secondary invaginations are inherited as primary invaginations in each of the daughter cells , directing the formation of the division plane in the next generation_. thus , through consistent and controlled nucleation of invaginations across generations , _ c. 
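to summarize the constriction kinetics used in this section in executable form , the sketch below integrates the septal - area dynamics ( an initiation rate plus exponential self - amplification ) and converts area to waist width through the two - hemisphere geometry of the constriction zone ; the relation w_min = w0 * sqrt(1 - (a / a_max)^2 ) is our reading of that geometric construction , and the parameter values are illustrative rather than the fitted time constants .

```python
# minimal sketch of the constriction kinetics under the two-hemisphere geometry.
import numpy as np

def w_min_model(t, w0=0.7, alpha=0.011, tau2=55.0):
    """t in minutes; w0 in micrometres; alpha in um^2 per minute; tau2 in minutes."""
    A_max = 4.0 * np.pi * (w0 / 2.0) ** 2     # area of the two completed hemispherical caps
    A = alpha * tau2 * np.expm1(t / tau2)     # solves dA/dt = alpha + A/tau2, A(0) = 0
    A = np.minimum(A, A_max)                  # pinch-off once the caps are complete
    return w0 * np.sqrt(1.0 - (A / A_max) ** 2)

t = np.linspace(0.0, 75.0, 76)
print(w_min_model(t)[::15])   # monotone decrease of the waist width toward pinch-off
```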
crescentus _ cells maintain a constant ratio of the sizes of stalked and swarmer daughter cells .our experimental observations and the parameters in the cell shape model can be related to the current molecular understanding for gram - negative bacteria , in particular _ c. crescentus_. before the onset of noticeable constriction , cell shape is dictated by the mechanical properties of the peptidoglycan cell wall in addition to various shape - controlling proteins such as mreb , mrec , rodz and cres .single molecule tracking studies have revealed that mreb forms short filamentous bundles anchored to the inner surface of the cell wall and moves circumferentially at a rate much faster than the rate of cell growth . _ in vitro _experiments show that mreb filaments can induce indentation of lipid membranes , suggesting that they may have a preferred radius of curvature .thus on time scales comparable to cell growth , is determined in part by the energy cost of adhering mreb bundles to the cell wall ( see supplementary model section ) .bacterial cell division is driven by a large complex of proteins , commonly known as divisomes that assemble into the z - ring structure near the longitudinal mid - plane of the cell .the z - ring contains ftsz protofilaments that are assembled in a patchy band - like structure .ftsz protofilaments are anchored to the cell membrane via ftsa and zipa , and play a crucial role in driving cell wall constriction . during constriction, the divisome proteins also control peptidoglycan synthesis and direct the formation of new cell wall via the activity of penicillin - binding proteins ( pbps ) .thus the divisome plays a two - fold role by concomitantly guiding cell wall constriction and growth of the septal peptidoglycan layer . according to our model the constriction of the cell wallis driven by the synthesis of septal cell wall at a rate ( ) , which can be directly related to the activity of pbps triggered by the divisome assembly .furthermore , in our model it is sufficient that the divisome guide the curvature of cell wall growth in the septal region ( see fig .3a , inset ) .while the mechanism behind the precise asymmetric location of the division plane in _ c. 
crescentus _ cells is not well understood , it is likely that the atpase mipz helps division site placement by exhibiting an asymmetric concentration gradient during the predivisional stage .mipz activity inhibits ftsz assembly ; as a result of polar localization of mipz , z - ring assembly is promoted near the mid - cell .our cell shape model suggests that the early time asymmetric location of the primary invagination , which develops into the division plane , is controlled by the differences in surface tensions maintained in the poles .the presence of this invagination at , as inherited from the secondary invaginations in the previous generation , aids in z - ring assembly at the site of the invagination .the curvature - sensing capability of the z - ring may be enabled by the minimization of the ftsz polymer conformational energy that is determined by the difference between cell surface curvature and ftsz spontaneous curvature .a higher tension in the stalked pole can be induced by asymmetric localization of polar proteins , such as popz , early in the cell cycle .experiments have shown that popz localizes to the stalked pole during the initial phase of the cell cycle and increasingly accumulates at the swarmer pole as the cell cycle proceeds .consistent with this observation , our data show that the correlation between the pole sizes ( determined by the ratio of surface tension to pressure ) and the stalked and swarmer compartment lengths tend to disappear later in the cycle ( supplementary fig .3 ) , as cell constriction proceeds .a recent experimental study also demonstrates that molecular perturbation of clp proteases can destroy the asymmetry of cell division in _ c. crescentus _ , suggesting the interplay of subcellular protease activity with the physical properties of the cell wall .earlier theoretical models have predicted that a small amount of pinch - off force from the z - ring ( pn ) is sufficient to accomplish division by establishing a direction along which new peptidoglycan strands can be inserted . in contrast , our data combined with the mathematical model allows the interpretation that _ the early time asymmetric invagination in the cell wall can set the direction for the insertion of new peptidoglycan strands_. constriction results from exponential growth of surface area in the septum ( at the same rate as longitudinal extension ) .the instantaneous cell shape is determined by minimizing the energy functional at given values of the cell size parameters .finally , from our estimate of the cell wall energy density ( nn m ) , we predict that a net amount nn m of mechanical energy is used by the peptidoglycan network for cell wall growth . for a _c. crescentus _ cell of surface area , layered with glycan strands of length nm and cross - linked by peptide chains with maximally stretched length nm , there are roughly peptidoglycan subunits . thus on average , each peptidoglycan subunit can consume mechanical energy of .4 nn m , or .6 at a temperature .cell wall remodeling and insertion of new peptidoglycan material can likely create defects in the peptidoglycan network .one thus expects cellular materials properties to change over time , as a result of these molecular scale fluctuations .although we neglect such variations in our mean field model , it nonetheless quantitatively captures the average trends in cell shape features . in futurework we plan to more closely connect the energy terms of the continuum model with molecular details .* acquisition of experimental data*. 
data were acquired as in ref . .briefly , the inducibly - sticky _ caulobacter crescentus _ strain fc1428 was introduced into a microfluidic device and cells were incubated for one hour in the presence of the vanillate inducer .the device was placed inside a homemade acrylic microscope enclosure ( ) equilibrated to 31 ( temperature controller : csc32j , omega and heater fan : hgl419 , omega ) . at the start of the experiment ,complex medium ( peptone - yeast extract ; pye ) was infused through the channel at a constant flow rate of 7 / min ( phd2000 , harvard apparatus ) , which flushed out non - adherent cells . a microscope( nikon ti eclipse with the `` perfect focus '' system ) and robotic xy stage ( prior scientific proscan iii ) under computerized control ( labview 8.6 , national instrument ) were used to acquire phase - contrast images at a magnification of 250x ( emccd : andor ixon+ du888 1k 1k pixels , objective : nikon plan fluor 100x oil objective plus 2.5x expander , lamp : nikon c - hfgi ) and a frame rate of 1 frame / min for 15 unique fields of view over 48 hours . in the present study we use a dataset consisting of 260 cells , corresponding to 9672 generations ( division events ) .* analysis of single cell shape*. the acquired phase - contrast images were analyzed using a novel routine we developed ( written in python ) .each image was processed with a pixel - based edge detection algorithm that applied a local smoothing filter , followed by a bottom - hat operation .the boundary of each cell was identified by thresholding the filtered image .a smoothing b - spline was interpolated through the boundary pixels to construct each cell contour .each identified cell was then tracked over time to build a full time series .we chose to include only cells that divided for more than 10 generations in the analysis .a minimal amount of filtering was applied to each growth curve to remove spurious points ( e.g. , resulting from cells coming together and touching , or cells twisting out of plane ) .the timing of every division was verified by visual inspection of the corresponding phase contrast images , so that the error in this quantity is approximately set by the image acquisition rate of 1 frame / min .10 url # 1`#1`urlprefix[2]#2 [ 2][]#2 & . _ _ * * , ( ) . & . _ _ * * , ( ) . &_ _ * * , ( ) ._ _ ( , , ) . , & ._ _ * * , ( ) . & ._ _ * * , ( ) ._ et al . _ . __ * * , ( ) . , & ._ _ * * , ( ) . , , & ._ _ * * , ( ) . , & ._ _ * * , ( ) . , & ._ _ * * , ( ) ._ et al . _ . __ * * , ( ) . , & ._ _ * * , ( ) ._ et al . _ . __ * * , ( ) . , & ._ _ * * , ( ) . , , , & ._ _ * * , ( ) ._ et al . _ . __ * * , ( ) ._ et al . _ . _ _ * * , ( ) ._ et al . _ . __ * * , ( ) . & ._ _ * * , ( ) . , & ._ _ * * , ( ) ._ _ * * , ( ) . , , & ._ _ * * , ( ) . & ._ _ * * , ( ) . , , & ._ _ * * , ( ) . , , , & ._ _ * * , ( ) . , & ._ _ * * , ( ) . & ._ _ * * , ( ) ._ _ ( , , ) . , & . _ _ * * , ( ) . & . _ _ * * , ( ) . ._ _ * * , ( ) . & ._ _ * * , ( ) . , , , & ._ _ * * , ( ) . , , &. _ _ * * , ( ) . , , & ._ _ * * , ( ) . & _ _ * * , ( ) . & _ _* * , ( ) . , & ._ _ * * , ( ) . , & ._ _ * * , ( ) . , & ._ _ * * , ( ) . , , & ._ _ * * , ( ) ._ _ * * , ( ) . , , , & ._ _ * * , ( ) ._ et al . _ . __ * * , ( ) . , , & ._ _ * * , ( ) ._ et al . _ . _ _ ( ) . , , & . __ * * , ( ) . , & ._ _ * * , ( ) . & ._ _ * * , ( ) . , , , &_ _ * * , ( ) . & ._ _ * * , ( ) ._ et al . _ . __ * * , ( ) ._ et al . _ ._ _ * * , ( ) . , , & ._ _ * * , ( ) . 
& ._ _ * * , ( ) .we gratefully acknowledge funding from nsf physics of living systems ( nsf phy-1305542 ) , nsf materials research science and engineering center ( mrsec ) at the university of chicago ( nsf dmr-1420709 ) , the w. m. keck foundation and the graduate program in biophysical sciences at the university of chicago ( t32 eb009412/eb / nibib nih hhs / united states ) .n.f.s . also thanks the office of naval research ( onr ) for a national security science and engineering faculty fellowship ( nsseff ) .c.s.w . , s.i.b . , a.r.d . , and n.f.s .designed the experiments ; s.b ., a.r.d . , and n.f.s .designed the model ; c.s.w . and s.i.b .performed the experiments and observed the phenomena reported ; c.s.w . designed and implemented custom software to automate cell shape image analysis ; c.s.w . and s.b . analyzed the data ; s.c .contributed reagents and materials ; c.s.w ., a.r.d . , and n.f.s .wrote the manuscript ; all authors discussed the results and commented on the manuscript .the authors declare no competing financial interests . * supplemental material * biblabel[1][s#1 ]a number of quantities can be immediately calculated from each splined cell contour ( supplementary fig .1a ) , including the cross - sectional area .the cell medial axis was determined by calculating the voronoi diagram of the cell contour [ s ] , pruning the branches ( supplementary fig .1b ) , and extending this skeleton to the edges of the cell contour in a manner that preserves average curvature of the medial axis ( supplementary fig .the intersections between the cell medial axis and contour represent the stalked and swarmer poles , respectively .the cell length was calculated by evaluating the distance along the medial axis between either pole .the cell widths were determined by creating ribs perpendicular to the medial axis along its length and determining the distances between their intersections with opposite sides of the cell contour ( supplementary fig .to calculate time - averaged quantities , we normalized trajectories for each generation by the respective division time ( thus converting each variable to a function of ) .we then split these data into 73 bins , as min under these conditions , and ensemble - averaged each of these bins over every generation .we can define five specific values of the width according to the local minima and maxima in the width profile ( , , , , ) , not all of which may be present in any given cell . where we identify a primary minimum in the width profile , , we can also determine two local maxima , and , corresponding respectively to the stalked and swarmer portions ( supplementary fig .these values are approximately constant throughout the cell cycle , with . however , at later times ( ) the value of increases until . in some cases , additional secondary local minimaare observed , and , corresponding respectively to the stalked and swarmer portions ( supplementary fig .although we note the value of these quantities for early times here ( where they are approximately equal to their respective local maxima ) , these minima can only be determined with certainty at later times ( ) .there , we observe the presence of a secondary minimum in the stalked portions of most cells .p5.5em p6.5em p3em l + & mean s.d . 
& & + + & 0.74 0.02 & m & ] + & 0.84 0.03 & m & ] + & 0.82 0.02 & m & ] + & 4.34 2.50 & m & ] + & 54.3 5.2 & % & ] + + we split each cell into stalked and swarmer portions according to the location of the primary minimum in the width profile .the radius of curvature ( ) was calculated as the radius of the best - fit circle to the cell medial axis .we tested two methods of determining the radius of curvature : ( i ) fitting the whole cell to a circle or ( ii ) fitting the stalked and swarmer portions separately . at early times in the cell cycle , ( ii ) gives poorer results because of fewer data points . at later times in the cell cycle , ( i ) gives poorer results because the stalked and swarmer portions can indeed have differing radii of curvature , which we attribute to the alignment of the swarmer portion of the cell with the direction of fluid flow after the division plane has narrowed enough that it becomes mechanically decoupled from the stalked portion , i.e. , like a flexible hinge .however , in the mean the results of ( ii ) are equal to the value calculated by ( i ) for earlier times .therefore , we use only data from method ( i ) but exclude later time points ( supplementary fig .because of the variation in the values of the calculated radius of curvature was large , such that we can not define a reasonable arithmetic mean without arbitrarily filtering the dataset , we first averaged the corresponding unsigned curvature ( equivalent to ) and then converted to radius of curvature ( i.e. , what we report as is actually the harmonic mean , calculated as ) .the length was also split according to the locations of the local minima , into ( distance along cell medial axis from stalked pole to ) , ( distance along cell medial axis from swarmer pole to ) , ( distance along cell medial axis from stalked pole to ) , and ( distance along cell medial axis from swarmer pole to ) .the value of and are compared in supplementary fig .note that the length of the stalked portion grows exponentially , with the same time constant as the length ( as does , although it is not shown here for clarity ) , which is a necessary condition for the addition of peptidoglycan material along the entire length of the cell when the location of is set at early times . at later times ( ) , the length from stalked pole to secondary minimum also starts to increase .we relate the asymmetry in the length of stalked and swarmer portions at early times to asymmetries in the stalked and swarmer poles .the model predicts a linear relationship between the ratio of lengths and the ratio of the tensions at either pole . we can not directly measure the latter quantity , but from laplace s law it is equal to the inverse ratio of the mean curvatures at either pole , which we approximate as the inverse of half the maximum pole width , assuming that the poles are hemispheres with diameter and , respectively .supplementary fig .3 shows scatter plots comparing to for three different time intervals .the red best - fit line is shown only for supplementary fig .3a , for which ( linear fits to all other plots produced values of ) .this line runs from the point , corresponding to the dashed blue curve in fig .3c of the main text ( which is the case of a symmetric cell ) to , corresponding to the red solid curve in fig .3d of the main text ( which is the case of the average asymmetric _ c. crescentus _ cell ) . note that at times , any correlation disappears . 
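The curvature analysis described above can be summarized in a short sketch: an algebraic least-squares circle fit to the medial-axis points gives the per-frame radius of curvature, and the reported average is the harmonic mean (unsigned curvatures are averaged and then inverted). The Kasa-type fit below is an illustrative choice, not necessarily the exact routine of the analysis code; `x` and `y` stand for medial-axis coordinates of one cell at one time point.

```python
import numpy as np

def fit_circle_radius(x, y):
    """Algebraic (Kasa-type) least-squares circle fit to medial-axis points
    (x, y); returns the radius of the best-fit circle, i.e. the radius of
    curvature used to characterize the crescent shape."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)            # circle: x^2 + y^2 + D x + E y + F = 0
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    return np.sqrt(cx**2 + cy**2 - F)

def harmonic_mean_radius(radii):
    """Average the unsigned curvatures 1/R and invert, i.e. report the
    harmonic mean of the per-cell radii of curvature."""
    radii = np.asarray(radii, float)
    return 1.0 / np.mean(1.0 / radii)
```

In use, `fit_circle_radius` would be applied per cell and per frame to the splined medial-axis coordinates, and `harmonic_mean_radius` to the resulting collection of radii, mirroring the averaging convention stated above.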
in order to quantify the error in our width profiles , we imaged a single field of view of 24 _ c. crescentus _ cells perfused in complex medium at 31 at a frame rate of 5 frames per second ( 300 times faster than the frame rate used to acquire all other data ) , and calculated the splined contours for each cell .we focused in particular on a single `` dead '' ( non - growing and non - dividing ) cell , and found the root - mean - square deviation ( rmsd ) of nearest points along the cell contour between subsequent frames to be 12 nm .in addition , we found the rmsd of equivalent points along the width profile between subsequent frames to be 28 nm , or a 3.2% pinch depth at the average value of = 0.85 m .* cell wall mechanics*. the total energy for the bacterial cell wall is given as the sum of contributions from an active internal pressure driving cell volume ( ) expansion , mechanical energy in the cell wall , and the mechanical energy of interactions with cytoskeletal proteins : the bacterial cell wall consists of a network of glycan strands cross - linked by peptide chains know as the peptidoglycan network .growth occurs via the insertion of new peptidoglycan strands into the existing network along with the breaking of existing bonds due to turgor pressure induced stretching .we assume that elastic equilibrium is reached rapidly as compared to the rate of synthesis of new material [ s ] . as a result of cell wall remodeling and irreversible elongation, growth can be understood as resulting from plastic deformations [ s ] .to understand the origin of the cell wall tension , , in the model introduced in the main text , we consider the cell wall as a thin elastic shell that deforms plastically when stretched beyond a maximum strain , the yield strain .a thin shell has two modes of elastic deformations , bending and stretching [ s ] , such that . in the limit of small thickness of the shell as compared to its radii of curvature , one can neglect the bending energy ( that scales as ) whereas the stretching energy is given by , where is the mechanical stress tensor and is the strain tensor . as yield strainis reached at the onset of growth , we have ( assuming isotropic stretching ) , where is the kronecker delta .furthermore , assuming a hookean constitutive relation for the stress tensor [ s ] \;,\ ] ] where is the young s modulus and is the poisson ratio , we have , where , for a gram - negative bacterial cell wall of thickness nm , elastic modulus mpa [ s ] and average yield strain , the wall tension is estimated to be nn m .while the actual value for the turgor pressure counteracting this tension can be contested , our choice for the numerical value for internal pressure can be justified using a simple mechanical argument .radial force - balance dictates that in order to maintain an average cross - sectional radius m , a cell wall with surface tension 50 nm/ m has to balance an internal pressure of magnitude 0.25 mpa , which is numerically very close to our choice for mpa .* cytoskeletal bundle mechanics*. next , we model the mechanical energy in the cell wall due to interactions with cytoskeletal proteins .our minimalist approach considers two crucial protein bundles that are directly responsible for maintaining the shape of _ c. 
crescentusmreb protein bundles contribute an effective energy , which favors a rod - like shape , and crescentin filament bundles contribute an energy , which favors a crescent - like shape .we thus have .mreb subunits form patchy filamentous bundles adherent to the cell wall and oriented perpendicular to the long axis of the cell [ s ] . the elastic energy stored in an adherent mreb subunitis given by where labels the subunit , is the circumferential radius of curvature of the cell wall where the subunit is attached , is the length of the subunit , is the intrinsic radius of curvature and is the bending rigidity of the associated mreb bundle .the total energy imparted by a collection of attached mreb subunits is given by .next , employing a continuum mean field assumption , we replace individual subunit lengths by their average length nm [ s ] and assume a uniform number density of mreb subunits in the cell surface to obtain where , is the effective bending modulus due to mreb induced traction forces .the bending rigidity of an mreb bundle is given by , where is the flexural rigidity of mreb filaments ( assumed to be similar to f - actin ) , is the number of mreb protofilaments per bundle , and is an exponent in the range 12 depending on the strength of crosslinking or bundling agents [ s ] .mreb filaments in a bundle appear to have strong lateral interaction with negligible filament sliding , so we assume .the diameter of an mreb protofilament is nm [ s ] and the average width of an mreb bundle has been determined from super - resolution imaging to be in the range 60 - 90 nm [ s ] , giving the estimate 15 - 22 .estimating the surface number density of mreb subunits as and using nn [ s ] , we obtain in the range 5.612.1 nn m .crescentin proteins form a cohesive bundled structure anchored to the sidewall of _ c. crescentus _ cells .the energetic contribution due to crescentin is given by where is the bending rigidity , is an arc - length parameter , is the longitudinal curvature of the cell wall and is the preferred radius of curvature of crescentin bundles . the bending rigidity of crescentin can be expressed as , where is the young s modulus and is the area moment of inertia of the bundle ( with width ) given by . since crescentin is an intermediate filament homologue , we assume that is similar to the young s modulus of intermediate filament bundles given by mpa [ s ] . assuming m , we estimate the bending rigidity of a crescentin bundle to be nn .* mean field model for cell shape and size dynamics*. as described in the main text , the total mechanical energy in the cell wall of _ c. crescentus _ cells can be given as a sum of contributions from internal pressure , wall surface tension , mechanical energy of interactions with bundles of cytoskeletal proteins such as crescentin and mreb , and the constriction energy during cell division .in the mean field description , we neglect the contributions from and disregard any spatial variations in cell geometry .the mean field description is a good approximation of the cell shape dynamics for , where the average width and the midline radius of curvature remains constant ( fig . 2 in the main text ) .we approximate the shape of _ c. 
crescentus _ cells as the segment of a torus with radius of curvature , cross - sectional width and spanning angle .we also neglect the pole caps that are mechanically rigid and do not remodel during wall growth .the total energy is then given by =-p ( \pi { \bar{w}}^2 r\theta)/4 + \gamma ( \pi r { \bar{w}}\theta ) + e_\text{width } + e_\text{cres}\;,\ ] ] where ^ 2 \;,\ ] ] ^ 2\;.\ ] ] from the above expressions , we see that the total energy has the scaling form =\theta u[{\bar{w}},r] ] . if denotes the outward unit normal vector on the centerline , the curves defining the upper and lower parts of the cell contour , , are given by the relation , , where represent the perpendicular distances of the top and bottom curves from the centerline .the total cell width is then given by .it is convenient to switch to polar coordinates , where the shape of the cell contour is given by the re - parametrized curve , where is the angular coordinate spanning the centerline , which can be approximated as the arc of a circle with radius .since the ratio is small at all times , one can approximate the local curvature as where prime denotes derivative with respect to and the subscripts represent the upper and the lower part of the cell contour respectively .furthermore , in the linear regime , the differential arc length can be approximated as .the dynamics of the shape parameters , , and are determined from the kinetic law in eq .( 1 ) of the main text .the instantaneous width profile results from minimizing the total energy functional , which leads to the following shape equation , where , is the tension on the cell contour and and are the linear force densities on the contour due to maintenance of width and the crescent shape , respectively . in the contour modelwe simplify the energetic contribution due to maintenance of width ( acting on the sections and ) as where the elastic constant depends on the bending rigidity introduced in the main text as .this linear approximation holds if . from our data and mean field modelfits we get .the resultant force density is given by .the bending energy induced by crescentin protein bundles anchored onto the cell wall ( regions ) is given by ^ 2\;,\ ] ] where is the spontaneous curvature of the crescentin bundle , and s is the arc - length parameter along the upper part of the cell contour ( ) which is related to as . to obtain the force density we consider an infinitesimal deformation , , of the upper contour as , where is the outward unit normal .accordingly the curvature and the differential arc - length changes , and , where and [ s ] .the resultant force density is obtained after variations of the energy functional , , where $ ] .this leads to the following non - linear force contribution : using eq ., can be linearized and expressed using the angular coordinate as \;.\ ] ] * compartmentalizing the cell contour*. 
( 1 ) pole caps : the cell poles are assumed to be mechanically inert in the sense that they do not interact with the active cytoskeletal proteins such that .the mechanical forces acting on the cell poles come from turgor pressure , and the tension .the shape of the cell poles are then described by the two - dimensional laplace s law , where the superscripts and denote respectively the stalked and swarmer poles .\(2 ) upper cell contour ( ) : the region in the upper cell contour obeys the force - balance equation : at the endpoints of the segments we impose the boundary condition that the curvature must equal the longitudinal curvature of the poles .\(3 ) bottom cell contour ( ) : in the absence of crescentin , the region in the bottom cell contour obeys the force - balance equation : \(4 ) septal region : we incorporate the effect of constriction in the cell shape equation by imposing the constraint ( boundary condition ) that , where evolves according to the kinetics described in eq .( 6 ) and ( 7 ) of the main text .furthermore , the curvatures at the end points of the septal segments must conform to the curvature of the newly formed poles .subject to these boundary conditions , the width profile in the upper septal region ( ) is given by , with a surface tension ( ) of the newly formed poles chosen to be much higher than the peripheral region , .the contour below the centerline obeys the equation , the governing cell shape equation given in eq . , is then solved numerically in each part of the cell contour with matching boundary conditions in and its derivatives .10 url # 1`#1`urlprefix[2]#2 [ 2][]#2 | we investigate the intergenerational shape dynamics of single _ caulobacter crescentus _ cells using a novel combination of imaging techniques and theoretical modeling . we determine the dynamics of cell pole - to - pole lengths , cross - sectional widths , and medial curvatures from high accuracy measurements of cell contours . moreover , these shape parameters are determined for over 250 cells across approximately 10000 total generations , which affords high statistical precision . our data and model show that constriction is initiated early in the cell cycle and that its dynamics are controlled by the time scale of exponential longitudinal growth . based on our extensive and detailed growth and contour data , we develop a minimal mechanical model that quantitatively accounts for the cell shape dynamics and suggests that the asymmetric location of the division plane reflects the distinct mechanical properties of the stalked and swarmer poles . furthermore , we find that the asymmetry in the division plane location is inherited from the previous generation . we interpret these results in terms of the current molecular understanding of shape , growth , and division of _ c. crescentus_. cell shape both reflects and regulates biological function . the importance of cell shape is exemplified by bacteria , which rely on specific localization of structural proteins for spatiotemporal organization . bacteria take forms resembling spheres , spirals , rods , and crescents . these shapes are defined by cell walls consisting of networks of glycan strands cross - linked by peptide chains to form a thin peptidoglycan meshwork . super - resolution imaging is now revealing the internal positions of associated proteins . these include cytoskeletal proteins such as mreb , a homolog of actin , intermediate filament - like bundles of cres ( crescentin ) , and ftsz , a homolog of tubulin . 
however , due to the inherently stochastic nature of molecular processes , understanding how these proteins act collectively to exert mechanical stresses and modulate the effects of turgor pressure and other environmental factors requires complementary methods such as high - throughput , quantitative optical imaging . multigenerational imaging data for bacterial cells can now be obtained from microfluidic devices of various designs . still , a common limitation of most devices is that the environmental conditions change throughout the course of the experiment , particularly as geometric growth of the population results in crowding of the experimental imaging spaces . we previously addressed this issue by engineering a _ c. crescentus _ strain in which cell adhesion is switched on and off by a small molecule ( and inducible promoter ) , allowing measurements to be made in a simple microfluidic device . this technology allows imaging generations of growth of an identical set of 250500 single cells distributed over fields of view . thus cell density is low and remains constant . these studies afforded sufficient statistical precision to show that single _ c. crescentus _ cells grow exponentially in size and divide upon reaching a critical multiple ( .8 ) of their initial sizes . satisfaction of a series of scaling laws predicted by a simple stochastic model for exponential growth indicates that these dynamics can be characterized by a single time scale . in this paper , we use more advanced image analysis methods to extract cell shape contours from these data . the resulting geometric parameters , together with mathematical models , provide insights into growth and division in _ c. crescentus _ and the plausible role of cell wall mechanics and dynamics in these processes . specifically , we identify natural variables for tracking cell dynamics , and develop a minimal mechanical model that shows how longitudinal growth can arise from an isotropic pressure . we then examine the dynamics of cell constriction and unexpectedly find that it is governed by the same time constant as exponential growth . this important finding can be understood in terms of an intuitive geometric model that relates the constriction dynamics to the kinetics of the growth of septal cell wall . we further suggest that the site of constriction can arise from differences in materials properties of the poles and show that it is established in the previous generation i.e . , the location of the site of division can be predicted before formation of the divisome . we relate our results to the known dynamics of contributing molecular factors and existing models for bacterial growth and division . |
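To make the growth-law statements of this work concrete, the following is a minimal sketch of how exponential growth and the division size ratio could be extracted from single-generation size trajectories. The array names `t` and `size` are placeholders for the per-generation time points and cell sizes (area or pole-to-pole length) obtained from the contours; this is an illustration, not the authors' analysis code.

```python
import numpy as np

def exponential_growth_fit(t, size):
    """Fit s(t) = s0 * exp(kappa * t) by linear regression of log(size) on
    time within one generation; returns (s0, kappa)."""
    t, size = np.asarray(t, float), np.asarray(size, float)
    kappa, log_s0 = np.polyfit(t, np.log(size), 1)
    return np.exp(log_s0), kappa

def division_size_ratio(size):
    """Ratio of the size just before division to the initial size of the
    same generation (the critical multiple discussed above)."""
    size = np.asarray(size, float)
    return size[-1] / size[0]
```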
the last decade has seen the emergence of highly popular online social networks like myspace , orkut and facebook .myspace was founded in 2003 and it gained its peak popularity in 2008 .however , most of the users have started abandoning it since 2008 .cannarella and spechler have modified the sir model of disease spread to explain this phenomenon . the disease spread model has also been used to study the arrival and departure dynamics of the users in social networks . in this paper , we give an alternative explanation of the rise and fall of myspace . to begin with, we note that people join social networks due to the presence of their friends in these networks and leave them due to the inactivity of these friends .it is also known that a large number of the social network friendships are not active or strong friendships between the users , where a _ strong friendship _ between a pair of social network friends is indicated by regular communication between them .in fact , the well - known dunbar s number says that an individual can comfortably maintain a stable relationship with around 150 other people only .it means that the strong friendships are limited in a social network , even though there is no bound on the number of friends one can have in most social networks . in order to identify the strong friendships between people in a social network , onella et .al did an empirical study to establish that greater neighbourhood overlap between a pair of friends corresponds to stronger friendships between them .the _ neighbourhood overlap _ between a pair of users and in a network is defined as the number of nodes who are neighbours of both and divided by the number of nodes who are neighbours of at least one of them . sincestrong or active friendships are important for a user to remain in a social network , we study the change in the number of users who have at least a certain minimum number of strong friendships in an evolving social network created by the barabasi - albert random graph model , where we define a pair of users to be strong friends if their neighbourhood overlap is larger than a certain constant .we also study the change in the size of the largest connected component in the strong friendship subgraph of the evolving barabasi - albert random graph .this is motivated by the observation that the nodes in the core of a social network are more likely to survive than the nodes at the periphery .we consider the largest connected component in the strong friendship subgraph of a social network to be its _ core _ that is important to retain most of its users .our model is based on the well - known barabasi - albert model of social networks .the barabasi - albert model is an algorithm to generate random scale - free networks based on preferential attachment and growth .the preferential attachment is the property that a node with a higher degree is more likely to get connected to new nodes as the network grows . starting with an initial collection of connected nodes, the barabasi - albert algorithm adds one node at a time to the network .each new node is connected to existing nodes as follows .the probability that the new node is connected to node is , where is the degree of node and is the sum of the degrees of all the nodes in the current network .the degree distribution resulting out of barabasi - albert model follows power law .let be the initial graph with vertices , and be the random graph after nodes have been added by the barabasi - albert algorithm . 
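As a sketch of the simulation just described, preferential-attachment growth, neighbourhood overlap, the strong-friendship subgraph, and the size of its largest connected component can be implemented as below. The networkx library is assumed available, and the values of `m`, the overlap threshold `alpha`, and the minimum strong-friend count `k_min` are illustrative choices rather than the parameters used in the experiments.

```python
import random
import networkx as nx

def neighbourhood_overlap(G, u, v):
    """Common neighbours of u and v divided by the union of their
    neighbourhoods (excluding u and v themselves)."""
    nu = set(G.neighbors(u)) - {v}
    nv = set(G.neighbors(v)) - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def strong_friendship_subgraph(G, alpha):
    """Keep an edge only if its neighbourhood overlap exceeds alpha."""
    H = nx.Graph()
    H.add_nodes_from(G)
    H.add_edges_from((u, v) for u, v in G.edges()
                     if neighbourhood_overlap(G, u, v) > alpha)
    return H

def grow_and_track(n_final, m=5, alpha=0.2, k_min=5, step=500):
    """Grow a Barabasi-Albert graph by preferential attachment and record,
    every `step` added nodes, (i) the number of nodes with at least k_min
    strong friends and (ii) the largest connected component of the
    strong-friendship subgraph."""
    G = nx.complete_graph(m + 1)                          # initial seed graph
    stubs = [u for u, d in G.degree() for _ in range(d)]  # degree-weighted pool
    n_strong, lcc = {}, {}
    for new in range(G.number_of_nodes(), n_final):
        targets = set()
        while len(targets) < m:            # m distinct, degree-biased targets
            targets.add(random.choice(stubs))
        for v in targets:
            G.add_edge(new, v)
            stubs += [new, v]
        if (new + 1) % step == 0:
            H = strong_friendship_subgraph(G, alpha)
            n_strong[new + 1] = sum(1 for _, d in H.degree() if d >= k_min)
            lcc[new + 1] = max(len(c) for c in nx.connected_components(H))
    return n_strong, lcc
```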
for a constant , we define two users in to be strong friends if their neighbourhood overlap is more than . for a given , the strong friendship subgraph of is defined as follows .the graph contains the same set of nodes as , and a pair of nodes and are connected by an edge in if and only if they are connected by an edge in and their neighbourhood overlap w.r.t . is more than .we run our experiments with three different values of , the threshold for the strong friendship between a pair of users . in each of the cases , we start with a complete graph with vertices , where is the the number of nodes every new node is connected to . in figures[ fig1 ] , [ fig2 ] and [ fig3 ] , we plot the number of the users having at least ( and , respectively ) strong friends in against the size of the graph for a given and . in each of these plots, we observe that this number increases till a point before starting to decrease . in figures[ fig4 ] , [ fig5 ] and [ fig6 ] , we plot the size of the largest connected component ( lcc ) in against the size of the graph for a given and . in each of these plots , we observe that the size of the largest connected component increases till a point , before it reaches a peak and starts decreasing . also , the decline is sharper for higher values of .our observations indicate that one possible explanation for the fall of myspace is that many of the users started to abandon it after they had a few strong friendships left in the network .moreover , the remaining users found it difficult to survive as the core of the strong friendship subgraph started reducing in size .since this might be the effect of preferential attachment where a popular user is likely to befriend a large number of other users , it would be interesting to see whether the same observation does nt hold for random evolving networks that have a restriction on the number of friends each person can have .it is also an interesting open problem to estimate the size of the network where the plots turn downward , as a function of and , for either of the properties mentioned above .j. p. onnela , j. saramaki , j. hyvonen , g. szabo , d. lazer , k. kaski , j. kertesz , and a. l. barabasi .structure and tie strengths in mobile communication networks .usa , 104 : 7332 - 7336 , 2007 .s. wu , a. das sarma , a. fabrikant , s. lattanzi and a. tomkins .arrival and departure dynamics in social networks .proceedings of the sixth acm international conference on web search and data mining ( wsdm ) , 233 - 242 , 2013 . | the rise and fall of online social networks has recently generated an enormous amount of interest among people , both inside and outside of academia . gillette [ businessweek magazine , 2011 ] did a detailed analysis of myspace , which started losing its popularity since 2008 . recently , cannarella and spechler [ arxiv , 2014 ] used a model of disease spread to explain this rise and fall of myspace . in this paper , we give an alternative explanation for the same . our explanation is based on the well - known barabasi - albert model of generating random scale - free networks using preferential attachment mechanism . |
the intricate formation of the large scale structure of the present - day universe is formed by an interplay between random gaussian fluctuations and gravitational instability . when gravitational instabilities start dominating the dynamical evolution of the matter content of the universe , the structure formation evolves from a linear to a highly nonlinear regime . in this framework , voids are formed in minima and haloes are formed in maxima of the same primordial gaussian field , and later on these features present different type dynamical evolutions due to their initial conditions in the nonlinear regime .this fact has been known since early studies that showed voids are integral features of the universe . show that the distribution of voids can be affected by their environments . as a result, the void size distribution may play a crucial role to understand the dynamical processes affecting the structure formation of the universe .the early statistical models of void probability functions ( vpfs ) are based on the counts in randomly placed cells following the prescription of .apart from vpfs , the number density of voids is another key statistic to obtain the void distribution .recently show that void size distributions obtained from the cosmic void catalog ( cvc ) satisfy a -parameter log - normal probability function .this is particularly interesting , because observations and theoretical models based on numerical simulations of galaxy distributions show that the galaxy mass distribution satisfies a log - normal function rather than a gaussian .taking into account that voids are integral features of the universe , it may be expected to obtain a similar distribution profile for voids . apart from this , discuss a possible quantitative relation between the shape parameters of the void size distribution and the environmental affects . following up on the study of , we here extend their analysis of void size distributions to all simulated and mock samples of cvc of .the three main catalogs under study are dark matter , halo and galaxy catalogs .therefore , we confirm that the system of -parameter log - normal distribution obtained by provides a fairly satisfactory model of the size distribution of voids .in addition to this , we obtain equations which satisfy linear relations between maximum tree depth and the shape parameters of the void size distribution as proposed by .extending the study by , we here fully investigate the void size distribution function statistically in simulations and mocks catalogs of the public cvc of .it is useful to note that all the data of cvc are used here generated from a cold dark matter ( ) n - body simulation by using an adaptive treecode 2hot .in addition , in all data sets voids are identified with the modified version of the parameter - free void finder zobov .the data sets of cvc , we used here , can be categorized into three main groups ; * dark matter ( dm ) simulations are dm full , dm dense and dm sparse .although these dark matter simulations have the same cosmological parameters from the wilkinson microwave anisotropy probe ( wmap ) seven year data release ( dr ) as well as the same snapshot at z=0 , they have different tracer densities of , , and particles per which are respectively dm full , dm dense , and dm sparse .also the minimum effective void radii and mpc / h are obtained from the simulations for dm full , dm dense and dm sparse respectively .* halo catalog in which two halo populations are generated ; haloes dense and haloes sparse . 
in the halo catalog the halo positions are used as tracers to find voids .the minimum resolvable halo mass of haloes dense is while the minimum resolvable halo mass of haloes sparse data set is . in addition , the minimum effective void radii of haloes dense and sparse are mpc / h respectively . the main reason to construct these halo populations with different minimum resolvable halo masses is to compare the voids in halos to voids in relatively dense galaxy environments , see for more details . * galaxy catalogues; there are two galaxy mock catalogues which are produced from the above halo catalog by using the halo occupation distribution ( hod ) code of and the hod model by .these galaxy mock catalogs are called hod dense and hod sparse .the hod dense catalog has voids with effective minimum radii mpc / h and includes relatively high - resolution galaxy samples with density dark matter particles per cubic mpc / h matching the sloan digital sky survey ( sdss ) dr main sample using one set of parameters found by ( , , , ) .the hod sparse mock catalog consists of voids with effective minimum radii ( ) and this void catalog represents a relatively low resolution galaxy sample with density particles per cubic matching the number density and clustering of the sdss dr galaxy sample using the parameters found by ( , , , , and chosen to fit the mean number density ) .in addition to this another mock galaxy catalog is used here ; n - body mock catalog which is a single hod mock galaxy catalog in real space at , generated by a dark matter simulation of particles ( with a particle mass resolution in a gpc / h box and is tuned to sdss dr in full cubic volume by using the hod parameters found in and it consists of voids . although the n - body mock catalog is processed slightly differently than hod sparse and hod dense , it is a hod mock catalog and it uses planck first- year cosmological parameters . in the following section , we examine the above data sets from a statistical perspective such as histograms , parameters of location ( range , mean , median ) , mode or dispersion ( standard deviation ) and shape ( skewness , kurtosis ) by following the previous study of . from a statistical perspective , we also investigate the connection between the distribution and the environment of void populations .as a first step , the raw data plots of void size distributions are obtained for dm full , dm dense , dm sparse , haloes dense and haloes sparse .note that the void size distributions for hod dense , hod sparse and n - body mock data sets are discussed in great detail in . in the raw voidsize distributions , an unexpected local peak is observed around the value mpc / h in the dm full sample and in the dm dense sample around mpc / h , see fig 1 upper left and lower left panels . a similar behavior is observed by in the n - body mock sample around the value mpc / h .[ cols= " < , < " , ]here , extending our previous study of to attempt to find a universal void size distribution , we investigate the statistical properties of the void size distribution such as the shape parameters and their relations to the void environment of cvc by using the moment method by following . 
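For each of these samples, the statistical summary referred to above (parameters of location, dispersion, and shape of the effective void radii) amounts to the following standard computation; `radii` stands for the array of effective radii of one catalog, and scipy is assumed available.

```python
import numpy as np
from scipy import stats

def shape_summary(radii):
    """Location, dispersion, and shape parameters of a void-radius sample."""
    r = np.asarray(radii, float)
    return {
        "n": r.size,
        "range": (r.min(), r.max()),
        "mean": r.mean(),
        "median": np.median(r),
        "std": r.std(ddof=1),
        "skewness": stats.skew(r, bias=False),
        "kurtosis": stats.kurtosis(r, bias=False),   # excess kurtosis
    }
```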
as aforementioned ,the moment method is easy to apply .therefore , we here confirm our previous result on the size distributions of voids which states that the -parameter log - normal distribution gives a satisfactory model of the size distribution of voids , which is obtained from simulation and mock catalogs of cvc ; n - body mock , dm full , dm dense , dm sparse , haloes dense , haloes sparse , hod sparse and hod dense ( see fig 6 , also fig 3 in ) . on the other hand , we should keep in mind that all the data sets of cvc are generated by a single n - body simulation that operates counting scales as nlogn .therefore the nature of these data sets may enforce us to obtain such a unique void size distribution . at this point being criticalis essential before stating that there is a universal void size distribution satisfying the -parameter log - normal . as a result , a thorough investigation of the void size distribution by using other voids catalogs to unveil the truth beyond the relation between the shape of the void size distribution and the void environment has a key importance .especially , taking into account that point out some problems and inconsistencies in cvc such as the identification of some overdense regions as voids in the galaxy data of the sdss dr .processing from the problems of the cvc , provide an alternative public catalogue of voids , obtained from using an improved version of the same watershed transform algorithm .therefore , it is essential to extend our analysis of void size distributions to the catalog given by .again , this is particularly important to confirm whether the -parameter log - normal void size distribution is valid in a different void catalog .if the 3-parameter log - normal distribution fits another simulated / mock void catalogues , then this may indicate that voids have universal ( redshift independent ) size distributions given by the log - normal probability function .apart from this , show that void average density profile can be represented by an empirical function in n - body simulations by using zobov .this function is universal across void size and redshift . following this , investigate the density profiles of voids which are identified by again using the zobov in mock luminous red galaxy catalogues from the jubilee simulation , and in void catalogues constructed from the sdss lrg and main galaxy samples . as a result , show that the scaled density profiles of real voids show a universal behavior over a wide range of galaxy luminosities , number densities and redshifts .processing from these results , there is a possibility that the -parameter log - normal void size distribution may be a universal distribution for voids in simulated as well as real data samples .that is why , it is critical to extend our analysis to other simulated as well as real data sets. we also observe that the number of nonzero and zero void central densities in the samples have important effects on the shape of the -parameter log - normal void size distributions . as is seen in table 1 and fig 1 ,if the percentage of number of nonzero central densities reaches in a simulated or mock sample in cvc , then a second population emerges in the void size distribution .this second population presents itself as a second peak in the log - normal size distribution , at larger radius . 
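A hedged sketch of the moment method is given below: the sample mean, variance, and skewness are matched to the three-parameter log-normal with threshold theta, log-scale mu, and shape sigma. This is generic textbook moment matching (assuming positive sample skewness), not necessarily the exact fitting routine used for the catalog results.

```python
import numpy as np
from scipy import stats

def lognormal3_moment_fit(radii):
    """Moment estimates (theta, mu, sigma) of the 3-parameter log-normal
    X = theta + exp(Y), Y ~ N(mu, sigma^2), matched to the sample mean,
    variance and skewness of the void radii (skewness assumed positive)."""
    r = np.asarray(radii, float)
    mean, var = r.mean(), r.var(ddof=1)
    g = stats.skew(r, bias=False)
    # log-normal skewness: g = (w + 2) sqrt(w - 1) with w = exp(sigma^2),
    # equivalent to w^3 + 3 w^2 - (4 + g^2) = 0; take the real root w > 1
    roots = np.roots([1.0, 3.0, 0.0, -(4.0 + g**2)])
    w = roots[np.abs(roots.imag) < 1e-9].real
    w = w[w > 1.0][0]
    sigma = np.sqrt(np.log(w))
    mu = 0.5 * np.log(var / (w * (w - 1.0)))   # Var = w (w - 1) exp(2 mu)
    theta = mean - np.exp(mu) * np.sqrt(w)     # E[X] = theta + exp(mu) sqrt(w)
    return theta, mu, sigma

def lognormal3_pdf(x, theta, mu, sigma):
    """Density of the fitted 3-parameter log-normal (zero below theta)."""
    return stats.lognorm.pdf(x, s=sigma, loc=theta, scale=np.exp(mu))
```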
also , we here obtain a linear relation between the maximum tree depth and the skewness of the samples , and this relation is given by equation [ eqn : mtd - skew ] ( see fig 3 ) .this linear relation indicates that if there is a void in a simulated / mock sample with a high maximum tree depth , then we expect more skewed log - normal distribution .therefore , there is a direct correlation between the void substructure and the skewness of the void size distribution .the possibility of this relation is mentioned by .therefore , we here confirm that the skewness of a void size distribution is a good indicator of void substructures in a simulated / mock sample .aforementioned , the minimum radius cut of cvc samples defined by two density - based criterion as given by can affect the relation skewness - mtd .the minimum radius cut of cvc is particularly important because it may not only affect the resulting skewness of the data sets but also other shape parameters of the void size distributions , which can violate the confirmation of the log - normal distribution of the samples .that is why , it is important to study raw samples to understand the effect of the minimum radius - cut .in addition to skewness - mtd linear relation , another linear correlation is obtained between the maximum tree depth and the variance of the sparse samples of cvc ( see equation [ eqn : mtd - variance - sparse ] ) . as is seen from fig 4 ,sparse samples with high maximum tree depth tend to be more dispersed than a sparse sample with lower maximum tree depth . on the other hand , although we obtain a linear relation between the maximum tree depth and the variance of the dense samples ( see equation [ eqn : mtd - variance - dense ] ) , due to the lack of dense samples with merging tree depth , this relation does not provide enough information to define a relation between these parameters ( see fig 4 , red dotted line ) .but it is obvious that the relation between maximum tree depth and variance shows two different behaviors for the sparse and the dense samples .this is an expected result since variance is the indicator of dispersion by definition .while sparse samples are highly dispersed with high variance values , the dense samples are expected to show lower variance / dispersion than the sparse samples .these relations indicate that there is a direct correlation between the shape parameters of the void size distribution such as skewness , variance and the void substructures .our next goal is to address the following questions : is it possible to relate the shape parameters of the void size distribution to the environment in real data samples ? do the shape parameters change in time , indicating the dynamical evolution of the void size distribution ?is the -parameter log - normal void size distribution universal ?the authors would like to thank paul sutter and his team for constructing and sharing the cosmic void catalog .all void samples used here can be found in folder void catalog : 2014.06.08 at http://www.cosmicvoids.net .abazajian , k. n. 2009 , , 182 , 543 bernardeau , f. 1992 , , 392 , 1 bernardeau , f. 1994 , , 291 , 697 bouchet , f. r. , strauss , m. a. , & davis , m. et al .1993 , , 417 , 36 chincarini g. , & rood h. j. 1975 , nature , 257 , 294 coles , p. , & jones , b. 1991 , , 248 , 1 conover , w. j. 1980 , practical nonparametric statistics , 2nd ed ., new york , ny . , john wiley & sons conover , w. j. 1980 , practical nonparametric statistics , 3rd ed ., new york , ny . 
, john wiley & sons croton , d. j. , farrar , g. r. , & norberg , p. et al .2005 , , 356 , 1155 dawson , k. s. et al .2013 , , 145 , 10 einasto , j. , joeveer , m. , & saar e. 1980 , nature , 283 , 47 elizalde , e. , & gaztanaga , e. 1992 , , 254 , 247 fry , j. n. 1986 , , 306 , 358 goldberg , d. m. , & vogeley , m. s. 2004 , , 605 , 1 gregory s. a. , & thompson l. a. 1978 , , 222 , 784 hamilton , a. j. s. 1985 , , 292 , l35 hamaus , n. , sutter , p. m. , & wandelt , b. d. 2014 , physical review d , 112 , 25 hoyle , f. , rojas , r. r. , vogeley , m. s. , & brinkmann , j. et al .2005 , , 620 , 618 johnson , n.l , kotz , s , & balakrishnan , n. 1994 , continuous univariate distributions , vol . 1 ( 2nd ed ; wiley ) kayo , i. , taruya , a. , & suto , y. 2001 , , 561 , 22 kendall , m. , & stuart , a. 1977 , the advanced theory of statistics .vol 1 : distribution theory , 2nd ed . , ( new york , ny . , macmillan ) kofman , l. , bertschinger , e. , gelb , j. m. , nusser , a. , & dekel , a. 1994 , , 420 , 44 komatsu , e. et al .2011 , , 192 , 18 lavaux , g. , & wandelt , b. d. 2012 , , 754 , 109 manera , m. , et al .2013 , , 428 , 1036 neyrinck , m. c. 2008 , , 386 , 2101 nadathur , s. , & hotchkiss , s. 2014 , , 440 , 1248 nadathur , s. 2015 , , 449 , 3997 planck collaboration , 2014 , , 571 , a19 russell , e. 2013 , , 436 , 3525 russell , e. 2014 , , 438 , 1630 pycke , j .-, & russell , e. 2016 , , 821 , 110 sheth , r. k. , & van de weygaert , r. 2004 , , 350 , 517 sheskin , d. j. 2011 , handbook of parametric and nonparametric statistical procedures ( boca raton : fl , crc press ) strauss , m. a. 2002 , , 124 , 1810 sutter , p. m. , lavaux , g. , wandelt , b. d. , & weinberg , d. h. 2012 , , 761 , 44 sutter , p. m. , lavaux , g. , hamaus , n. , wandelt , b. d. , weinberg , d. h. , & warren , m. s. , 2014 , , 462 , 471 sutter , p. m. , lavaux , g. , wandelt , b. d. , weinberg , d. h. , warren , m. s. , & pisani , a. , 2014 , , 442 , 3127 taylor , a. n. , & watts , p.i. r. 2000 , , 314 , 92 tinker , j. l. , weinberg , d. h. , & zheng , z. , 2006 , , 368 , 85 van de weygaert r. , & platen e. 2011 , international journal of modern physics conference series , 1 , 41 , s. d. m. 1979 , , 186 , 145 zehavi , i. et al .2011 , , 736 , 59 zheng , z. , coil , a. l. , & zehavi , i. 2007 , , 667 , 760 | following up on previous studies , we here complete a full analysis of the void size distributions of the cosmic void catalog ( cvc ) based on three different simulation and mock catalogs ; dark matter , haloes and galaxies . based on this analysis , we attempt to answer two questions : is a -parameter log - normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments ? is there a direct relation between the shape parameters of the void size distribution and the environmental effects ? in an attempt to answer these questions , we here find that all void size distributions of these data samples satisfy the -parameter log - normal distribution whether the environment is dominated by dark matter , haloes or galaxies . in addition , the shape parameters of the -parameter log - normal void size distribution seem highly affected by environment , particularly existing substructures . therefore , we show two quantitative relations given by linear equations between the skewness and the maximum tree depth , and variance of the void size distribution and the maximum tree depth directly from the simulated data . 
in addition to this , we find that the percentage of the voids with nonzero central density in the data sets has a critical importance . if the number of voids with nonzero central densities reaches in a simulation / mock sample , then a second population is observed in the void size distributions . this second population emerges as a second peak in the log - normal void size distribution at larger radius . |
let be a sample of the variable set where is an indicator variable and is an explanatory variable .conditionally on , follows a bernoulli distribution with parameter .usual examples are response variables to a dose or to an expository time , economic indicators .the variable may be observed at fixed values , on a regular grid or at irregular fixed or random times , for a continuous process . exponential linear models with known link functionsare often used , especially the logistic regression model defined by with a parametric function .the inverse function of is easily estimated using maximum likelihood estimators of the parameters and many authors have studied confidence sets for the parameters and the quantiles of the model . in a nonparametricsetting and for discrete sampling design with several independent observations for each value of , the likelihood is written ^ { 1_{\{x_i = x_j\}}}.\ ] ] the maximum likelihood estimator of is the proportion of individuals with as , regular versions of this estimator are obtained by kernel smoothing or by projections on a regular basis of functions , especially if the variable is continuous .let denote a symmetric positive kernel with integral 1 , a bandwidth and , with as .a local maximum likelihood estimator of is defined as or by higher order polynomial approximations . under regularity conditions of and and ergodicity of the process , the estimator is -uniformly consistent and asymptotically gaussian .when is monotone , the estimators are asymptotically monotone in probability . for large , the inverse function then estimated by if is decreasing or by if is increasing .the estimator is also -uniformly consistent and asymptotically gaussian . for small samples ,a monotone version of using the greatest convex minorant or the smallest concave majorant algorithm may be used before defining a direct inverse .other nonparametric inverse functions have been defined . under bias sampling ,censoring or truncation , the distribution function of conditionally on is not always identifiable .the paper studies several cases and defines new estimators of conditional and marginal distributions , for a continuous bivariate set and for a conditional bernoulli variable .in case - control studies , individuals are not uniformly sampled in the population : for rare events , they are sampled so that the cases of interest ( individuals with ) are sufficiently represented in the sample but the proportion of cases in the sample differs from its proportion in the general population .let be the sampling indicator of individual in the global population and the distribution function of conditionally on is given by let for individual , is observed conditionally on and the conditional distribution function of is defined by the probability is deduced from and by the relation and the bias sampling is the model defined by is over - parameterized and only the function is identifiable .the proportion must therefore be known or estimated from a preliminary study before an estimation of the probability function . 
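A minimal sketch of the kernel estimator of p(x) = P(Y = 1 | X = x) and of its plug-in inverse used for quantiles is given below; the Gaussian kernel, the bandwidth `h`, and the grid-based inversion are illustrative choices standing in for the kernel, bandwidth, and monotonization discussed above.

```python
import numpy as np

def kernel_prob(x0, X, Y, h):
    """Nadaraya-Watson estimate of P(Y = 1 | X = x0) with a Gaussian kernel
    of bandwidth h; X is the covariate sample, Y the 0/1 responses."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    w = np.exp(-0.5 * ((x0 - X) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

def quantile_of_p(t, X, Y, h, grid):
    """Plug-in inverse: smallest grid value x with p_hat(x) >= t, assuming an
    increasing p; for a decreasing p take the largest x with p_hat(x) >= t."""
    p_hat = np.array([kernel_prob(x, X, Y, h) for x in grid])
    idx = np.where(p_hat >= t)[0]
    return grid[idx[0]] if idx.size else np.nan
```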
in the logistic regression model , ] .obviously , the bias sampling modifies the parameters of the model but not this model and the only stable parametric model is the logistic regression .let be the inverse of the proportion of cases in the population , under the bias sampling , is modified by the scale parameter : it becomes .the product may be directly estimated by maximization of the likelihood and in a discrete sampling design with several independent observations for fixed values of the variable , the likelihood is ^ { 1_{\{x_i = x_j \}}}\ ] ] and is estimated by for random observations of the variable , or for fixed observations without replications , is estimated by if is known , nonparametric estimators of are deduced as that is observed under a fixed truncation of : we assume that is observed only if ] .then and the conditional probabilities of sampling , given the status value , are if the ratio is known or otherwise estimated , the previous estimators may be used for the estimation of from the truncated sample with .for a random truncation interval ] and the integrals of are replaced by their expectation with respect to the distribution function of and and the estimation is similar .consider then a two - dimensional variable in a left - truncated transformation model : let denote a response to a continuous expository variable , up to a variable of individual variations independent of , with distribution function .the distribution function of conditionally on is defined by and the function is continuous .the joint and marginal distribution functions of and are denoted , with support , , with bounded support , and , such that and the observation of is supposed left - truncated by a variable independent of , with distribution function : and are observed conditionally on and none of the variables is observed if .denote for any distribution function and , under left - truncation , obviously , the mean of is biased under the truncation and a direct estimation of the conditional distribution function is of interest for the estimation of instead of the apparent mean .the function is also written with and the expressions ( [ ayx])-([byx ] ) of and imply that and .an estimator of is obtained as the product - limit estimator of based on estimators of and : for a sample , let in ] .let us denote , , and .[ cvanbn ] and , if , and converge in distribution to gaussian processes with mean zero , variances and respectively , and the covariances of the limiting processes are zero .the proof relies on an expansion of the form with and , where a similar approximation holds for .the biases and variances are deduced from those of each term and the weak convergences are established as in . from proposition [ cvanbn ] and applying the results of the nonparametric regression , [ cvfnmn ] the estimators , , converge -uniformly to , , , and converge -uniformly to and respectively .the weak convergence of the estimated distribution function of truncated survival data was proved in several papers . as in and by proposition[ cvanbn ] , their proof extends to their weak convergence on under the conditions and on , which are simply satisfied if for every in , and .[ cvfyx ] converges weakly to a centered gaussian process on .the variables , for every in , and converge weakly to and .if is supposed monotone with inverse function , is written and the quantiles of are defined by the inverse functions and of at fixed and , respectively , are defined by the equivalence between where is the inverse of at . 
finally ,if is increasing , is decreasing in and increasing in , and it is the same for its estimator , up to a random set of small probability. the thresholds and are estimated by as a consequence of theorem [ cvfyx ] and generalizing known results on quantiles for , converges -uniformly to on .for every and ( respect . ) , and converge weakly to the centered gaussian process ^{-1} ] .the variable is supposed left - truncated by and right - censored by a variable independent of .the notations and those of the joint and marginal distribution function of , and are in section [ ytruncation ] and is the distribution function of .the observations are , and , conditionally on .let the estimators are now written if is only right - truncated by independent of , with observations and conditionally on , the expressions , and are now written the distribution function and are both identifiable and their expression differs from the previous ones , the estimators are now if is left and right - truncated by variables and independent and independent of , the observations are , and , conditionally on , the distribution functions , and are identifiable , with defined by and .\end{aligned}\ ] ] their estimators are the other nonparametric estimators of the introduction and the results of section [ ytruncation ] generalize to all the estimators of this section .right and left - truncated distribution functions and the truncation distributions are estimated in a closed form by the solutions a self - consistency equation .the estimators still have asymptotically gaussian limits even with dependent truncation distributions , when the martingale theory for point processes does not apply .consider model ( [ model ] ) with an independent censoring variable for . for observations by intervals , only andthe indicators that belongs to the interval -\infty , c] ] are observed .the function is not directly identifiable and efficient estimators for and are maximum likelihood estimators .let and assume that is .conditionally on and , the log - likelihood of is and its derivatives with respect to and are for every s.t . and . with , then belongs to the tangent space for and the estimator of must be determined from the estimator of through the conditional probability function of the observations pinon , c. and pons , o. ( 2006 ) .nonparametric estimator of a quantile function for the probability of event with repeated data , in _ dependence in probability and statistics _ , _ lecture notes in statistics _ , * 187 * , pp . 475489 .springer , new york .pons , o.(2007 ) .estimation for the distribution function of one- and two - dimensional censored variables or sojourn times of markov renewal processes . _ communications in statistics - theory and methods _ ,* 36 * issue 14 , 25272542 . | efficient estimation under bias sampling , censoring or truncation is a difficult question which has been partially answered and the usual estimators are not always consistent . several biased designs are considered for models with variables where is an indicator and an explanatory variable , or for continuous variables . the identifiability of the models are discussed . new nonparametric estimators of the regression functions and conditional quantiles are proposed . |
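As an illustration of the product-limit estimators discussed in this section, the following sketch computes the distribution function from left-truncated, right-censored observations. The (T, Z, delta) notation and the brute-force evaluation over a grid are conventions of this sketch, not the notation or implementation of the text.

```python
import numpy as np

def product_limit_lt_rc(T, Z, delta, y_grid):
    """Product-limit estimate of F(y) = P(Y <= y) from left-truncated,
    right-censored data: Z = min(Y, C), delta = 1 if the event was observed
    (Y <= C), and a subject enters the sample only when T <= Z.  Each
    uncensored value z contributes a factor (1 - d(z)/R(z)) to the survival
    estimate, with R(z) the number of subjects at risk (T_i <= z <= Z_i)."""
    T, Z, delta = (np.asarray(a, float) for a in (T, Z, delta))
    y_grid = np.asarray(y_grid, float)
    events = np.unique(Z[delta == 1])
    F = np.empty_like(y_grid)
    for k, y in enumerate(y_grid):
        s = 1.0
        for z in events[events <= y]:
            at_risk = np.sum((T <= z) & (z <= Z))
            d = np.sum((Z == z) & (delta == 1))
            if at_risk > 0:
                s *= 1.0 - d / at_risk
        F[k] = 1.0 - s
    return F
```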
information flow , or information transfer as it may be referred to in the literature , has long been recognized as the appropriate measure of causality between dynamical events .it possesses the needed asymmetry or directionalism for a cause - effect relation , and , moreover , provides a quantitative characterization of the otherwise statistical test , e.g. , the granger causality test .for this reason , the past decades have seen a surge of interest in this arena of research .measures of information flow proposed thus far include , for example , the time - delayed mutual information , transfer entropy , momentary information transfer , causation entropy , etc . , among which transfer entropy has been proved to be equivalent to granger causality up to a factor 2 for linear systems .recently , liang and kleeman find that the notion of information flow actually can be put on a rigorous footing within a given deterministic system .the basic idea can be best illustrated with a system of two components , say , and .the problem here essentially deals with how the marginal entropies of and , written respectively as and , evolve .take for an example. its evolution could be due to its own and/or caused by .that is to say , can be split exclusively into two parts : if we write the contribution from the former mechanism as and that from the latter as .this is the very time rate of information flowing from to . to find the information flow , it suffices to find , since , for each deterministic system , there is a liouville equation for the density of the state and , accordingly , can be obtained . in ref . , is acquired through an intuitive argument based on an entropy evolutionary law established therein .the same result is later on rigorously proved ; see for a review . for stochastic systems which we will be considering in this study , the trick ceases to work , but in liang manages to circumvent the difficulty and find the result , which we will be briefly reviewing in the following .consider a two - dimensional ( 2d ) stochastic system where is the vector of drift coefficients ( differentiable vector field ) , }} ] , and let the transition probability function ( pdf ) be , where stands for the vector of parameters to be estimated .so the log likelihood is as is usually large , the term can be dropped without causing much error .the transition pdf is , with the euler - bernstein approximation ( see ) , ^{1/2 } } \times e^{-\frac12 ( { { \bf x}}_{n+1 } - { { \bf x}}_n - { { \bf f}}{{\delta t}})^t ( { { \bf { b}}}{{\bf { b}}}^t{{\delta t}})^{-1 } ( { { \bf x}}_{n+1 } - { { \bf x}}_n - { { \bf f}}{{\delta t } } ) } , \end{aligned}\ ] ] where .this results in a log likelihood functional where and is the euler forward differencing approximation of : with .usually should be used to ensure accuracy , but in some cases of deterministic chaos and the sampling is at the highest resolution , one needs to choose . 
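To have concrete series on which to test the estimators, one can integrate a small coupled linear system dX = aX dt + b dW with an Euler-Maruyama step. The sketch below is only a data generator for the examples that follow; the particular drift matrix, noise amplitudes, step size and sample length are arbitrary choices, not values taken from the text.

```python
import numpy as np

def simulate_linear_sde(a, b, dt=0.01, n=50000, seed=1):
    """Euler-Maruyama integration of dX = (a X) dt + b dW for a 2-d state X;
    a is the 2x2 drift matrix, b the 2x2 noise-amplitude matrix."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n, 2))
    x[0] = rng.standard_normal(2)
    sqdt = np.sqrt(dt)
    for i in range(n - 1):
        dw = sqdt * rng.standard_normal(2)
        x[i + 1] = x[i] + (a @ x[i]) * dt + b @ dw
    return x

# x2 drives x1 (a[0, 1] != 0) while x1 does not drive x2 (a[1, 0] == 0)
a = np.array([[-1.0, 0.5],
              [ 0.0, -1.0]])
b = np.diag([0.3, 0.3])
series = simulate_linear_sde(a, b)
```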
maximizing , we find that the maximizer satisfies the following algebraic equation : } } { { \left[\begin{array}{l } \hat f_1 \\\hat a_{11 } \\\hat a_{12 } \end{array}\right ] } } = { { \left[\begin{array}{l } \overline{\dot x_1 } \\ \overline{x_1\dot x_1 } \\\overline{x_2 \dot x_1 } \end{array}\right ] } } , \end{aligned}\ ] ] where the overline signifies sample mean .after some manipulations ( see ) , this yields the mle estimators : where are the sample covariances , and }^2\cr & = & \sum_{n=1}^n { \left [ ( \dot x_{1,n } - \overline{\dot x_{1,n } } ) - \hat a_{11 } ( x_{1,n } - \bar x_1 ) - \hat a_{12 } ( x_{2,n } - \bar x_2 ) \right]}^2 \cr & = & n ( c_{d1,d1 } + \hat a_{11}^2 c_{11 } + \hat a_{12}^2 c_{22 } - 2\hat a_{11 } c_{d1,1 } - 2\hat a_{12 } c_{d1,2 } + 2\hat a_{11 } \hat a_{12 } c_{12 } ) .\end{aligned}\ ] ] on the other hand , the population covariance matrix can be rather accurately estimated by the sample covariance matrix .so ( [ eq : dh1star_lin])-([eq : dh1noise_lin ] ) become as that in ref . with , here and should bear a hat , since they are the corresponding estimators .we abuse the notation a little bit to avoid notational complexity ; from now on they should be understood as their respective estimators .with these the normalizer is and hence we have the relative information flow from to : to the autoregressive process exemplified in the beginning . when , , the computed relative information flow rates are : clearly both are negligible in comparison to the contributions from the other processes in their respective series .this is in agreement with what one would conclude based on the absolute information flow computation and statistical testing . for the case , in which one may encounter difficulty due to the ambiguous small numbers , the computed relative information flow rates are : again they are essentially negligible , just as one would expect . on the other hand , when , , to , the influence from is large , contributing to more than 1/6 of the total entropy change .in contrast , the influence from to is negligible .it should be pointed out that the relative information flow , say , , makes sense only with respect to , since the comparison is within the series itself .here comes the following situation : for a two - way causal system with absolute information flows and of equal importance , their relative importances within their respective series could be quite different .for example , where and are identical independent normals ( (0,1 ) ) .initialize them with random values between [ 0,1 ] and generate 80000 data points on matlab .the computed information flow rates ( in nats per iteration ) which are almost the same .the relative information flows , however , are quite different : in terms of relative contribution in their respective series , the former is way more below the latter . 
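Since the maximum-likelihood estimators reduce to sample covariances, the linear estimator of T_{2→1} and its relative version can be coded directly. The formula for T_{2→1} follows the covariance expression quoted in the text; the expressions used for dH_1*/dt and dH_1^{noise}/dt (the drift coefficient a_{11} and g_{11}/(2 C_{11}), respectively) are my reading of the linear-Gaussian case, because the corresponding displays are garbled in this extract, so those two lines should be treated as assumptions to be checked against the paper. The stepsize dt and the differencing lag k are inputs; k = 1 is the usual choice except for finely sampled chaotic series.

```python
import numpy as np

def liang_flow(x1, x2, dt=1.0, k=1):
    """Estimate the absolute information flow T_{2->1} (nats per unit time)
    and the relative flow tau_{2->1} from two equally spaced series."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n = len(x1) - k
    d1 = (x1[k:] - x1[:-k]) / (k * dt)        # Euler forward difference of x1
    y1, y2 = x1[:n], x2[:n]

    c = np.cov(np.vstack([y1, y2]))
    c11, c12, c22 = c[0, 0], c[0, 1], c[1, 1]
    c1d1 = np.cov(y1, d1)[0, 1]
    c2d1 = np.cov(y2, d1)[0, 1]

    det = c11 * c22 - c12 ** 2                # MLE drift coefficients a11, a12
    a11 = (c22 * c1d1 - c12 * c2d1) / det
    a12 = (c11 * c2d1 - c12 * c1d1) / det

    t21 = c12 * a12 / c11                     # algebraically the same as the text's formula

    cdd = np.var(d1, ddof=1)                  # residual variance -> noise amplitude g11
    g11 = dt * (cdd + a11**2 * c11 + a12**2 * c22
                - 2*a11*c1d1 - 2*a12*c2d1 + 2*a11*a12*c12)

    dh1_star = a11                            # assumed linear-Gaussian expression
    dh1_noise = g11 / (2.0 * c11)             # assumed linear-Gaussian expression
    z = abs(t21) + abs(dh1_star) + abs(dh1_noise)
    return t21, t21 / z

# check on the simulated system above: the flow x2 -> x1 should be clearly
# non-zero, while x1 -> x2 should come out essentially zero
t21, tau21 = liang_flow(series[:, 0], series[:, 1], dt=0.01)
t12, tau12 = liang_flow(series[:, 1], series[:, 0], dt=0.01)
```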
generally speaking , the above imbalance is a rule , not an exception , reflecting the asymmetry of information flow .one may reasonably imagine that , in some extreme situation , a flow might be dominant while its counterpart is negligible within their respective series , although the two are of the same order in absolute value .el nio , also known as el nio - southern oscillation , or enso for short , is a long known and extensively studied climate mode in the tropical pacific ocean due to its relation to the global disasters like the droughts in southeast asia , southern africa , and northern australia , the floods in ecuador , the increasing number of typhoons , the death of birds and dolphins in peru , and the famine and epidemic diseases in far - flung regions of the world . a correct forecast of an el nio ( or its cold counterpart , la nia ) a few months earlier will not only help issue in - advance warnings of potential disastrous impacts , but also make the subsequent seasonal weather forecasting much easier .however , this aperiodic leading mode in the tropical pacific seems to be extraordinarily difficult to predict .a good example is the latest `` super el nio '' or `` monster el nio '' , which has been predicted to arrive in 2014 in a lot of portentous forecasts , turns out to be a computer artifact . for more reliable predictions, it is imperative to clarify the source of its uncertainty or unpredictability . in ref . , we have presented an application of eq .( [ eq : t21_est ] ) to the relation study between el nio and the indian ocean dipole ( iod ) , another major climate mode in the indian ocean , and found that the indian ocean is a source of uncertainty that keeps enso from being accurately predicted . since in that studythere is no relative importance assessment , we do not know whether the information flows , albeit significant , do weigh much in the modal variabilities .we hence redo the computation using the relative information flow formula ( [ eq : tau21 ] ) .we use for this study the same data as that used in , which include the nio4 index series and the sea surface temperature ( sst ) series downloaded from the noaa esrl physical sciences division , and the iod index namely dmi series from the jamstec site . shown in fig .[ fig : indian ] is the relative information flow rate from the indian ocean sst to el nio , . from itone can see that the information flow accounts for more than 10% of the uncertainties of nio4 , the maximum reaching 27% .this number is very large .besides , all the values are positive , indicating that the indian ocean sst functions to make el nio more uncertain .no wonder recently researchers find that assimilation of the indian ocean data helps the prediction of el nio ( e.g. , ) , although traditionally the indian ocean is mostly ignored in el nio modeling . besides the relative importance we have just obtained , fig .[ fig : indian ] also reveals some difference in structure from its counterpart , i.e. 
, the fig .5b of .a conspicuous difference is that now there are clearly two centers , residing on either side of indian .note this structure is different from the traditional dipolar pattern as one would expect ; here both centers are positive .this means that the northern indian ocean sst anomalies , both the positive phase and negative phase , as an integral entity influence the el nio variabilities , and , in particular , make el nio more unpredictable .the dipolar structure implies that most probably this entity is iod , not others like iobm ( indian ocean basin mode ; see ) . ., scaledwidth=60.0% ] to see more about this , we look at the information flow from the index dmi to the tropic pacific sst .the absolute rates are referred to the fig .4a of ; shown in fig .[ fig : pacific ] are .indeed the computed flow rates are significant , and all are positive .the largest , which occupies a large swathe of the equatorial region between through , reaches 10% .moreover , the structure reminds one of the el nio pattern .it is generally the same as that in its counter part , i.e. , fig .4a of , save for two changes : ( 1 ) the maximum center moves westward ; ( 2 ) the small center of a secondary maximum near at the equator disappears .this clear el nio - like structure attests to the above conjecture that iod is indeed a major source of uncertainty for the el nio forecast . ., scaledwidth=100.0% ] we have also computed the relative information flows from el nio to the indian ocean sst , and that from the pacific sst to the iod , using the same datasets .the results are also significant , though only approximately half as shown above .we now look at the causal relations between several financial time series . hereit is not our intention to conduct an financial economics research or study the market dynamics from an econophysical point of view ; our purpose is to demonstrate a brief application of the aforementioned formalism for time series analysis .nonetheless , this topic is indeed of interest to both physicists and economists in the field of macroscopic econophysics ; see , for example , .we pick nine stocks in the united states and download their daily prices from .these stocks are : msft ( microsoft corporation ) , aapl ( apple inc . ) , ibm ( international business machines corporation ) , intc ( intel corporation ) , ge ( general electric company ) , wmt ( wal - mart stores inc . ) , xom ( exxon mobil corporation ) , cvs ( cvs health corporation ) , f ( ford motor corporation ) . among these are high - tech companies ( msft , aapl , ibm , intc ) , retail trade companies[ e.g. , the drugstore chains ( cvs ) and discount stores ( wmt ) ] , automotive industry ( f ) , oil and gas industry ( xom ) , and the multinational conglomerate corporation ge which operates through the segments of energy , technology infrastructure , capital finance , etc . here by `` daily '' we mean on a trading day basis ,excluding , say , holidays and weekends .since stock prices are generally nonstationary , we check the series of daily return , i.e. 
, / p(t),\ ] ] or log - return where are the adjusted closing prices in the yahoo spreadsheet , and is one trading day .following most people we use the series of log - returns for our purpose .in fact , return and log - return series are approximately equivalent , particularly in the high - frequency regime , as indicated by .since the most recent stock msft started on march 13 , 1986 , all the series are chosen from that date through december 26 , 2014 , when we started to examine these series .this amounts to 7260 data points , and hence 7259 points for the log - return series . using eq .( [ eq : t21_est ] ) , we compute the information flows between the nine stocks and form a matrix of flow rates ; see table [ tab : stocks ] . a flow direction is represented with the matrix indices ; more specifically , it is from the row index to the column index .for example , listed at the location ( 2,4 ) is , i.e. , , the flow rate from apple to intel , while ( 4,2 ) stores the rate of the reverse flow , . also listed in the tableare the respective confidence intervals at the 90% level . from table [ tab : stocks ] , most of the information flow rates are significant at the 90% level , as highlighted .their values vary from 4 to 22 ( units : nats / day ; same below in this section ) .the maximum is , and second to it are and , both being .the influence of ibm to exxon is not a surprise , considering the dependence of the oil industry on high - tech equipments .the mutual causality between the retail stores wmt and cvs are also understandable .the information flow from cvs to ge could be through the sales of ge products ; after all , ge makes household appliances . for the rest in the table, they can be summarized from the following two aspects .* companies as sources .+ look at the table row by row .perhaps the most conspicuous feature is that the whole cvs row is significant .next to it is xom , with only three entries insignificant .that is to say , cvs has been found causal to all other stocks , though the causality magnitudes are yet to be assessed ( see below ) .this does make sense . as a chain of convenience stores ,cvs connects most of the general consumers and commodities and hence the corresponding industries . for xom , it is also understandable that why it makes a source of causality .oil or gas is for sure one of the most fundamental component in the american economy .* companies as recipients .+ examining column by column , the most outstanding stock is again cvs , with only one entry insignificant .that is to say , cvs is influenced by all other stocks except xom .following cvs is xom , wmt , and intc .the ibm and msft columns form the third tier .a few words regarding the stock f. as a cause to other stocks ( though causality maybe tiny ) , xom has not been identified to be causal to f. in fact , f has not been found causal to xom , either . this is a little surprising ; the reason(s ) can be found only after a careful analysis of ford , which is beyond the scope here .( in fact , computation does reveal information flows between xom and toyota . )interestingly , .this is easy to understand , as we rely on our motor vehicles to shop at wal - mart , while cvs stores could be just somewhere in the neighborhood !.the rates of absolute information flow between the 9 chosen stocks ( in nats per trading day ) . 
at each entrythe direction is from the row index to the column index of the matrix .also listed are the standard errors at a 90% significance level , and highlighted are the significant flows . [ cols="^,^,^,^,^,^,^,^,^,^",options="header " , ] it should be noted that the causal relations generally change with time .if the series are long enough , we may look at how these information flows may vary from period to period . pick the pair ( ibm , ge ) as an example . for the duration ( march 1986 through present )considered above , , while is not significant .neither nor reaches 1% . since from the yahoo site both ge and ibmcan be dated back to january 2 , 1962 , we can extend the time series a lot up to 13338 data points . shown in fig .[ fig : ibm_ge]a are the series of their historic prices , and in fig .[ fig : ibm_ge]b and [ fig : ibm_ge ] c are the corresponding log - returns . computation of information flows with the whole series ( 13338 points ) results in and , and , nats / day , both being significant at the 90% level .this is very different from what are shown in tables [ tab : stocks ] and [ tab : stocks_norm ] , with the causal structure changed from a weak two - way causality to a stronger and more or less one - way causality .since in the above only the data of the recent 30 years are used , we expect that in the early years this causal structure could be much enhanced .choose the first 7000 points ( from january 1962 through november 1989 ) , the computed relative information flow rates are : where the units for the latter pair are in nats / day , same below in this section .further narrow down the period to 2250 - 3250 ( corresponding to the period 1971 - 1975 ) , then attaining the maximum of , in contrast to the insignificant flow in table [ tab : stocks ] .obviously , during this period , the causality can be viewed as one - way , i.e. , from ibm to ge .and the relative flow makes more than 5% , much larger than those in table [ tab : stocks_norm ] . the above remarkable causal structure for that particular period actually can trace its reason back in the history of ge .there is such a period in 1960 s when `` seven dwarfs '' ( burroughs , sperry rand , control data , honeywell , general electric , rca and ncr ) competed with ibm the giant for computer business , and , particularly , to build mainframes . in 1965, ge had only a 3.7-percent market share of the industry , though it was then dubbed as the `` king of the dwarfs '' , while ibm had 65.3% share .historically ge was once the largest computer user outside the us federal government ; it got into computer manufacturing to avoid dependency on others . and , indeed ,throughout the 60s , the causalities between ge and ibm are not significant . then , why , as time entered 70s , was the information flow from ibm to ge suddenly increased to its highest level ?it turned out that ge sold its computer division to honeywell in 1970 ; in the following years ( starting from 1971 ) , it relied much on the ibm products .this ge computer era , which has almost gone to oblivion , does substantiate the existence of a causation between ge and ibm , and , to be more precise , an essentially one - way causation from ibm to ge . 
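The sub-period analysis just described (for instance, points 2250-3250 of the IBM/GE series) amounts to re-running the estimator on a slice of the log-return series. The helper below assumes a `liang_flow` function like the sketch given earlier and plain arrays of adjusted closing prices; how the prices are obtained is left out, since no download interface is specified here, and the price-array names in the final comment are placeholders.

```python
import numpy as np

def log_returns(prices):
    """Log-return series r(t) = log(p(t + dt) / p(t)) from adjusted closing prices."""
    p = np.asarray(prices, float)
    return np.log(p[1:] / p[:-1])

def subperiod_flows(prices_a, prices_b, start, stop):
    """Information flow rates between two stocks, restricted to trading days
    [start, stop) of the log-return series, in nats per trading day."""
    ra = log_returns(prices_a)[start:stop]
    rb = log_returns(prices_b)[start:stop]
    t_ba, tau_ba = liang_flow(ra, rb)         # flow from stock b to stock a
    t_ab, tau_ab = liang_flow(rb, ra)         # flow from stock a to stock b
    return (t_ba, tau_ba), (t_ab, tau_ab)

# e.g. the 1971-1975 window discussed for IBM and GE:
# subperiod_flows(ibm_prices, ge_prices, 2250, 3250)
```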
in this sense, our formalism is well validated .to assess the importance of a flow of information from a series , say , , to another , say , , it needs to be normalized .the normalization can not follow a way as that in computing a correlation coefficient , since there is no such a theorem like the cauchy - schwarz inequality for it to base .getting down to the fundamentals , we were able to distinguish three types of mechanisms that contribute to the evolution of the marginal entropy of , , similarly there are three such quantities for , as schematized in fig .[ fig : schem_info_flow ] .we hence proposed that the normalization can be fulfilled as follows : obviously , a normalized flow tells its importance relative to other mechanisms within its own series .in other words , the two flows are normalized differently , echoing the property of asymmetry which makes information flow analysis distinctly different from those such as correlation analysis or mutual information analysis .the above normalizer can be accurately obtained in the framework of a dynamical system .when only two equi - distanced series , say , and , are given , with a linear model its constituents can be estimated as follows , in a nutshell , } , \\ & & t_{2\to1 } = \frac { c_{11}c_{12}c_{2,d1 } - c_{12}^2c_{1,d1 } } { c_{11}^2 c_{22 } - c_{11 } c_{12}^2 } , \end{aligned}\ ] ] where is the time stepsize , the sample covariance between and , the sample covariance between and , and ( ; but for chaotic series sampled at high resolution , may be needed ) .it should be noted that a relative information flow is for the comparison purpose within its own series .the two reverse flows between two series can only be compared in terms of absolute value , since they belong to different series . in this sense ,absolute and relative information flows should be examined simultaneously .this is clarified in the schematic diagram in fig .[ fig : schem_info_flow ] , and has been exemplified in the validations with two autoregressive processes .it is quite normal that two identical information flows may differ a lot in relative importance with respect to their own series , as testified in our realistic applications . in some extreme situation, a pair of equal flows may find one dominant but another negligible in their respective entropy balances .partly for demonstration and partly for verification , we have presented two applications .the first is a re - examination of the climate science problem previously studied in ref . . 
considering the fadeout of the recent portentous predictions of a `` super '' or `` monster '' el nio , we have particularly focused on the predictability of el nio .our result reconfirmed that the indian ocean sst is a source of uncertainty to the el nio prediction .we further clarified that the information flow from the indian ocean is mainly through the indian ocean dipole ( iod ) .another realistic problem we have examined regards the causation between a few randomly picked american stocks .it is shown that many flows ( and hence causalities ) , though significant at a 90% level , their respective importances relative to other mechanisms are mostly negligible .the resulting matrices of absolute and relative information flows provide us a pattern of causality mostly understandable using our common sense .for example , ford has a larger influence on wal - mart than on cvs because people rely on motor vehicles to shop at wal - mart , while cvs could be just somewhere within a walking distance .a particularly interesting case is that we have identified a strong one - way causality from ibm to ge during the early stage of these companies .this has revealed to us the story of `` seven dwarfs '' competing ibm the giant for computer market . in an erawhen this story has almost gone to oblivion ( one even can not find it from ge s website ) , and ge may have left us an impression that it never built any computers , let alone a series of mainframes , this finding is indeed remarkable . * acknowledgments .* this study was partially supported by jiangsu provincial government through the `` specially - appointed professor program '' ( jiangsu chair professorship ) to xsl , and by the national science foundation of china ( nsfc ) under grant no .41276032 .e.g. , k. hlav - schindler , m. palu , m. vejmelka , j. bhattacharya , physics reports , 441(1 ) , 1 - 46 ( 2007 ) ; j. pearl , _ causality : models , reasoning , and inference_. mit press , cambridge , ma , 2nd edition ( 2009 ) ; m. lungarella , k. ishiguro , y. kuniyoshi , n. otsu , international journal of bifurcation and chaos , 17(3 ) , 903 - 921 ( 2007 ) . | recently , a rigorous yet concise formula has been derived to evaluate the information flow , and hence the causality in a quantitative sense , between time series . to assess the importance of a resulting causality , it needs to be normalized . the normalization is achieved through distinguishing three types of fundamental mechanisms that govern the marginal entropy change of the flow recipient . a normalized or relative flow measures its importance relative to other mechanisms . in analyzing realistic series , both absolute and relative information flows need to be taken into account , since the normalizers for a pair of reverse flows belong to two different entropy balances ; it is quite normal that two identical flows may differ a lot in relative importance in their respective balances . we have reproduced these results with several autoregressive models . we have also shown applications to a climate change problem and a financial analysis problem . for the former , reconfirmed is the role of the indian ocean dipole as an uncertainty source to the el nio prediction . this might partly account for the unpredictability of certain aspects of el nio that has led to the recent portentous but spurious forecasts of the 2014 `` monster el nio '' . 
for the latter , an unusually strong one - way causality has been identified from ibm ( international business machines corporation ) to ge ( general electric company ) in their early era , revealing an old story , which has almost gone to oblivion , about the `` seven dwarfs '' competing with a giant for the mainframe computer market . |
in many real - life applications such as audio processing or medical image analysis , one encounters the situation when given observations ( most likely noisy ) have been generated by several sources that one wishes to reconstruct separately . in this case, the reconstruction problem can be understood as an inverse problem of unmixing type , where the solution consists of several ( two or more ) components of different nature , which have to be identified and separated . in mathematical terms , an unmixing problem can be stated as the solution of an equation where and but for in the sense that for all and in general , we are interested to acquire the minimal amount of information on so that we can selectively reconstruct with the best accuracy one of the components , but not necessarily also the other components for . in this settings , we further assume that can not be specifically tuned to recover but should be suited to gain universal information to recover by a specifically tuned decoder . a concrete example of this setting is the _ noise folding phenomenon _ arising in compressed sensing , related to noise in the signal that is eventually amplified by the measurement procedure . in this setting , it is reasonable to consider a model problem of the type where is the random gaussian noise with variance on the original signal and is the linear measurement matrix .several recent works ( see , for instance , and the references therein ) illustrate how the measurement process actually causes the noise folding phenomenon . to be more specific , one can show that ( [ model_problem ] ) is equivalent to solving where is composed by i.i.d .gaussian entries with distribution , and the variance is related to the variance of the original signal by .this implies that the variance of the noise on the original signal is amplified by a factor of . under the assumption that satisfies the so - called restricted isometry property , it is known from the work on the dantzig selector in that one can reconstruct from measurements as in ( [ model_problem_eq ] ) such that where denotes the number of nonzero elements of the solution the estimate ( [ recon_rate ] )is considered ( folklore ) nearly - optimal in the sense that no other method can really improve the asymptotic error therefore , the noise folding phenomenon may in practice significantly reduce the potential advantages of compressed sensing in terms of the trade - off between robustness and efficient compression ( given by the factor here ) compared to other more traditional subsampling methods .in the authors present a two - step numerical method which allows not only to recover the large entries of the original signal accurately , but also has enhanced properties in terms of support identification over simple -minimization based algorithms . in particular , because of the lack of separation between noise and reconstructed signal components , the latter ones can easily fail to recover the support when the support is not given a priori . 
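The variance amplification behind noise folding is easy to check numerically: for a Gaussian measurement matrix with i.i.d. N(0, 1/m) entries, each row has squared norm close to N/m, so the folded noise A n has per-component variance close to (N/m) sigma^2. The sketch below only illustrates this scaling; the normalization convention chosen for A is an assumption of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, sigma = 100, 1000, 0.1                 # measurements, ambient dimension, noise std

# Gaussian matrix with i.i.d. N(0, 1/m) entries (columns of unit norm on average)
A = rng.standard_normal((m, N)) / np.sqrt(m)

trials = 2000
folded = np.empty((trials, m))
for i in range(trials):
    noise = sigma * rng.standard_normal(N)    # noise sitting on the signal itself
    folded[i] = A @ noise                     # what reaches the measurements

print("empirical variance of (A n)_i :", folded.var())
print("predicted sigma^2 * N / m     :", sigma**2 * N / m)
```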
however , the computational cost of the second phase of the procedure presented in , being a non - smooth and non - convex optimization problem , is too demanding to be performed on problems with realistic dimensionalities .it was also shown that other methods based on a different penalization of the signal and noise can lead to higher support detection rate .the follow up work , which addresses the noise folding scenario by means of multi - penalty regularization , provides the first numerical evidence of the superior performance of multi - penalty regularization compared to its single parameter counterparts for problem ( [ model_problem ] ) .in particular , the authors consider the functional here may all be considered as regularization parameters of the problem .the parameter ensures the of also with respect to the component . in the infinite dimensional setting the authors presented a numerical approach to the minimization of ( [ mp ] ) for , based on simple iterative thresholding steps , and analyzed its convergence .the results presented in this paper are very much inspired not only by the above - mentioned works in the signal processing and compressed sensing fields , but also by theoretical developments in sparsity - based regularization ( see and references therein ) and multi - penalty regularization ( , just to mention a few ) .while the latter two directions are considered separately in most of the literature , there have also been some efforts to understand regularization and convergence behavior for multiple parameters and functionals , especially for image analysis .however , to the best of our knowledge , the present paper is the first one providing a theoretical analysis of the multi - penalty regularization with a non - smooth sparsity promoting regularization term , and an explicit comparison with the single - parameter counterpart .in section 2 we concisely recall the pertinent features and concepts of multi - penalty and single - penalty regularization .we further show that -regularization can be considered as the limiting case of the multi - penalty one , and thus the theory of -regularization can be applied to multi - penalty setting . in section 3we recall and discuss conditions for exact support recovery in the single - parameter case .the main contributions of the paper are presented in sections 4 and 5 , where we extend and generalize the results from the previous sections to the multi - penalty setting . in section 5we also open the discussion on the set of admissible parameters for the exact support recovery for unmixing problem in single - parameter as well as multi - penalty cases .in particular , we study the sensitivity of the multi - penalty scheme with respect to the parameter choice .the theoretical findings and discussion are illustrated and supported by extensive numerical validation tests presented in section 6 .finally , in section 7 we compare the performance of the multi - penalty regularization and its single - parameter counterpart for compressive sensing problems .we first provide a short reminder and collect some definitions of the standard notation used in this paper .the true solution of the unmixing problem is called -sparse if it has at most non - zero entries , i.e. 
, , where denotes the support of .we propose to solve the unmixing problem with -sparse true solution using multi - penalty tikhonov regularization of the form the solution of which we will denote by .we note that we can , formally , interpret standard -regularization as the limiting case , setting obviously , the pair of minimizers of will always be equal to , where minimizes .let be fixed .we say that a set is a _ set of exact support recovery for the unmixing problem with operator _ , if there exists , such that whenever the given data has the form with . the parameters for which this property holds are called _admissible for . specifically , we will study for the sets and the corresponding class the set is a set of exact support recovery , if there exists some regularization parameter , such that we can apply multi - penalty regularization with parameters and ( or single - parameter -regularization with parameter in case ) to the unmixing problem and obtain a result with the correct support , provided that the -norm of the noise is smaller than and the non - zero coefficients of are larger than .typical examples of real - life signals that can be modeled by signals from the set can be found in asteroseismology , see for instance .we note that this class of signals is very similar to the one studied in .the main difference is that we focus on the case where the noise is bounded only componentwise ( that is , with respect to the -norm ) , whereas deals with noise that has a bounded -norm for some .additionally , we allow the noise also to mix with the signal to be identified in the sense that the supports of and may have a non - empty intersection .in contrast , the signal and the noise are assumed to be strictly separated in . throughout the paperwe will several times refer to the sign function , which we always interpret as being the set valued function if , if , and $ ] if , applied componentwise to the entries of the vector .we use the notation to denote the restriction of the operator to the span of the support of .additionally , we denote by the complement of , and by the restriction of to the span of .we note that the adjoints and are simply the compositions of the adjoint of with the projections onto the spans of and , respectively . as a first result, we show that the solution of the multi - penalty problem simultaneously solves a related single - penalty problem .[ le : single ] the pair solves ( [ eq : multi ] ) if and only if and solves the optimization problem with and we can solve the optimization problem in in two steps , first with respect to and then with respect to . assuming that is fixed , the optimality condition for in reads that is , for fixed , the optimum in with respect to is obtained at inserting this into the tikhonov functional , we obtain the optimization problem using , we can write thus the optimization problem for simplifies to now note that inserting this equality in , we obtain the optimization problem which is the same as . 
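Lemma [le:single] suggests a simple way to compute the minimizer numerically: for fixed u the optimal v is a ridge-regression solution, and for fixed v the remaining problem in u is a standard l1 problem that iterative soft thresholding can handle. The block-coordinate sketch below assumes the functional has the form ||A(u+v) - y||^2 + lam1*||u||_1 + lam2*||v||_2^2 (the displayed formula is garbled in this extract) and is offered as one straightforward scheme, not as the authors' algorithm; step sizes and iteration counts are illustrative.

```python
import numpy as np

def soft(z, thr):
    """Componentwise soft thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)

def multi_penalty(A, y, lam1, lam2, outer=50, inner=100):
    """Block-coordinate minimization of ||A(u+v) - y||^2 + lam1*||u||_1 + lam2*||v||^2:
    the v-step is closed-form ridge regression, the u-step a few ISTA iterations."""
    m, N = A.shape
    u, v = np.zeros(N), np.zeros(N)
    ridge = np.linalg.inv(A.T @ A + lam2 * np.eye(N)) @ A.T   # fine for moderate N
    L = 2.0 * np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the gradient
    for _ in range(outer):
        v = ridge @ (y - A @ u)                               # exact minimizer in v
        r = y - A @ v
        for _ in range(inner):
            u = soft(u - (2.0 / L) * (A.T @ (A @ u - r)), lam1 / L)
    return u, v
```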
as a consequence of lemma [ le : single ], we can apply the theory of -regularization also to the multi - penalty setting we consider here .in particular , this yields , for fixed , estimates of the form provided that satisfies a source condition of the form with for every , and the restriction of the mapping to the span of the support of is injective ( see ) .additionally , it is easy to show that these conditions hold for provided that they hold for and is sufficiently large .the main focus of this paper is the question whether multi - penalty regularization allows for the exact recovery of the support of the true solution and how it compares to single - penalty regularization . because , as we have seen in lemma [ le : single ] , multi - penalty regularization can be rewritten as single - parameter regularization for the regularized operator and right hand side , we will first discuss recovery conditions in the single - parameter setting . in order to find conditions for exact support recovery , we first recall the necessary and sufficient optimality condition for -regularization : [ le : opt ] the vector minimizes if and only if using this result , we obtain a condition that guarantees exact support recovery for the single - penalty method : [ le : support ] we have , if and only if there exists such that this immediately follows from lemma [ le : opt ] by testing the optimality conditions on the vector given by for and else .our main result concerning support recovery for single - parameter regularization is the following : [ pr : cond_single ] assume that is injective and that then the set defined in is a set of exact support recovery for the unmixing problem whenever moreover , every parameter satisfying is admissible on .first we note that the injectivity of implies that the mapping is invertible .thus the condition actually makes sense .moreover , the inequality is necessary and sufficient for the existence of satisfying .now let and assume that satisfies .we denote and define because , it follows from the second inequality in that and therefore thus actually satisfies the equation and thus which is the first condition in lemma [ le : support ] .it remains to show that however , and thus now the first inequality in implies that this term is smaller than .thus satisfies the conditions of lemma [ le : support ] , and thus . in the case where is the identity operator ,the conditions above reduce to the conditions that and .since -regularization in this setting reduces to soft thresholding , these conditions are very natural and are actually both sufficient and necessary : since the noise may componentwise reach the value of , it is necessary to choose a regularization parameter of at least in order to remove it .however , on the support of the signal , the smallest values of the noisy signal value are at least of size .thus they are retained as long as the regularization parameter does not exceed this value . for more complicated operators , the situation is similar , i.e. , a too small regularization parameter is not able to remove all the noise , while a too large one destroys part of the signal as well .the exact bounds for the admissible regularization parameters , however , are much more complicated .we now consider the setting of multi - penalty regularization for the solution of the unmixing problem . applying lemma [ le : single ], we can treat multi - penalty regularization with the same methods as single - penalty regularization . 
to that end, we introduce the regularized operator in particular , we have with the notation of lemma [ le : support ] that and . as a first result, we obtain the following analogon to lemma [ le : support ] : [ le : support_multi ] we have , if and only if there exists such that applying lemma [ le : opt ] to the single - penalty problem , we obtain the conditions now the claim follows from the equalities since the proof of proposition [ pr : cond_single ] only depends on the optimality conditions and the representation of the data as , we immediately obtain a generalization of proposition [ pr : cond_single ] to the multi - penalty setting .[ pr : cond_multi ] assume that is such that then the set is a set of exact support recovery for the unmixing problem in the multi - penalty setting whenever moreover , all the pairs of parameter satisfying and are admissible on . the proof is analogous to the proof of proposition [ pr : cond_single ] .we note that the condition implies the analogous inequality for provided that is sufficiently large .similarly , if satisfies the conditions in proposition [ pr : cond_single ] that guarantee admissibility on , the pair will satisfy the conditions for admissibility in proposition [ pr : cond_multi ] provided that is sufficiently large . the converse , however , need not be true : if the pair is admissible for exact support recovery on with multi - penalty regularization , it need not be true that the single parameter is admissble for the single - penalty setting as well .examples where this actually happens can be found in section [ se : valid ] ( see in particular table [ tb : cond ] ) .as a consequence of propositions [ pr : cond_single ] and [ pr : cond_multi ] , we obtain that the condition is sufficient for to be a set of exact support recovery for the unmixing problem , provided that the ratio is sufficiently large ; the condition for the single - parameter case can be extracted from by setting , in which case reduces to .now define the signal - to - noise ratio of a pair as that is , is the ratio of the smallest significant value of the signal , and the largest value of the noise .denote moreover then the inequality implies that multi - penalty regularization with parameter allows for the recovery of the support of -sparse vectors from data provided the signal - to - noise ratio of the pair satisfies whenever the signal - to - noise ratio is larger than , we can recover the support of the vector with _ some _ regularization parameter .there are , however , upper and lower limits for the admissible parameters , given by inequality . 
in order to visualize them ,we consider instead the ratio defining and we then obtain the condition for exact support recovery .if the ratio is smaller than , then it can happen that some of the noise is not filtered out by the regularization method .on the other hand , if is larger than , then some parts of the signal might actually be lost because of the regularization .we note that the function is piecewise linear and concave , and .thus the region of admissible parameters defined by is a convex and unbounded polyhedron .moreover , we have that .additionally , we note that the behaviour of the function near infinity is determined by the term if this value is small , then the slope of the function for large values of is large , and thus the set of admissible parameter grows fast with increasing signal - to - noise ratio .if , on the other hand , is large , then the set of admissible parameters is relatively small even for large signal - to - noise ratio . thus can be reasonably interpreted as the sensitivity of multi - parameter regularization with respect to parameter choice .the larger is , the more precise the parameter has to be chosen in order to guarantee exact support recovery .the main motive behind the study and application of multi - penalty regularization is the problem that -regularization is often not capable to identify the support of signal correctly ( see and references therein ) . including the additional -regularization term , however, might lead to an improved performance in terms of support recovery , because we can expect that the -term takes care of all the small noise components . in order to verify this observation ,a series of numerical experiments was performed , in which we illustrate for which parameters and gaussian matrices the conditions for support recovery derived in the previous section were satisfied .in addition , we studied whether the inclusion of the -term indeed increases the performance . in a first set of experiments , we have generated a set of 20 gaussian random matrices of different sizes and have tested for each three - dimensional subspace spanned by the basis elements whether the condition is satisfied , first for the single - penalty case , and then for the multi - penalty case with different values of .the results for matrices of dimensions 30 times 60 and 40 times 80 , respectively , are summarized in table [ tb : cond ] and figure [ fi : cond ] .as to be expected from the bad numerical performance of -regularization in terms of support recovery , the inequality fails in a relatively large number of cases , especially when the discrepancy between the dimension of the vectors to be recovered and the number of measurements is quite large .for instance , in the case and , the condition most of the time failed for more than half of the three - dimensional subspaces .in contrast , the corresponding condition for multi - parameter regularization fails in the same situation only for about an eighth of the subspaces if , and in even fewer cases for . for other combinations of dimensionality of the problem and number of measurements ,the situation is similar . 
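The experiment described above can be reproduced in outline. Because the exact inequality being tested is garbled in this extract, the sketch substitutes a standard Fuchs/Tropp-type exact recovery coefficient, max over i not in T of ||(A_T^T A_T)^{-1} A_T^T a_i||_1 < 1, as the condition, and uses B = sqrt(lam2) (A A^T + lam2 I)^{-1/2} A as the regularized operator of the reduction lemma; both choices are assumptions standing in for the paper's condition, and for brevity only a subset of the 3-sparse coordinate supports is scanned.

```python
import numpy as np
from itertools import combinations
from scipy.linalg import sqrtm

def erc(M, T):
    """Fuchs/Tropp-type exact recovery coefficient for the support T of matrix M."""
    MT = M[:, T]
    pinv_T = np.linalg.pinv(MT.T @ MT) @ MT.T
    off = [i for i in range(M.shape[1]) if i not in T]
    return max(np.abs(pinv_T @ M[:, i]).sum() for i in off)

def failure_rates(m, N, lam2, k=3, n_matrices=20, max_supports=200, seed=0):
    """Fraction of k-sparse coordinate supports on which the ERC-type test fails,
    for the plain matrix A (single penalty) and the regularized operator B."""
    rng = np.random.default_rng(seed)
    fail_A = fail_B = total = 0
    for _ in range(n_matrices):
        A = rng.standard_normal((m, N)) / np.sqrt(m)
        reg = np.real(sqrtm(np.linalg.inv(A @ A.T + lam2 * np.eye(m))))
        B = np.sqrt(lam2) * reg @ A
        for T in list(combinations(range(N), k))[:max_supports]:
            T = list(T)
            fail_A += erc(A, T) >= 1.0
            fail_B += erc(B, T) >= 1.0
            total += 1
    return fail_A / total, fail_B / total

# rough analogue of one cell of table [tb:cond]
print(failure_rates(m=30, N=60, lam2=0.1))
```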
introducing the additional -penalty term always allows for the exact support reconstruction on a larger number of subspaces than single - penalty regularization . additionally , the results indicate that the number of recoverable subspaces increases with decreasing . [ tb : cond ] percentage of 3-sparse subspaces for which the condition failed . the condition was tested on samples of 20 gaussian random matrices of dimensions 30 times 60 ( upper table ) and 40 times 80 ( lower table ) . other combinations of dimensionality and number of measurements showed qualitatively similar results . the figure reports the results of two different decoding procedures of the same problem , where the circles represent the noisy signal and the crosses represent the original signal . _ upper figure : _ results with single - penalty regularization . _ lower figure : _ results with multi - penalty regularization . note that multi - penalty regularization allows for a better reconstruction of the support of the true signal . | inspired by several real - life applications in audio processing and medical image analysis , where the quantity of interest is generated by several sources to be accurately modeled and separated , as well as by recent advances in regularization theory and optimization , we study the conditions on optimal support recovery in inverse problems of unmixing type by means of multi - penalty regularization . we consider and analyze a regularization functional composed of a data - fidelity term , where signal and noise are additively mixed , a non - smooth , convex , sparsity promoting term , and a quadratic penalty term to model the noise . we prove not only that the well - established theory for sparse recovery in the single parameter case can be translated to the multi - penalty setting , but we also demonstrate the enhanced properties of multi - penalty regularization in terms of support identification compared to sole -minimization . we additionally confirm and support the theoretical results by extensive numerical simulations , which give statistics of the robustness of the multi - penalty regularization scheme with respect to the single - parameter counterpart . eventually , we confirm a significant improvement in performance compared to standard -regularization for compressive sensing problems considered in our experiments .
the fermi spacecraft supports two gamma - ray instruments ; the large area telescope ( lat ) and the gamma - ray burst monitor ( gbm ) .the lat is a wide - field gamma - ray telescope ( 20 mev - 300 gev ) that continuously scans the sky , providing all - sky coverage every two orbits .the gbm is an all - sky monitor ( 10 kev - 25 mev ) that detects transient events such as occultations and gamma - ray bursts ( grb ) .gbm detections of strong grbs can result in an autonomous re - point of the observatory to allow the lat to obtain afterglow observations .the satellite sends data to the ground every 3 hours .data is transferred via relay satellites at 40 mb / s to the white sands ground station .it then follows a leased line to the mission operations center ( moc ) at goddard space flight center where data is split into two parts and sent for science processing to both the gbm and the lat teams , the latter being located at stanford national accelerator laboaratory ( slac ) .the processing of downlinked satellite data is a time - critical operation .it is , therefore , necessary to automatically trigger the pipeline for each newly arrived block of data , and to exploit parallel processing in a batch farm to achieve the required latency for the production of the various data products .this processing is complex and is abstracted in a process graph .an xml representation of this process graph is interpreted by the pipeline to become a _ task_. once defined , a task is exercised by the creation of _ streams _ , each of which is one instance of the process graph and which consists of an arbitrary number of interconnected batch jobs ( or scripts ) known as _ process instances _ . to that end, the software is designed to be able to handle and monitor thousands of streams being processed at separate sites with a daily average throughput of about 1/2 cpu - year of processing . to date peak usage has been 45,000 streams in a single day and 167 cpu - years of processing in a single month .the pipeline was designed with the l1 data processing task ( `` level 1 processing '' , the core data processing of raw data that comes from the satellite ) , automatic science processing ( asp ) and monte carlo simulations as principle task types in mind and to ensure the tight connection to the fermi data catalog .it is literally impossible to depict the whole l1 task scheme in one single figure as it contains many dozens of sub - stream creations , dependencies and automatic re - run mechanisms .thus we refrain from including them in this paper . for detailsthe reader is referred to .instead we show the simple layout of a monte carlo task ( as our efforts initially are geared towards porting them to grid sites ) in fig .[ [ fig : mctask ] ] that does not rely on external dependencies such as local databases .this task simply consists of 3 steps , the generation of monte carlo data on a computing node , the transfer of the mc products to slac and their registration in the data catalog .the first 2 steps are batch operations where the registration step takes only split seconds and is achieved through running a dedicated jython scriptlet .we use a three tier architecture as shown in fig . 
[ [ fig : keytech ] ] comprising of back - end components , middle - ware and front - end user interfaces .we describe them more in detail below .core of the back - end components is the oracle database that stores all processing states .in addition we make extensive use of oracle technologies such as the scheduler that is used to run periodic jobs for system monitoring including resource monitoring of the oracle server itself .most quantities are made available to the user through trending plots that allow quick judgment about the state of the system along with its resource usage .another back - end component is the pipeline job control service . using remote method invocation ( rmi )they publish the uniform interface and communicate with the pipeline server .we discuss some more details on this component in the next section .to provide asynchronous persistent messaging from batch jobs to the pipeline server we use email messaging . at the beginning of a job an emailis sent detailing the host name and other worker node specific information .another email is sent to indicate that a job has finished .this email may also contain additional commands that invoke new pipeline commands , such as creation of sub - streams as the next step . in order to avoid overloading the slac email server , by relaying tens of thousands emails per day, we use a dedicated email server running the free apache james software .the pipeline server is the core of the system and contains two pools for threads , a worker and an admin pool . when a batch process is ready to run on one of the farms , a thread is allocated on the worker pool to perform the submission using the appropriate job control service .extensions to the pipeline ( called _ plugins _ ) can be used to add additional functionality , for example to provide access to experiment specific databases or to communicate with other middleware services without compromising the experiment independent design of the core pipeline software .plugins are written in java and loaded dynamically when the pipeline starts .the user can also provide jython scriptlets to run within the pipeline threads to perform simple calculations and to communicate with plugins .the fermi data catalog is implemented as a plugin .the pipeline server api allows queries for processes , stream management as well as means to get or set environment variables .the admin thread pool is used to identify work to be delegated to the worker pool .this includes gathering processes which are ready to run as well as various database queries .we provide a subset of the pipeline api as java management extension ( jmx ) , that provides a call interface to various user - interface applications .these come both as web interface and line command applications .the web interface provides password protected world - wide access to the pipeline and its control interfaces and allow simultaneous monitoring of tens of thousands of jobs in various states . for detailed technical information onthe pipeline components the reader is referred to . each job control service ( refer to fig . [[ fig : jcd ] ] ) implements job control and status methods that are specific to the batch or grid system . to thatend each job control service needs to provide the following commands : _submitjob , getjobstatus , killjob_. 
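The contract each job control service has to fulfil is small. The actual services are written in Java; the following is a hypothetical Python rendering of the same three operations, with illustrative class and method names rather than the real pipeline API, meant only to show how thin a new batch or grid back-end can be.

```python
from abc import ABC, abstractmethod

class JobControlService(ABC):
    """Minimal contract a site-specific job control daemon has to fulfil."""

    @abstractmethod
    def submit_job(self, executable, arguments, environment):
        """Submit one process instance and return the back-end specific job id."""

    @abstractmethod
    def get_job_status(self, job_id):
        """Translate the back-end status into the pipeline's notion of job state."""

    @abstractmethod
    def kill_job(self, job_id):
        """Terminate a queued or running job."""

class SGEJobControlService(JobControlService):
    """Example back-end: would wrap qsub / qstat / qdel on an SGE farm."""

    def submit_job(self, executable, arguments, environment):
        raise NotImplementedError("wrap 'qsub' here")

    def get_job_status(self, job_id):
        raise NotImplementedError("wrap 'qstat' here")

    def kill_job(self, job_id):
        raise NotImplementedError("wrap 'qdel' here")
```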
the code for the job control service is written in java and runs as a daemon on a dedicated service machine at each computing site .to date the pipeline supports lsf , bqs and more recently also condor as well as sun grid engine ( sge ) . atpresent uses lsf at slac and sge at the lyon computing center .the code can easily be adapted to any other desired batch system that follows the same job logic as the currently supported ones . to that end a java class _ batchjobcontrolservice _ and _ batchstatus _ need to be implemented for the desired new batch system , where _ batch _ denotes the system to be implemented . for the use with dirac ( distributed infrastructure with remote agent control ) the job control code wraps python scripts that provide the bridge in both format and language by using the dedicated dirac api .we describe further aspects of the dirac system and motivate our choice to use it in section [ dirac ] . in the pasta task was defined on top level to be handled by a specific job control service .recently the pipeline has been enabled to support multi - site tasks that allow the pipeline server to delegate the running of commands to our dirac job control service while we leave the transfer step to the lyon - based sge job control service .as of this writing the lat has been granted resources both at slac ( dynamic allocation scheme ) and lyon ( guaranteed 1200 cores allocation ) . at lyonall resources are used for monte carlo productions while at slac the total allocation is shared between l1 , monte carlo production and individual user jobs from the collaboration . as a recent challenge we have started reprocessing all of our data that was taken from the beginning of the mission up until now and are reprocessing it with our current state of knowledge about the experiment .the reprocessing requires a significant amount of our computing resources .as allocations are dynamic , users running science analysis may be directly impacted through less available slots on the batch farm .another recent challenge was a massive monte carlo production of proton runs that occupied our resources both at slac and lyon for several months .while not being particularly storage intensive , we do require significant amounts of cpu time .since this production run was setup with the previous interation of the instrument response , dubbed `` pass 6 '' , it is likely that simulation requests of this kind may be repeated as our knowledge of the experiment grows .it is thus important to investigate possibilities to extend our resources that can be utilized within the current pipeline framework .although we perform standard computing tasks at our two sites on local batch farms , there exists the virtual organization ( vo ) glast.org .this organization was founded in 2009 to provide access to glite resources granted by participating institutions in italy and france . at presentthe vo includes 13 sites that are partially enabled for use by .its use has however been limited to non - pipeline operations .most notably , our existing grid resources have been used for stand - alone pulsar blind searches and some large monte carlo simulations .these stand - alone tasks were unable to take advantage of the pipline s integration with the batch system and data catalog which made them more man - power intensive than their pipeline counterparts . 
in order to optimize the resource usage provided by the egi sites supporting the glast.org vo, we are exploring the use of the dirac system as potential connection between the pipeline and the grid resources .the dirac system , originally developed to support production activities of the lhcb experiment , is today a general solution to manage the distributed computing activities of several communities .one of its main components is the workload management system ( wms ) .the key feature of the dirac wms is the implementation of the ` pilot job ' mechanism , which is widely exploited by all lhc communities as a way to guarantee high success rate for user jobs ( workloads ) .more in detail , workloads are submitted to the dirac wms and inserted in the dirac _ central task queue_. the presence of workloads in the dirac _ central task queue _ triggers pilot jobs submission .pilot jobs are regular grid jobs that are submitted through the standard glite wms .once the pilot job gets on the worker node , it performs some environment checks and only in case these tests succeed , the workload is pulled from the dirac _ central task queue _ and executed . in case the environment checks fail or the pilot job gets aborted ( for example because of a mis - configured site ) , only the pilot job is affected .the net result of this mechanism is a significant improvement on the workload success rate . also , since the resources are pre - reserved by pilot jobs , the waiting time to get the workload execution started is reduced .while dirac has specifically been designed to tackle intrinsic grid inefficiencies through its pilot job concept , it can not solve some of the initial issues when being connected to the pipeline system .in particular : * the grid uses personalized certificates . when submitting jobs through the pipeline web interface , we submit them with a generic user i d , which is most feasible as the number of authorized pipeline users is small .* some of our sites , in particular those from infn , can not directly send emails to the pipeline server , thus rendering the standard communication of job status and task logic de - facto unusable . * in general large data on the gridwould be stored on grid storage elements ( grid se ) and require user intervention to download it once finished .this makes the automatic copy to our central slac xrootd space difficult to be used , in particular using the tight integration with the data catalog .we decided to implement an interface to the dirac system for several reasons : * independence of grid middleware : currently the vo comprises only glite sites but dirac is able to communicate with all common grid middleware platforms , thus making it less difficult to connect to other grid initiatives , such as the open science grid in the us . *additional monitoring functionality : dirac introduces a more detailed job monitor that in addition reports minor and major application statuses together with the overall job status , thus allowing easier debugging of tasks without the definite need to inspect log files manually .this task has always been among the most time consuming .we hope to achieve an improvement by using the new monitor , albeit as read - only system .the reason for that is that the pipeline i d and the grid i d are generally not the same and a re - submitted job on the pipeline keeps its i d while on the grid it is relaunched and assigned a new unique grid i d . 
* while the mission is entering its 4th year , the number of developers for the software has begun to dwindle . thus it is important to keep the required manpower as low as possible while maintaining full performance and possibly improving the capabilities of the pipeline . by using dirac we can build our interface on a system in active development with many possibilities to influence the development process to meet our needs . implementing the dirac interface comes in two steps : first , the job control daemon described in the previous section , which wraps the basic pipeline commands through the dirac api , and second , the configuration of a dedicated dirac server . the implementation is shown in fig . [ fig : p2dirac ] . we make use of two of dirac s core technologies : the pilot factory mechanism is used to renew proxies and authenticate the job control daemon to the grid , allowing us to use one certificate and retain our previous user scheme . the dirac notification service provides a means for the dirac server to communicate with grid worker nodes . this service is robust and can be modified to relay the content of our status emails to the dirac server , which itself implements the email communication with the pipeline server . one typical by - product of running our code is a collection of log files . usually we have several hundred small log files that each do not exceed a few mb . in the past other experiments had to artificially enlarge their log files to ensure the stability of the grid ses due to journaling of small files . we can effectively circumvent this by declaring a local storage element at the computing center in lyon as a dedicated dirac se . these storage elements are addressed like normal grid elements but they do not need journaling to function . since this is a local se , we can view its content e.g. through a web server providing access to log files that can directly be linked from the pipeline server . in conclusion we believe that this solution provides an easy to implement and maintain interface to the pipeline system used for -lat . the existing vo resources of glast.org suggest the possibility of establishing a new connection to grid services with our existing pipeline architecture . we mitigate issues such as the transfer and subsequent registration of data products at the fermi data catalog by using existing pipeline technologies . grid inefficiencies are handled by the dirac system , which acts as a broker providing asynchronous communication with grid worker nodes and a closed mechanism to automatically renew proxies for grid operations . we leave the handling of metadata to the existing pipeline technologies . as such , the pipeline software itself was designed so as not to contain any fermi - specific functionality .
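to make the pilot - job ` pull ' model described earlier in this section concrete , the following toy sketch reproduces its logic in a few lines of python . nothing in it is a real dirac or glite call ; all names are illustrative placeholders .

```python
# Toy, self-contained illustration of the pilot-job "pull" model: workloads
# wait in a central task queue, pilots are ordinary grid jobs, and a workload
# only starts once a pilot has verified its environment on the worker node.
import random

central_task_queue = []                 # workloads waiting in the (toy) WMS

def submit_workload(name):
    """User workloads only ever enter the central task queue."""
    central_task_queue.append(name)

def pilot_job(site_ok):
    """A pilot pulls work only if its environment checks pass; if the site is
    mis-configured, only the pilot itself is lost, not any user workload."""
    if not site_ok:
        return []
    executed = []
    while central_task_queue:           # resources are already reserved, so the
        executed.append(central_task_queue.pop(0))  # workload starts immediately
    return executed

for i in range(5):
    submit_workload("workload-%d" % i)

random.seed(0)
# pilots land on sites that are healthy only with some probability
results = [pilot_job(site_ok=random.random() > 0.3) for _ in range(3)]
print("executed per pilot:", results)
print("still queued      :", central_task_queue)
```

even in this toy the point of the pattern is visible : a failed pilot costs nothing but the pilot , while a healthy one drains the queue without any further grid - level scheduling delay .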
through the plug - in feature of the middleware it is successfully used for data handling for the enriched xenon observatory or mc simulations for the cryogenic dark matter search experiment ( cdms ) , the upcoming cherenkov telescope array ( cta ) and the large synoptic survey telescope ( lsst ) , and is also being considered for use by other nasa missions . we acknowledge the ongoing generous support of slac ( us ) and in2p3 ( france ) and we thank them for increasing allocations at both sites . furthermore we are thankful for grid resources provided by infn and in2p3 . sz would like to thank the organizers for a stimulating meeting and particularly helpful discussions with stéphane guillaume poss . the -lat collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the lat as well as scientific data analysis . these include the national aeronautics and space administration and the department of energy in the united states , the commissariat à l'énergie atomique and the centre national de la recherche scientifique / institut national de physique nucléaire et de physique des particules in france , the agenzia spaziale italiana and the istituto nazionale di fisica nucleare in italy , the ministry of education , culture , sports , science and technology ( mext ) , high energy accelerator research organization ( kek ) and japan aerospace exploration agency ( jaxa ) in japan , and the k. a. wallenberg foundation , the swedish research council and the swedish national space board in sweden . additional support for science analysis during the operations phase is gratefully acknowledged from the istituto nazionale di astrofisica in italy and the centre national d'études spatiales in france . atwood et al . , _ apj _ * 697 * ( 2009 ) http://cdms.berkeley.edu/ http://www.cta-observatory.org/ dubois , _ asp conf . ser . _ * 411 * , eds . d bohlender , d durand , & p dowler ( 2009 ) http://www.egi.eu/ http://www-project.slac.stanford.edu/exo/ flath et al . , _ asp conf . ser . _ * 411 * , eds . d bohlender , d durand , & p dowler , slac - pub-13549 ( 2009 ) see http://gammaray.msfc.nasa.gov/gbm/ for details on the gbm http://glite.cern.ch/ focke , `` implementation and performance of the fermi lat level 1 pipeline '' , ii . fermi symposium ( 2009 ) http://www.platform.com/workload-management/high-performance-computing http://james.apache.org http://www.lsst.org/lsst/ http://www.jython.org/ https://www.opensciencegrid.org http://java.sun.com/javase/technologies/core/basic/rmi/index.jsp http://www.oracle.com/us/sun/index.htm tsaregorodtsev et al . , _ j. phys . _ * 119 * ( 2008 ) | the data handling pipeline ( `` pipeline '' ) has been developed for the fermi gamma - ray space telescope large area telescope ( lat ) which launched in june 2008 . since then it has been in use to completely automate the production of data quality monitoring quantities , reconstruction and routine analysis of all data received from the satellite and to deliver science products to the collaboration and the fermi science support center . aside from the reconstruction of raw data from the satellite ( _ level 1 _ ) , data reprocessing and various event - level analyses are also reasonably heavy loads on the pipeline and computing resources . these other loads , unlike level 1 , can run continuously for weeks or months at a time . in addition it receives heavy use in performing production monte carlo tasks .
in daily use it receives a new data download every 3 hours and launches about 2000 jobs to process each download , typically completing the processing of the data before the next download arrives . the need for manual intervention has been reduced to less than 0.01% of submitted jobs . the pipeline software is written almost entirely in java and comprises several modules . the software includes web services that allow online monitoring and provide charts summarizing work flow aspects and performance information . the server supports communication with several batch systems such as lsf and bqs and recently also sun grid engine and condor . this is accomplished through dedicated job control services that are running at slac and at the other computing site involved in this large scale framework , the lyon computing center of in2p3 . we also evaluate a separate interface to the dirac system , which differs in the task logic , in order to communicate with egi sites and utilize grid resources , relying on dedicated grid - optimized systems rather than developing our own . more recently the pipeline and its associated data catalog have been generalized for use by other experiments , and are currently being used by the enriched xenon observatory ( exo ) and cryogenic dark matter search ( cdms ) experiments , as well as for monte carlo simulations for the future cherenkov telescope array ( cta ) . |
we are grateful to goffredo chirco , tommaso de lorenzo , alejandro perez , and carlo rovelli , for interesting discussions and useful comments . | in quantum statistical mechanics , equilibrium states have been shown to be the typical states for a system that is entangled with its environment , suggesting a possible identification between thermodynamic and von neumann entropies . in this paper , we investigate how the relaxation toward equilibrium is made possible through interactions that do not lead to significant exchange of energy , and argue for the validity of the second law of thermodynamics at the microscopic scale . the vast majority of phenomena we witness in everyday life are irreversible . from the erosion of cliffs under the repeated onslaughts of the ocean to the erasure of our memories , we experience the unyielding flow of time . thermodynamics , originally the science of heat engines , allows us to predict simple macroscopic processes such as the melting of an ice cube forgotten on the kitchen table , by means of two laws : the conservation of energy and the growth of entropy . among the wealth of things we are given to perceive , there are also a few phenomena showing a high degree of regularity . through the trajectories of planets and stars in the sky , through the swinging of a pendulum clock , we encounter the immutability of time . these led to the development of mechanics and time - reversible dynamical laws . this description of nature extends below the atomic scale provided that quantum variables , represented by non - commuting operators , are introduced . the revolution initiated in the second half of the 19th century by the founders of statistical mechanics was to understand that thermodynamics emerges from the microscopic structure of matter ; this idea had actually been suggested by d. bernoulli in _ hydrodynamica _ ( 1738 ) . + recently , a new approach to the foundations of statistical mechanics , sometimes called typicality , has been proposed . rather than the usual time or ensemble averages , this alternative viewpoint finds its roots in the genuinely quantum notion of entanglement . for a composite system in a pure state , each component , when taken apart , appears in a probabilistic mixture of quantum states , that is , even individual states can exhibit statistical properties . for instance , in the case of a system in contact with a heat bath , it has been shown that its density matrix is generically very close to a thermal state . this result has been extended to more complicated couplings between the system and its surroundings ( see for details and proofs ) . concretely , one considers a system and its environment , subjected to a constraint ( e.g. fixing the total energy ) . in terms of hilbert spaces , that means . if the number of degrees of freedom composing the system is small compared to that of the environment , then for most ( in the sense of the haar measure on the unit sphere ) , one gets in other words , in the framework of typicality , the actual state of the system at some instant of time is very likely to resemble the equilibrium state . for that reason , it is legitimate to use the latter to estimate standard statistical quantities like expectation values of observables or von neumann entropy . however , from this perspective , the formulation of a second law of thermodynamics seems at first sight problematic .
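the typicality statement above is easy to probe numerically : draw a haar - random pure state of a small system coupled to a much larger environment and compare its reduced density matrix to the equilibrium state . the sketch below , in python / numpy , takes the constraint to be trivial , so that the equilibrium state is simply the maximally mixed one ; the dimensions are chosen arbitrarily .

```python
# Draw Haar-random pure states of system+environment and check that the
# reduced state of the small system is close to the equilibrium state
# (here, with a trivial constraint, the maximally mixed state).
import numpy as np

rng = np.random.default_rng(0)
d_sys, d_env = 2, 200                    # small system, much larger environment

def random_pure_state(dim):
    """Haar-random unit vector: complex Gaussian components, normalised."""
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return psi / np.linalg.norm(psi)

def reduced_density_matrix(psi):
    """Trace out the environment of a pure state on C^d_sys x C^d_env."""
    m = psi.reshape(d_sys, d_env)
    return m @ m.conj().T                # rho_sys = Tr_env |psi><psi|

omega = np.eye(d_sys) / d_sys            # equilibrium state for this constraint

distances = []
for _ in range(100):
    rho = reduced_density_matrix(random_pure_state(d_sys * d_env))
    eigs = np.linalg.eigvalsh(rho - omega)
    distances.append(0.5 * np.abs(eigs).sum())   # trace distance to omega

print("mean trace distance to the equilibrium state:", np.mean(distances))
# the distance shrinks as the environment dimension grows
```

this makes the identification quantitative for a generic state ; the apparent conflict with the microscopic dynamics , however , remains and is addressed next .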
indeed , it is usually taken for granted that fine - grained entropy has to be conserved under the microscopic dynamics imposed by schrdinger s equation , and that thermalization may only be read from a growth in a coarse - grained entropy , such as in boltzmann s -theorem . in this work , we propose a very simple and practical resolution of this issue , based on the possibility of creating entanglement without exchanging energy ( cf . for alternative outlooks on this problem ) . for clarity , we will focus on a single example , prototype of all irreversible transformations , that is the joule - gay - lussac expansion . the solution suggested here is nonetheless much more general , and is thought to be relevant for all ordinary thermodynamic systems . + the celebrated experiment , initially studied by gay - lussac and used later by joule , has played a central role in the history of thermodynamics . it consists in the adiabatic free expansion of a gas , initially kept in one side of a thermally isolated container while the other side is empty ( see fig . [ joule - gay - lussac - exp ] ) . when the tap separating the two flasks is opened , the gas starts spreading out until homogeneity is achieved . if the internal energy functional does not depend on the volume , then the initial and final temperatures are found to be equal . the treatment of the joule - gay - lussac expansion for an ideal gas can be found in all textbooks of thermodynamics and is as follows : there is no exchange of heat nor work between the gas and the outside , and , while the accessible volume is doubled , the entropy of the gas , and therefore of the whole universe , increases by indicating the irreversibility of the process . on the other hand , it is also instructive to have a mechanical description of the joule - gay - lussac expansion . assuming for simplicity a monoatomic ideal gas , the dynamics of the particles is governed by the hamiltonian where is the potential keeping the particles inside the whole container . liouville s theorem then ensures the conservation of the volume - form induced by the symplectic structure , that is to say , there can be no loss of information whatsoever at the microscopic scale . shifting to quantum mechanics , the evolution is given by the unitary operator where is simply obtained by canonical quantization of the classical hamiltonian . assuming thermal equilibrium , the initial state for the gas reads where is the potential constraining the particles in one flask only , and is the inverse temperature at which the gas has been prepared . while the von neumann entropy of the gas initially matches with the thermodynamic one , it is inevitably preserved under the unitary evolution , namely the tension between the irreversible macroscopic thermodynamics and the reversible microscopic mechanics is manifest . our proposal is that an increase of von neumann entropy is actually allowed by a more careful analysis of what a thermally isolated system is . + consider a classical particle arriving with a momentum perpendicularly to a wall initially at rest . assuming an elastic collision , and given the large ratio between the masses of the wall and the particle , one obtains that after the bounce , the particle has a momentum , while the wall has acquired a momentum . the kinetic energy transferred to the wall being negligible , the energy of the particle is conserved over the bounce with a very good approximation . 
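the claim that the wall absorbs a momentum of order twice the incoming one while taking up a negligible amount of energy follows from the textbook one - dimensional elastic - collision formulas ; a small numerical check ( the masses and momentum below are arbitrary ) :

```python
# One-dimensional elastic collision of a light particle with a heavy wall that
# is initially at rest: momentum of order 2p is transferred, but the kinetic
# energy given to the wall is suppressed by the mass ratio m/M.
m, M = 1.0, 1.0e27            # particle and wall masses (arbitrary units)
p = 3.0                       # incoming particle momentum
v = p / m

# standard elastic-collision result with the target initially at rest
v_particle = (m - M) / (m + M) * v        # ~ -v : the particle bounces back
v_wall = 2 * m / (m + M) * v              # ~ 2p/M : tiny recoil velocity

print("momentum given to the wall:", M * v_wall)               # close to 2*p
print("relative energy transfer  :", (0.5 * M * v_wall**2) / (0.5 * m * v**2))
# the relative energy transfer is ~ 4*m/M, utterly negligible here
```

the relative energy transfer is of order 4 m / m ratio , which is why the gas can be treated as thermally isolated even though it keeps exchanging momentum with the container .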
consequently , the classical dynamics of the gas is well described by the effective hamiltonian . from the point of view of quantum mechanics , the situation is more delicate . the total energy and momentum are also conserved during a rebound , so that the state for the composite system made of the particle and the wall evolves from to . by linearity , if the particle started in a superposition , then the unitary transformation associated to the bounce reads and , after tracing out the wall s degrees of freedom , the evolution of the state of the particle becomes which is , in general , not unitary . hence , the bounce is accompanied with a rise of von neumann entropy . the model presented above is of course very schematic , the physics of the interaction between the gas and the container , and the initial state of the latter , are in fact much more complicated . however , it already shows that such an interaction must affect their quantum correlations , and can lead to an increase of von neumann entropy for the gas ( see for a general result ) . in other words , at the quantum level , the energy distribution of the gas is conserved yet the evolution is not unitary , and the naive canonical quantization of does not strictly apply . it is now possible to derive the final state for the gas from its quantum mechanical description alone . the space of accessible states is such that the energy distribution for the gas remains unchanged ( for instance , ) . after a transient phase , characterized by a relaxation time , during which the gas and its environment get more entangled , the state of the universe can be seen as a generic unit vector in . following the conclusion of typicality , we find that the gas reaches a new equilibrium state , namely as a consequence , after a long - enough time , the variation of von neumann entropy of the gas matches the increase of entropy expected from thermodynamics . + let us now have a quick look at a slightly different experiment : the adiabatic reversible expansion of a gas pushing a piston . contrary to the joule - gay - lussac expansion , the temperature of the gas is decreasing as work is performed on the piston . during this process , the space of accessible states is changing , but its dimension remains constant since the size of the container and the thermal de broglie wavelength are increasing in the same way . later , if the piston is slowly pushed back , the gas reverts to its initial temperature . in this transformation , reversibility can not be understood as the possibility to return to the initial microstate , which is practically impossible as soon as a few particles are involved , but only as conservation of von neumann entropy . on the contrary , when a constraint initially applied to the system is suddenly relaxed , like in joule - gay - lussac s experiment , the space of accessible states is enlarged . the new states , of possibly higher entropy , can only be reached through interaction with the environment . however , this interaction does not need to involve significant energy transfer , the quantum system remains thermally isolated , and the growth of entanglement is responsible for the irreversibility of its evolution . + let us mention that the idea of varying entropy without exchanging energy has recently been suggested as a solution to the information loss paradox in black holes evaporation . 
more precisely , at the very last stage of its life , a black hole is believed to have lost almost all its mass in the form of hawking s radiation , leaving a planckian size object and a matter field in a thermal state , with an entropy proportional to the initial black hole area . hence , it has been argued that a huge amount of information should be encoded adiabatically into correlations with geometric degrees of freedom , in order to ensure purity of the final quantum state . + to summarize , the second law of thermodynamics , characterizing irreversible transformations , is usually considered at a mesoscopic level , by the mean of a coarse - grained entropy , while no information loss nor thermalization is expected to occur for the fundamental degrees of freedom . on the other hand , the framework of typicality attempts to make use of properties of quantum systems , specifically entanglement , as foundations for equilibrium statistical mechanics . in particular , it identifies thermodynamic and von neumann entropies . in that context , it is legitimate to raise the issue of thermalization , namely how an isolated system can reach the maximal entropy state . the observation that , for quantum systems , being thermodynamically isolated is qualitatively very different from being exactly isolated , opens the possibility of nontrivial quantum interactions without significant exchange of energy , and provides a physical ground for the second law at the microscopic scale . it is worth noting that recent work involving heat baths in squeezed states also points out the role of quantum correlations in thermodynamic processes . finally , this approach offers a fresh look at irreversibility . if the entropy of a system evolving unitarily has to be constant , it is not the case for those of its components . indeed , the nonextensivity of von neumann entropy allows for subsystems to see their entropy growing , due to a stronger entanglement between them , without any contradiction with the conservation of total information . that might be of relevance for explaining the arrow of time in cosmology : any region of space is constantly interacting with new degrees of freedom entering its past light cone , and , assuming a separable initial state , new quantum correlations can be established . this viewpoint may help us to understand the emergence , locally , of classical mixtures rather than highly - quantum states , and the increase of entropy for the observable universe . |
a branching random walk is a collection of points which , starting from a single point , diffuse and branch independently of the time , of their positions or of the other points , as in figure [ fig : brw ] . branching random walks appear in many contexts ranging from mathematics to biology .they can for example be used to describe how a growing population invades a new environment . in the one dimensional case ,see figure [ fig : brw ] , there is , at a given time , a rightmost individual at position , a second rightmost at and so on .( note that the rightmost at a time is not necessarily a descendant of the rightmost at time . )the expected position of the rightmost individual as well as the probability distribution of its position around are well understood ; the goal of the present paper is to describe the statistical properties of the positions of all the rightmost points in the system , in particular the distribution of the distances between the two rightmost points , the average density of points at some fixed distance from the rightmost , etc .one motivation for studying these distances is that the problem belongs to the broader context of extreme value statistics : trying to understand the statistical properties of the rightmost points in a random set of points on the line is a problem common to the studies of the largest eigenvalues of random matrices , of the extrema of random signals , or of the low lying states of some disordered systems such as spin glasses . in fact , the points generated after some time by a branching random walk can be viewed as the energies of the configurations of a directed polymer in a random medium , and the distances between the rightmost points as the gaps between the low lying energy states .the most studied example of branching random walk is the branching brownian motion : one starts with a single point at the origin which performs a brownian motion and branches at a given fixed rate ( right part of figure [ fig : brw ] ) .whenever a branching event occurs , the point is replaced by two new points which evolve themselves as two independent branching brownian motions .while the number of points generated after some time grows exponentially with time , the expected position of the rightmost point increases only linearly with time . in one dimension ,mc kean and bramson have shown that the probability distribution of the rightmost point is given by the traveling wave solution of the fisher - kpp equation , with a step initial condition . herewe will see that all the statistical properties of the rightmost points can be understood in terms of solutions to the fisher - kpp equation with appropriate initial conditions .we will also show that the distribution of the distances between these rightmost points has a long time limit which exhibits the striking property of superposability : the distances between the rightmost points of the union of two realizations of the branching brownian motion have the same statistics as those of a single realization .this paper is organized as follows : in section [ statistics ] we introduce some generating functions useful to study random sets of points on the line and show how one can use them to obtain all the properties of these random sets . 
in section [ sec : bbmfkpp ] we show that , for the branching brownian motion , all these generating functions are solutions of the fisher - kpp equation .we also show that the distribution of all the rightmost points as seen from or , alternatively , as seen from , has a long time limit which can be computed as the delay of fisher - kpp traveling waves .this distribution has the property of superposability . in section [ quantitative ] ,we present results , mostly numerical , on some specific aspects of the limiting distribution of points in the branching brownian motion , namely the distribution of the distance between the two rightmost points and the average density seen from the rightmost point . in section [ sec : disc ] we explain how the results on the branching brownian motion can be extended to more general branching random walks .finally , we study in section [ statmeas ] the distribution of all the rightmost points in a specific frame which depends on the realization and which was introduced by lalley and sellke .in this section , we introduce some useful quantities ( generating functions ) to characterize random sets of points on the line such that the number defined as is finite and vanishes for large enough . the first generating function one can define is from the knowledge of this function , one can extract the probability distribution function of the position of the -th rightmost point .indeed , by definition of , where is the probability that there are exactly points on the right of . one can notice that is the probability to have less than points on the right of .the generating function of these sums is , from , \lambda^2+\big[q_0(x)+q_1(x)+q_2(x)\big]\lambda^3+\cdots . \label{sumterms}\ ] ] but is also the probability that the -th rightmost point , if it exists , is on the left of .therefore , where is the probability that the -th rightmost point exists and is in the interval ] and ] is occupied by a point with probability and empty with probability , and the occupation numbers of disjoint intervals are uncorrelated .the probability that there are exactly points on the right of is given by from this , we obtain from ( [ defpsi],[exppsi ] ) and from ( [ defpsi2 ] ) in the poisson process : using , the generating function of the average between the -th and -th points is .\ ] ] the probability distribution function that the distance is equal to and the average density seen at a distance from the rightmost point are given by these expressions can be understood directly from the definition of the poisson process or , with a little more algebra , from ( [ p12],[rho ] ) . one can notice that and are given by the same expression with replaced by and are therefore analytic continuations of each other whenever is analytic . 
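the poisson - process formulas above are convenient to sanity - check by simulation : the number of points to the right of a given abscissa is poisson distributed with mean equal to the integrated density beyond that point , so its generating function is an exponential . the sketch below uses the density exp(-u) as an arbitrary but convenient choice , anticipating the exponential special case treated next .

```python
# For a Poisson point process of density rho(u), the number N(x) of points to
# the right of x is Poisson with mean m(x) = integral_x^infinity rho(u) du,
# so its generating function <lambda^N(x)> equals exp(-(1-lambda)*m(x)).
# Monte Carlo check with rho(u) = exp(-u).
import numpy as np

rng = np.random.default_rng(1)
x, lam = 0.3, 0.6
m_x = np.exp(-x)                      # integral of exp(-u) over (x, infinity)

def points_right_of_x(n_max=200):
    """Generate the points above x by inversion: under t = exp(-u), the
    process becomes a unit-rate Poisson process on (0, m_x)."""
    t = np.cumsum(rng.exponential(size=n_max))
    return -np.log(t[t < m_x])        # positions u_k > x, rightmost first

samples = [lam ** len(points_right_of_x()) for _ in range(50000)]
print("empirical  <lambda^N(x)>  :", np.mean(samples))
print("predicted exp(-(1-lam)*m) :", np.exp(-(1 - lam) * m_x))
```

the same inversion trick is what makes the exponential - density case of the next paragraph so explicit .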
in the special case where the density of the poisson process is an exponential , one can simply replace in the previous expressions by .this gives = \exp \left [ -e^{-\alpha \big(x - \frac{\ln(1- \lambda)}\alpha\big)}\right ] , \\\psi_{\lambda\mu}(x , y ) & = \exp \left [ - \mu ( 1-\lambda ) \frac{e^{-\alpha x}}\alpha - ( 1-\mu ) \frac{e^{-\alpha y}}\alpha \right ] , \end{aligned } \label{a1}\ ] ] so that from ( [ psid ] ) and thus one also has from start with a collection of points , distributed according to some measure and , independently for each point , replace it by a realization of another measure shifted by .we say that the points are _ decorated _ by the measure and call the resulting measure as decorated by .we assume that and are such that the decorated measure has a rightmost point .if the functions , , for the measure are known , the decorated measure is characterized by functions , , given by where the average is over all realizations of the measure .for instance ,if is a poisson process of density , then = \exp \left [ \int \big [ \psi_{\lambda}(x - u ) -1\big ] r(u ) \ ,\diffd u \right ] , \\\psi_{\lambda\mu}(x , y)= \exp \left [ \int \big [ \psi_{\lambda\mu}(x - u , y - u ) -1\big ] r(u ) \ , \diffd u \right ] .\end{gathered } \label{psi}\ ] ] for a decorated measure where the decoration is a poisson process of density , the average over the s in leads in general to complicated expressions for or .the expressions for and are however the same as in for the pure poisson process of density .in fact , all the statistical properties of the distances between the rightmost points are the same as those in the exponential poisson process .this can be understood from the following reason : decorating the points by independent realizations of a poisson process of density is equivalent to drawing a single realization of a poisson process of density , which is just the same as one realization of a poisson process of density shifted by the random variable .the same argument applies to ruelle cascades , which can be defined as follows : take an increasing sequence of positive numbers and start with a poisson process of density . at each step , each point in the system is decorated by a poisson process of density . at step , the measure of points in the system is simply , from the previous argument , a poisson process of density globally shifted by a random variable which depends on the positions of the points at step .therefore , the statistics of the distances of the rightmost points is the same as for the poisson process of density .we are now going to see how the generating functions ( [ defpsi],[defpsi2],[defpsi3 ] ) can be determined when the random set of points on the line are the points generated at time by a branching brownian motion . to define the branching brownian motionwe start at time with a single point at the origin .this point diffuses and branches , and its offspring do the same . after some time ,a realization of the process consists of a finite number of points located at positions for then , during the next time interval , each point , independently of what the others do , moves a random distance with and , and , with probability , is replaced by two new points located at the same position . for any function one can define the generating function by \right\rangle , \label{dual}\ ] ] where the for are the positions of the points of the branching brownian motion at time and denotes an average over all the possible realizations . 
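the branching brownian motion just defined is straightforward to simulate directly with a small time step . the sketch below assumes a normalisation that is not visible in the text above , namely branching rate 1 and brownian increments of variance 2 dt per step , which is the convention for which the front velocity quoted later equals 2 ; the discretisation parameters are arbitrary .

```python
# Direct simulation of a branching Brownian motion started from one particle
# at the origin.  Assumed normalisation (not visible in the text): branching
# rate 1 and Brownian variance 2*dt per step, for which the front speed is 2.
import numpy as np

rng = np.random.default_rng(3)

def bbm(t_max, dt=0.01, max_particles=200000):
    while True:
        x = np.zeros(1)
        for _ in range(int(t_max / dt)):
            x = x + rng.normal(scale=np.sqrt(2 * dt), size=len(x))
            x = np.concatenate([x, x[rng.random(len(x)) < dt]])   # branchings
            if len(x) >= max_particles:
                break
        if len(x) >= 2:       # redo the rare runs with no branching at all
            return x

for t_max in (4.0, 6.0, 8.0):
    tips = np.sort(bbm(t_max))[::-1]
    print("t = %.0f   rightmost = %6.2f   (2t = %2.0f)   gap d12 = %.2f"
          % (t_max, tips[0], 2 * t_max, tips[0] - tips[1]))
```

the rightmost position grows roughly linearly at speed 2 , with the slowly growing logarithmic lag discussed below ; the generating function defined above is the object whose time evolution is analysed next .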
by analyzing what happens during the very first time interval , one can see that the evolution of satisfies the first term in the right hand side represents the motion of the initial point during the first time interval and the second term represents the branching event which occurs with probability during this first time interval .taking to zero , one gets which is the fisher - kpp equation .( the fisher - kpp equation is often written as , but this is identical to by the change of variable . ) because there is a single point at the origin at time , the initial condition is simply , from , the generating function ( [ defpsi3 ] ) at time can be written , for , as \right\rangle = h_\phi(x , t ) , \label{psi - h}\ ] ] where the function is given by and where is the heaviside step function defined by see figure [ figphi ] for the general shape of . . ] with the choice ( [ choicef ] ) of , the generating function ( [ defpsi3 ] ) and , therefore , all the properties of the point measure in the branching brownian motion at time can be obtained as solutions of the fisher - kpp equation with the initial condition ( [ initcond ] ) . in the special case and of , i.e. for the initial condition , one gets and one recovers the well - known fact that the solution of the fisher - kpp equation with a step initial condition is the cumulative distribution function of the position of the rightmost point . in section[ quantitative ] we will choose and , other special cases of , given by to calculate the generating functions ( [ defpsi],[defpsi2 ] ) at time needed to determine the distribution and the density defined at the end of section [ statistics ] .the fisher - kpp equation has two homogeneous solutions : , which is unstable , and , which is stable .when the initial condition is given by the step function , see , the solution of ( [ fkpph ] ) becomes a traveling wave with the phase invading the phase . as the front is an extended object, one can define its position in several ways ; for example one could define as the solution of for some . 
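the traveling - wave behaviour invoked here can be reproduced with a simple explicit finite - difference integration . the sketch below uses the standard form u_t = u_xx + u ( 1 - u ) mentioned parenthetically above ( the change of variable relating it to the equation satisfied by the generating function is not reproduced here ) , with a step initial condition ; grid and time step are arbitrary but respect the explicit stability condition .

```python
# Explicit finite-difference integration of the Fisher-KPP equation in its
# standard form u_t = u_xx + u(1-u), with a step initial condition.
# The front settles to a speed close to 2.
import numpy as np

L, nx = 400.0, 4000
dx = L / nx
dt = 0.2 * dx**2                     # stable explicit step (dt <= dx^2/2)
x = np.linspace(-L / 2, L / 2, nx)
u = (x < 0).astype(float)            # step initial condition

def front_position(u, x, level=0.5):
    """Rightmost crossing of the given level, by linear interpolation."""
    i = np.where(u >= level)[0][-1]
    return x[i] + dx * (u[i] - level) / (u[i] - u[i + 1])

positions, times = [], []
t = 0.0
while t < 60.0:
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + dt * (lap + u * (1 - u))
    u[0], u[-1] = 1.0, 0.0           # keep the boundary values fixed
    t += dt
    if len(times) == 0 or t - times[-1] > 5.0:
        times.append(t); positions.append(front_position(u, x))

v = np.diff(positions) / np.diff(times)
print("measured front speed over successive intervals:", np.round(v, 3))
# the speed approaches 2 from below
```

the measured speed creeps up towards 2 , consistent with the logarithmic correction to the front position recalled in the next paragraph .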
hereit will be convenient to use the following definition one can see using ( [ hprob ] ) that defined by ( [ defmt ] ) is the average position of the rightmost point .if the initial condition ( [ initcond ] ) is not a step function but is such that for all large enough and is a constant smaller than 1 for all large negative , as in ( [ choicef],[choicefsimple ] ) , the solution of ( [ fkpph ] ) becomes also a traveling wave .its position can be defined as in ( [ defmt ] ) by we are now going to show that the whole measure seen from the rightmost point can be written in terms of this position : one can rewrite ( [ the - frame ] ) as then from and ( [ defmtbis ] ) one gets where is the function with .therefore , _ with the definition ( [ defmtbis ] ) of the position of the front _ , the whole information about the measure in the frame of the rightmost point , at any time , can be extracted from the dependence of in the long time limit , it is known that the traveling wave solution of ( [ fkpph ] ) , with the initial condition ( [ teta ] ) , takes an asymptotic shape , .this means that { } f(x ) , \label{hf2}\ ] ] where satisfies it is also known , since the work of bramson , that , in the long time limit , the traveling wave moves at a velocity 2 and that its position ( [ defmt ] ) is given by if the function is not the step function but is of the form ( [ choicef],[choicefsimple ] ) , the solution of ( [ fkpph ] ) becomes also a traveling wave with the same shape .this wave is centered around the position , defined in ( [ defmtbis ] ) , and one has { } f(x ) .\label{hmphif}\ ] ] for large times is still given by ( [ bramson ] ) , but with a different constant .this means that { } f[\phi ] , \label{defdelay}\ ] ] where ] .the measure of ( the rightmost points in the branching brownian motion seen from the frame ) also has a well - defined limit when . indeed , using ( [ the - frame-2 ] ) and, one gets { } 1 - \left ( \partial_{z_1 } + \cdots + \partial_{z_{k } } \right ) f[\phi ] .\label{the - frame-3}\ ] ] therefore , in the long time limit , all the information on the distribution of the rightmost points seen from is contained in the dependence of the delay ] in or ( [ the - frame-3 ] ) depends only on : it would not change if we had chosen another definition of the front position . let us now consider independent branching brownian motions starting at at positions . following the same argument as in section [ sec : fkpp ] , the generating function ( [ defpsi3 ] ) of the union of the points at time of these branching brownian motions is given by the following generalization of ( [ psi - h ] ) where is the same solution of ( [ fkpph ] ) with the initial condition ( [ choicef ] ) as in the case of a single branching brownian motion starting at the origin .in the long time limit , using , { } \prod_{\alpha=1}^m f(x+ f[\phi]-u_\alpha ) .\label{stat2}\ ] ] this means that here again , there is a limiting measure when for the rightmost points in the frame .this measure is not the same as before ( when one starts with a single point at the origin ) , as can be seen by comparing and . in particular, the distribution of the rightmost point is different . 
in the frame of the rightmost point , however , one can see using and ( [ the - frame-2-bis ] ) that { } 1 - \left ( \partial_{z_1 } + \cdots + \partial_{z_{k } } \right ) f[\phi],\ ] ] as in . it is remarkable that the generating function depends neither on the number of starting points nor on their positions . the picture which emerges is that if we superpose the rightmost points of several branching brownian motions , starting at arbitrary positions , the limiting measure in the frame of the rightmost point is , when , the same as for a single branching brownian motion . we will say that , in the long time limit , the measure of the distances between the rightmost points in a branching brownian motion becomes _ superposable _ : the union of two ( or more ) realizations of the process ( even moved by arbitrary translations ) leads to the same measure in the frame of the rightmost point as for a single branching brownian motion . as a remark , it is easy to check that the poisson process with an exponential density , see section [ sec : pppexp ] , is an example of a superposable measure : the superposition of such poisson processes translated by arbitrary amounts is identical to a single poisson process with an exponential distribution translated by . one can also check that , for the same reason , all the decorated measures of section [ sec : deco ] are superposable when is a poisson process with an exponential density . in section [ statmeas ] , we will state a stronger version of the superposability property of the branching brownian motion . in this section we obtain , by integrating numerically the equation ( [ fkpph ] ) with the appropriate initial condition , some statistical properties of the limiting measure seen from the rightmost point . the analytic calculation of the delay ] where is the standard fisher - kpp front with the step initial condition ( it is also easy to see from the definition of . ) then writing at time that is solution of the fisher - kpp equation , and using the initial condition in , one gets then , from one gets figure [ fig : distrib_d ] shows our numerical result for the distribution of the distance between the two rightmost points in the long time limit . more details on our numerical procedure are given in appendix [ sec : numeric ] . ( caption of figure [ fig : distrib_d ] : probability of observing a distance between the two rightmost points in the limit , as a function of the distance ; for small distances ( left part ) , the distribution is very close to , and for larger values one observes a faster exponential decay of order . ) we see that is very close to for the values of which have a significant probability of occurring . this is of course consistent with an average distance ( [ valuesdn ] ) close to .
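a direct monte carlo estimate of this distance distribution can be obtained by repeating the branching brownian motion simulation sketched earlier and recording the gap between the two rightmost particles at a fixed time . the sketch below uses the same assumed normalisation ( branching rate 1 , brownian variance 2 dt per step ) and arbitrary run parameters ; finite - time and discretisation effects shift the numbers somewhat .

```python
# Monte Carlo estimate of the distribution of the distance d12 between the
# two rightmost particles of a branching Brownian motion at a fixed time.
# Assumed normalisation: branching rate 1, Brownian variance 2*dt per step.
import numpy as np

rng = np.random.default_rng(4)

def gap_between_two_rightmost(t_max=5.0, dt=0.02):
    x = np.zeros(1)
    for _ in range(int(t_max / dt)):
        x = x + rng.normal(scale=np.sqrt(2 * dt), size=len(x))
        x = np.concatenate([x, x[rng.random(len(x)) < dt]])   # branchings
    if len(x) < 2:                        # rare: no branching happened at all
        return gap_between_two_rightmost(t_max, dt)
    top = np.sort(x)[-2:]                 # the two rightmost particles
    return top[1] - top[0]

gaps = np.array([gap_between_two_rightmost() for _ in range(1000)])  # ~minutes
print("mean gap <d12> :", round(gaps.mean(), 3))
for a in (0.5, 1.0, 2.0):
    print("P(d12 > %.1f)  : %.3f" % (a, np.mean(gaps > a)))
slope = (np.log(np.mean(gaps > 2.0)) - np.log(np.mean(gaps > 0.5))) / 1.5
print("empirical decay rate for moderate gaps:", round(-slope, 2))
```

for the moderate gaps that dominate the statistics the decay is indeed close to exponential , in line with the figure discussed above .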
for large ( events with a small probability ) , however , the exponential decay is faster .we now present a simple argument leading to the following prediction , which is consistent with our numerical data , in the long time limit , the right frontier of the branching brownian motion moves at velocity .let us assume that a large distance between the two rightmost points is produced by the following scenario : by a rare fluctuation , the rightmost point escapes and , without branching , goes significantly ahead while the rest of the points go on as usual , forming a frontier moving at velocity .such an event leads to the distance between the two rightmost points if , during a time , the rightmost point moves ( by diffusion alone ) by a distance without branching .the probability of such a scenario is \times e^{-\tau}. \label{prob1}\ ] ] the first term is the probability of diffusing over a distance during time , and the second term is the probability of not branching .the probability to observe a large distance is then dominated by the events with chosen to maximize , that is and this leads to ( [ decayratedist2 ] ) in good agreement with the numerical data of figure [ fig : distrib_d ] .there is a remarkable relation between the decay rate in ( [ decayratedist2 ] ) and the shape of the traveling wave solution of ( [ deff2 ] ) . aroundthe _ stable _ region , the equation can be linearized and one has we emphasize that this is a linear analysis of the _ stable _ region , which is usually uninteresting ( in contrast to the _ unstable _ region which determines the velocity ) .the solutions for are is the correct root as it is the only positive solution and has to vanish .the other solution ( the wrong root ) coincides ( up to the sign ) with the decay rate of the distribution for the distance between the two rightmost points ( [ decayratedist2 ] ) .as explained in appendix [ sec : largedeviation ] , this coincidence exists in a broad class of branching processes : each variant of the branching brownian motion is linked to a variant of the fisher - kpp equation , and the wrong root in the linear analysis of the stable region always gives the asymptotic decay rate of . to obtain the average density of points at a distance on the left of the rightmost point, one needs , according to , to calculate for close to 1 . as in section[ sec : distances ] , we first remark , from the definition , that =h_\theta(x , t) ] is the same as in . by choosing ( the step function ) , reduces to . by choosing as in, one sees from that the distribution of points at the right of the branching brownian motion conditioned by reaches a long time limit where only appears through the global shift .this means that at large times , the distribution of the rightmost points in a branching brownian motion has a well defined measure _ independent of _ located around . as an example , if one chooses the function defined by , one can easily show from and that , in the frame , the average density of points at any position is infinite in the long time limit . if one considers two branching brownian motions and starting at arbitrary positions , then the points in at large time will be characterized by a random value and a realization of the point measure described by ; idem for the points in . 
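the optimisation over the escape time sketched above can be carried out symbolically . the sketch below assumes the same normalisation as the other sketches ( front speed 2 and brownian variance 2 tau over a time tau , so that the probability of the escape scenario is taken as exp ( - ( d + 2 tau )^2 / ( 4 tau ) ) exp ( - tau ) ) ; since the precise expressions are not visible in the text above , this is a reconstruction rather than the paper s formula .

```python
# Saddle-point estimate of the large-d decay of the gap distribution:
# minimise the exponent of  exp(-(d+2*tau)**2/(4*tau)) * exp(-tau)  over the
# escape time tau (front speed 2 and Brownian variance 2*tau are assumed).
import sympy as sp

d, tau = sp.symbols("d tau", positive=True)
exponent = (d + 2 * tau) ** 2 / (4 * tau) + tau    # minus log of the probability

candidates = sp.solve(sp.diff(exponent, tau), tau)
tau_star = [s for s in candidates if s.is_positive][0]
rate = sp.simplify(exponent.subs(tau, tau_star) / d)
print("optimal escape time tau* =", tau_star)      # sqrt(2)*d/4
print("decay rate of P(d)       =", rate)          # 1 + sqrt(2)

# roots of the linearised travelling-wave equation near the stable state,
# w'' + 2 w' - w = 0, i.e. r**2 + 2*r - 1 = 0
r = sp.symbols("r")
print("roots of r^2 + 2r - 1    =", sp.solve(r ** 2 + 2 * r - 1, r))
```

the rate found this way indeed coincides , up to the sign , with the second ( `` wrong '' ) root of the linearised equation near the stable state , as stated above . with this aside done , we return to the pair of branching brownian motions introduced just above .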
if one considers the union of these two branching brownian motions , one gets from }}\big ) = \exp\big(- e^{-[x-\ln(az)+f[\phi]]}\big),\ ] ] with means that the point measure reached in the long time limit in the frame is the same whether one started initially with one , two or , by extension , any finite number of initial points at arbitrary positions on the line .what does depend on the initial number of points is only the law of the random number , not the positions around .this is to be related to the discussion in section [ superpos1 ] , where we showed that , in the long time limit , the measure seen from depends on the initial number of points while the measure seen from does not .furthermore , the large time measure of the points in the frame has the following property : ( think of as the offspring of in the ] , see , and have therefore the same large time expansion as .thus , we extrapolated our numerical data to the large time limit by fitting it with the function for times larger than ( typically ) 5000 , see figure [ fit ] , and by using as the end result . by the function ( line ) .the inset shows the quality of the fit by displaying the ratio between the data points and the fitting function . ] on figure [ fig : distrib_d ] , the three data points were presented together ; on figure [ fig : density ] , we have drawn together for each data set the function using in each case the value of of table [ tabval ] . in both cases ,the superposition was nearly perfect , and so we expect that on the scales of the figure , the curves would not change noticeably for smaller values of and .in this appendix we generalize , to any branching random walk , the argument leading to the asymptotic decay of the distribution of the distance between the two rightmost points in the branching brownian motion .we consider a generic branching random walk in discrete space ( with spacing ) and time ( with intervals ) defined by the following family of functions we assume that , so that there is no extinction .then can be thought as the probability that the point does not branch but moves by a distance . the continuous time and/or space casescan be obtained as suitable and/or limits .let ] is the delay function . for , reduces to .we first give an outline of lalley s and sellke s proof applied to the case .given the positions of the points at time , the system as time can be seen as a collection of independent branching brownian motions at time starting from the . 
therefore \big| \{x_i(s)\}\big\rangle = \prod_i h_\phi\big(x - x_i(s),t - s\big),\ ] ] where the product in the right hand side is made on all the points present at time .we replace by , to center around the position of the front , and suppose large .it is easy to see from bramson s formula that as becomes large , so that \big| \{x_i(s)\}\big\rangle = \prod_i h_\phi\big(m_{t - s}+2s + x - x_i(s)+{o}(1),t - s\big),\ ] ] and , using , \big| \{x_i(s)\}\big\rangle = \prod_i f\big(2s+x - x_i(s)+f[\phi]\big).\ ] ] we now take large .of all the points present at time , the rightmost is around , see .therefore , diverges for all .using , \big| \{x_i(s)\}\big\rangle \simeq\exp\big(-\sum_i \big[a\big(2s+x - x_i(s)+f[\phi]\big)+b\big]e^{-2s - x+x_i(s)-f[\phi ] } \big).\ ] ] following lalley and sellke , we introduce the quantities ^{-2s+x_i(s)},\ ] ] see , so that \big| \{x_i(s)\}\big\rangle \simeq\exp\left(- \left[a z_s + \big(ax+af[\phi]+b\big)y_s\right]e^{-x - f[\phi]}\right ) .\label{lstls}\ ] ] finally , the most technical part of lalley s and sellke s proof is that and are martingales converging when to and respectively , which leads to .we do not reproduce this part of the proof here as it does not concern our extension with the function and it works in exactly as in . in ( [ ls2 ] ), the average is made on all the realizations with a given set of points at a large time but the only relevant quantity appearing in the generating function is .one would obviously have reached the same result if one had conditioned by instead of by the .furthermore , as converges quickly to , as illustrated on figure [ fig2bbm ] , we argue that conditioning by at a large time or directly conditioning by should be equivalent , hence ( [ lsint],[ls2int ] ) .kolmogorov , a. , petrovsky , i. , piscounov , n. : tude de lquation de la diffusion avec croissance de la quantit de matire et son application un problme biologique .tat moscou , a * 1*(6 ) , 125 ( 1937 ) | we show that all the time - dependent statistical properties of the rightmost points of a branching brownian motion can be extracted from the traveling wave solutions of the fisher - kpp equation . we show that the distribution of all the distances between the rightmost points has a long time limit which can be understood as the delay of the fisher - kpp traveling waves when the initial condition is modified . the limiting measure exhibits the surprising property of superposability : the statistical properties of the distances between the rightmost points of the union of two realizations of the branching brownian motion shifted by arbitrary amounts are the same as those of a single realization . we discuss the extension of our results to more general branching random walks . |
let be a simply connected subset of the hexagonal faces in the planar honeycomb lattice .two faces of are considered adjacent if they share an edge .suppose further that the boundary faces of are partitioned into a `` left boundary '' component , colored black , and a `` right boundary '' component , colored white , in such a way that the set of interior faces remains simply connected .( see figure [ hesetuphexagons ] . ) given any black white coloring of the faces of , there will be a unique interface separating the cluster of black hexagons containing the left boundary from the cluster of white hexagons containing the right boundary .if the colors are chosen via independent bernoulli percolation , we may view as being generated dynamically as follows : simply begin the path at an edge separating the left and right boundary components ; when hits a black hexagon , it turns right , and when it hits a white hexagon , it turns left .each time it hits a hexagon whose color has yet to be determined , we choose that hexagon s color with a coin toss .the _ harmonic explorer _ ( he ) is a random interface generated the same way , except that each time hits a hexagon whose color has yet to be determined , we perform a simple random walk on the space of hexagons , beginning at , and let assume the color of the first black or white hexagon hit by that walk .( see figure [ hesetuphexagons ] . ) in other words , we color black with probability equal to the value at of the function which is equal to on the black faces and on the white faces , and is _ discrete harmonic _ at the undetermined faces ( i.e. , its value at each such face is the mean of the values on the six neighboring faces ) .denote by the value of this function after steps of the harmonic explorer process ; that is , is if is black , if is white , and discrete harmonic on the faces of undetermined color .note that is also the probability that a random walk on faces , started at , hits a black face before hitting a white face .it is easy to see ( and proved below ) that for any fixed , is a martingale and that the harmonic explorer is the only random path with this property .we will see later that is the only random path with a certain continuous analog of this property .it was conjectured in and proved in that if the interior hexagons are each colored via critical bernoulli percolation ( i.e. , ) , then , in a certain well - defined sense , the random paths tend to the stochastic loewner evolution with parameter ( ) as the hexagonal mesh gets finer .( see the survey for background on sle . )it has been further conjectured that if colors are instead chosen from a critical fk cluster model ( where one weights configurations according to the total number of clusters and the lengths of their interfaces ) , then will converge to some with , where depends on the weight parameters .we will prove that , as the mesh gets finer , the harmonic explorer converges to chordal .there are also natural variants of the harmonic explorer ; for example , we might replace the honeycomb lattice with another three - regular lattice or replace the simple random walk on faces with a different periodic markov chain .one may even use a non - three - regular lattice provided one fixes an appropriate ordering ( say , left to right ) for determining the color of multiple undetermined faces that are `` hit '' simultaneously by the he path . 
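the colouring rule just described can be made concrete in a few lines : compute the discrete harmonic extension of the boundary values ( 1 on black faces , 0 on white faces ; the precise constants are not visible above , only their role matters ) by relaxation , then colour the face hit by the interface black with that probability . the sketch below runs on an arbitrary toy adjacency graph rather than the honeycomb geometry of the figures , and the face names are made up .

```python
# Sketch of one harmonic-explorer colouring step on a small toy graph: faces
# are nodes, black boundary faces carry value 1, white ones 0, and the value
# at an undetermined face (obtained by relaxation) equals the probability
# that a simple random walk from it hits a black face before a white one.
import random

# toy adjacency (NOT the honeycomb lattice of the figures above)
neighbours = {
    "a": ["L1", "b", "c"], "b": ["a", "c", "R1"], "c": ["a", "b", "L2", "R2"],
    "L1": [], "L2": [], "R1": [], "R2": [],
}
value = {"L1": 1.0, "L2": 1.0, "R1": 0.0, "R2": 0.0}      # black = 1, white = 0
undetermined = ["a", "b", "c"]

def harmonic_extension(value, undetermined, sweeps=500):
    """Jacobi relaxation: repeatedly replace each undetermined value by the
    mean of its neighbours' values (converges to the discrete harmonic one)."""
    h = dict(value, **{f: 0.5 for f in undetermined})
    for _ in range(sweeps):
        h.update({f: sum(h[g] for g in neighbours[f]) / len(neighbours[f])
                  for f in undetermined})
    return h

def colour_hit_face(face, value, undetermined):
    """When the interface hits an undetermined face, colour it black with
    probability equal to the harmonic value at that face."""
    h = harmonic_extension(value, undetermined)
    value[face] = 1.0 if random.random() < h[face] else 0.0
    undetermined.remove(face)
    return value[face]

print("first hit face 'a' coloured:", colour_hit_face("a", value, undetermined))
print("remaining undetermined     :", undetermined)
```

replacing the relaxation by the corresponding random - walk estimate gives the same probabilities , which is the formulation used in the text ; the same rule applies verbatim to the lattice variants mentioned at the end of the paragraph above .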
provided the simple random walk converges to brownian motion as the mesh gets finer , we see no barrier to extending our results to all of these settings .our proofs are more like the lerw proofs in ( which hold for general lattices ) than the percolation proof in ( which uses the invariance of the lattice under rotation in an essential way ) . however , for simplicity , we will focus only on the hexagonal lattice in this paper .although physicists and mathematicians have conjectured that many models for random self - avoiding lattice walks have conformally invariant scaling limits [ e.g. , the infinite self - avoiding walk , critical percolation cluster boundaries on two - dimensional lattices , critical ising model interfaces , critical fk cluster boundaries and model strands , etc . ] , rigorous proofs are available only in the following cases : percolation interface on the hexagonal lattice ( which converges to chordal ) , harmonic explorer ( chordal ) , loop erased random walk ( lerw ) on a periodic planar graph ( radial ) , the uniform spanning tree ( ust ) boundary ( chordal ) and the boundaries of simple random walks ( essentially here conformal invariance follows easily from the conformal invariance of brownian motion ) .the harmonic explorer is similar in spirit to the loop erased random walk ( lerw ) and diffusion limited aggregation ( dla ) .all three models are processes based on simple random walks , and their transition probabilities may all be computed using discrete harmonic functions with appropriate boundary conditions . since simple random walks on two - dimensional lattices have a conformally invariant scaling limit ( brownian motion ) , and since harmonicity ( in the continuous limit ) is a conformally invariant property , one might expect that all three models would have conformally invariant scaling limits .however , simulations suggest that dla is not conformally invariant .this paper follows the strategy of , and uses some of the techniques from that paper .we will freely quote results from , and therefore advise the reader to have a copy of on hand while reading the present paper .the purpose of this section is to briefly review some background about loewner s equation and sle , and then present the basic strategy of the paper . for more details ,the reader is encouraged to consult or .let .suppose that \to\overline{{\mathbb{h}}} ] .for every ] which satisfies the so - called _ hydrodynamic _ normalization at infinity the limit ):=\lim_{z\to\infty } z\bigl(g_t(z)-z\bigr)/2\ ] ] is real and monotone increasing in .it is called the ( half plane ) _capacity _ of ] is also continuous in , it is natural to reparameterize so that )=t ] .this limit does exist . )the function is continuous in , and is called the _ driving parameter _ for .one may also try to reverse the above procedure .consider the loewner evolution defined by the ode ( [ e.chordal ] ) , where is a continuous , real - valued function .the path of the evolution is defined as , where tends to from within the upper half plane , provided that the limit exists .the process ( chordal ) in the upper half plane , beginning at and ending at , is the path when is , where is a standard one - dimensional brownian motion .( `` standard '' means and =t ] either in or on the left - hand side of ] is .the latter fact may be seen by conformally mapping the half plane to a strip using the function . 
) in other words , for fixed , is the harmonic function that is equal to on one side of ] .the value of this martingale a.s . tends to either zero or as tends to infinity , depending on whether is on the left or the right side of the path ( see , lemma 3 ) .hence , at a fixed time , represents the _ probability _ that , conditioned on the sle path up until time , the point will lie to the left of the path .it is easy to see ( and shown below ) that a discrete version of this property holds for the harmonic explorer .the strategy of our proof will be , roughly speaking , to show that the fact that this property holds at two distinct values of is enough to force the loewner driving process for the path traced by the harmonic explorer to converge to brownian motion .this is because the fact that is a martingale at gives a linear constraint on the drift and diffusion terms at that point , and using two values of gives two linear constraints , from which it is possible to calculate the drift and diffusion exactly .the arguments and error bounds needed to make this reasoning precise are essentially the same as those given in ( but the martingales considered there are different ) .the fact that the loewner driving process converges to brownian motion will enable us to conclude that he converges to in the hausdorff topology .we will then employ additional arguments to show that the convergence holds in a stronger topology .we remark that we will reuse this strategy in to prove that a certain zero level set of the discrete gaussian free field ( defined on the vertices of a triangular lattice , with boundary conditions equal to an appropriately chosen constant on the left boundary and on the right boundary ) converges to chordal . to keep notation consistent with ( which will cite the present paper ), we will use the dual formulation ( representing hexagons by vertices of the triangular lattice ) in our precise statements and proofs below .we now introduce the precise combinatorial notation for he that we will use in our proofs .first , the triangular grid in the plane will be denoted by .its vertices , denoted by , are the sublattice of spanned by and ; two vertices are adjacent if their difference is a sixth root of unity .if , and , let denote the _ inradius _ of about ; that is , .let denote the set of domains whose boundary is a simple closed curve which is a union of edges from the lattice .if is any set of vertices in , and is a bounded function , then there exists a unique bounded function which agrees with in and is harmonic at every vertex in .this function is called the _ discrete harmonic extension _ of .[ in fact , is the expected value of at the point at which a simple random walk started at hits .uniqueness is easily established using the maximum principle . ]let .let denote the set of vertices in .let and be the centers of two distinct edges of the grid on .( see figure [ f.setup ] . )let ( resp . ) be the positively ( resp . negatively ) oriented arc of from to .define to be on , and on .the he [ depending on the triple is a random simple path from to in .let be i.i.d .random variables , uniform in the interval ] .note that is also the discrete harmonic extension of its restriction to , and similarly for .since the harmonic extension is a linear operation and =h_n(v) ] be the he path with the parameterization proportional to arclength , where for .let be a conformal map onto that takes to and to .note that is unique up to positive scaling , and .let . 
instead of rescaling the grid , we consider larger and larger domains .the quantity turns out to be the appropriate indicator of the size of , from the perspective of the map . indeed , if is small , then the image under of the grid in is not fine near , and we can not expect to look like . as we will see , does approach when .let be the path , parameterized by capacity from in , and let be the path in .let be the metric on given by , where maps onto .if , then is equivalent to , and is equivalent to . note that although we started by mapping our domain to the half plane ( with boundary points , and inradius measured from the preimage of ) , the above metric corresponds to a mapping to the unit disc ( with boundary points , and inradius measure from the preimage of ) .the half plane is the most convenient setting for describing loewner evolution and chordal , but the metric derived from the unit disc map is more convenient because it is compact .[ t.heunifconv ] as , the law of tends to the law of the path , with respect to uniform convergence in the metric . in other words , for every there is some such that if , then there is a coupling of and such that <{\varepsilon}.\ ] ]let denote the loewner driving process for .let be a standard one - dimensional brownian motion. a slightly weaker form of theorem [ t.heunifconv ] will follow as a consequence of the fact that for every the restriction of to ] .this , in turn , will be a consequence of the following local statement .[ p.helocal ] for ] , ] . since is discrete - harmonic , it is approximately equal to the harmonic function on with the corresponding boundary values. the difference can then be approximated by a function of and .applying =0 ] and on and the `` right side '' of ] if , where is the infimum of all possible values for . in that case , , which implies our claim . we will henceforth assume , with no loss of generality , that .note that the above argument also gives a positive lower bound on .now fix some vertex satisfying .let be larger than , in the notation of lemma [ l.harmconv ] .assume now that holds. then we may apply ( [ e.harmconv ] ) with , and . since is a martingale, it satisfies = h_n(w_0) ] in is .( the beurling projection theorem tells us that suffices . ) by conformal invariance of harmonic measure , the harmonic measure from of ] .note that . above, we have seen that there is a constant positive lower bound for . by ( [ e.chordal ] ) , is monotone decreasing in .hence , has a constant positive lower bound for . by ( [ e.chordal ] ), we get for .integrating then gives for .as ] is .consequently , the harmonic measure estimate gives the bound =o(\delta) ] is the set of points hitting the real line under loewner s evolution ( [ e.chordal ] ) in the time interval ] . since , we have .\ ] ] by integrating this relation over ] is . after calculating the derivatives and applying ( [ e.dz ] ) , we get \bigr ] \\ { \noalign { } } & { \displaystyle}{}- { \operatorname{im}}\bigl(\bigl(\phi_n(w_0)-w(t_n)\bigr)^{-1}\bigr ) { \mathbf{e}}\bigl[w(t_m)-w(t_n ) { \vert}\gamma [ 0,n]\bigr ] \\ { \noalign { } } & { \displaystyle}{}- \tfrac12 { \operatorname{im}}\bigl(\bigl(\phi_n(w_0)-w(t_n)\bigr)^{-2}\bigr ) { \mathbf{e}}\bigl [ \bigl(w(t_m)-w(t_n)\bigr)^2{\vert}\gamma [ 0,n]\bigr ] .\end{array}\ ] ] we now assume that . 
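The displayed expansion above is, in essence, a second-order Taylor expansion in the increment of the driving process. Writing \(z=\phi_n(w_0)\) and \(\delta=W(t_m)-W(t_n)\), and recalling that the harmonic observable is built from \(\operatorname{Im}\log(z-W)\) (equivalently, from \(\arg(z-W)\), up to an affine change), one has, with error terms suppressed (this is only a sketch of the standard computation),
\[
  \operatorname{Im}\log\bigl(z-W(t_n)-\delta\bigr)
  =\operatorname{Im}\log\bigl(z-W(t_n)\bigr)
  -\operatorname{Im}\bigl((z-W(t_n))^{-1}\bigr)\,\delta
  -\tfrac12\operatorname{Im}\bigl((z-W(t_n))^{-2}\bigr)\,\delta^{2}
  +O\bigl(|\delta|^{3}\bigr).
\]
Taking conditional expectations given \(\gamma[0,n]\) and using the (approximate) martingale property of the observable at two distinct interior points yields two linear equations for \(\mathbf E[\delta\mid\gamma[0,n]]\) and \(\mathbf E[\delta^{2}\mid\gamma[0,n]]\); solving them shows that the conditional drift is negligible and the conditional variance is approximately four times the capacity increment, as it must be if the limiting driving process is \(W(t)=2B(t)\), i.e. the driving function of \(\mathrm{SLE}_4\).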
the koebe distortion theorem ( again ,see , section 1.3 ) then implies that a vertex closest to satisfies .the koebe distortion theorem also shows that a vertex closest to satisfies and .consequently , we may apply ( [ e.gettingthere ] ) with replaced by each of . with , we get ( [ e.hemart ] ) . now eliminating the term ] converges in law to the corresponding restriction of standard brownian motion .let , and let \dvtx |w(t/4)|\le{\varepsilon}^{-1}\} ] is unbounded , which implies for .since and , we have .we claim that there is a compact subset , which depends only on and , such that holds for each . indeed , flows according to ( [ e.chordal ] ) starting from at to at . for every ] converges in law to the corresponding stopped brownian motion follows from the proposition and the skorokhod embedding theorem , as in , section 3.3 .standard brownian motion is unlikely to hit before time if is small .thus , we obtain the corollary by taking a limit as .let denote the hausdorff distance ; that is , for two nonempty sets , [ l.parthaus ] for every and there is some so that if , then there is a coupling of and so that ,{\tilde{\gamma}}[0,t])\dvtx { 0\le t\le t}\}>{\varepsilon}\bigr]<{\varepsilon}.\ ] ] we know that is a simple path , from .let be the loewner chain corresponding to , and let be the brownian motion so that the driving process for is . then is obtained by solving ( [ e.chordal ] ) with replaced by .let denote the set of points in whose distance from ] is measurable and satisfies \}\le \delta ] is connected and contains , when ] is within distance from ] and such that \}\to0 ] .since the collection of probability measures on the ( compact ) hausdorff space of closed nonempty subsets of is compact under convergence in law , by passing to a subsequence , if necessary , we get a coupling of and a hausdorff limit of ] .moreover , it is clearly connected .note that the carathodory kernel theorem ( , theorem 1.8 ) implies that the maps \to{\mathbb{h}} ] a.s .since the limit does not depend on the subsequence , it follows that ] . as ] for every rational in ] ) .later , it will be shown that the convergence is uniform when we use the metric on . to understand the issues here ,we describe two examples where one form of convergence holds and another fails .we start with an example similar to one appearing in , section 3.4 .let , and . let be the polygonal path determined by the points , etc .let be the path , , reparameterized by capacity .then the path reparameterized by capacity converges to in the sense of lemma [ l.parthaus ] .moreover , the loewner driving process for converges locally uniformly to the constant , which is the driving process for .however , one can not reparameterize so that locally uniformly . 
to illustrate the second issue ,consider the polygonal path determined by the points , where the last segment can be chosen as any ray from to in .then , reparameterized by capacity , does converge locally uniformly to .however , it does not converge uniformly with respect to the metric .the purpose of this subsection is to develop a tool which will be handy for proving some upper bounds on probabilities of rare events for the he , the discrete excursion measure .it is a discrete analogue of the ( two - dimensional ) brownian excursion as introduced in , section 2.4 .a slightly different variant of the continuous brownian excursion was studied in .let be a domain in the plane whose boundary is a subgraph of the triangular lattice .( we work here with the simple random walk on , but the results apply more generally to other walks on other lattices . )let denote the set of vertices in .a directed edge of is just an edge of with a particular choice of orientation ( i.e. , a choice of the initial vertex ) .if ] will denote the same edge with the reversed orientation .let denote the set of directed edges of whose interiors intersect and whose initial vertex is in .let denote the set of directed edges of whose interiors intersect and whose terminal vertex is in ; that is , .let and .for every , let be a simple random walk on that starts at and is stopped at the first time such that .let denote the restriction of the law of to those walks that use an edge of as the first step and use an edge of as the last step .( this is zero if is not adjacent to an edge in , and generally it is not a probability measure . ) finally , let .this is a measure on paths starting with an edge in , ending with an edge in and staying in in between .it will be called the discrete excursion measure from to in . when , we will often abbreviate .[ l.dxgreen ] let be as above , and let .fix , and for every path let be the number of times visits .then where denotes the probability that a simple random walk started from will first exit through an edge in .in particular , .let denote the probability space of random walks starting at and stopped when they first exit .for a pair , let denote the reversal of followed by .then is a map from to the support of .clearly , .if is in , the support of , then the cardinality of the preimage is precisely .consequently , we have this proves the claim in the case .the general case is similarly established .[ c.hitball ] let and be as above , and let .assume that is connected .let be the ball centered at whose radius is , and let be the set of paths that visit .then for some absolute constant .it is well known that there is an absolute constant with for every such that is at least half the radius of , where is green s function .see , for example , , ( 3.5 ) , where the radius of the ball is different , but the same proof applies . consequently , given that a random walk hits , its expected number of visits to before exiting is between and .thus , the corollary follows from ( [ e.dxgreen ] ) .it is also important to note that ; that is , the total mass of is equal to that of .this is proved by reversing the paths , which gives a measure - preserving bijection between the support of these two measures .we now return to the setup and notation of section [ s.hesetup ] .fix some ball that intersects ] , under some simple geometric assumptions . 
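Before carrying out this estimate, the following numpy sketch may help make the object in Lemma [l.dxgreen] concrete: the expected number of visits of a stopped simple random walk, i.e. the discrete Green's function of the domain, computed exactly by a linear solve. The square-lattice domain below is only an illustrative stand-in (the paper works on the triangular lattice, but the construction is identical): with Q the transition matrix of the walk restricted to the interior, G = (I - Q)^{-1} and G(v0, v) is the expected number of visits to v before exiting, for a walk started at v0.

```python
import numpy as np

# Interior vertices of a small square-lattice domain (illustrative stand-in).
L = 15
interior = [(x, y) for x in range(1, L) for y in range(1, L)]
index = {v: i for i, v in enumerate(interior)}
n = len(interior)

# Q = transition matrix of simple random walk restricted to the interior; steps that
# leave the domain correspond to killing the walk.
Q = np.zeros((n, n))
for (x, y), i in index.items():
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        if (x + dx, y + dy) in index:
            Q[i, index[(x + dx, y + dy)]] = 0.25

# Green's function: G[i, j] = expected number of visits to vertex j before exiting,
# for a walk started at vertex i (the visit at time 0 is counted when i == j).
G = np.linalg.inv(np.eye(n) - Q)

v0, v = (L // 2, L // 2), (L // 2 + 2, L // 2)
print("G(v0, v ) =", round(G[index[v0], index[v]], 4))
print("G(v0, v0) =", round(G[index[v0], index[v0]], 4))
```

Lemma [l.dxgreen] expresses the excursion-measure mass weighted by visit counts in terms of exactly this quantity, and Corollary [c.hitball] then follows because the Green's function at the centre of a ball grows only logarithmically in the radius for two-dimensional walks.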
to simplify notation , instead of discussing conditional probabilities, we shall instead obtain a bound on \cap b\ne\varnothing] ] to is bounded from zero , except near and , and the result clearly holds when is bounded .let be the event that there is a such that , and on let be the least such .we assume , with no loss of generality , that , say , since otherwise the statement of the proposition is trivial .consider the ball . by our assumptions ,each component of has boundary entirely in or entirely in . on the event ,let be the connected component of intersecting ] .let denote the set of directed edges in whose initial vertex is in .let , where is as in section [ s.heconstruction ] and let denote the set of directed edges connecting vertices in to vertices in .the reason that the measures are useful here is because the total mass of is a martingale .one easy way to deduce this is by considering walks that hit the vertex before any other vertex in ( except for the initial vertex of the walk ) . given the part of such a walk up to its first visit to , the probability that it first exits using an edge from is precisely , which is just the probability that .alternatively , lemma [ l.hmart ] implies that is a martingale , because the total mass of is just a linear combination of the values of on the terminal vertices of .since is a nonnegative martingale , the optional stopping theorem implies that .\ ] ] the bound on ] crosses the annulus , there must be an arc among the connected components of ] ( where takes the value ) to the boundary of ( where takes the value ) .by considering vertices along this arc , we can find a vertex close to from which the ratio between and is bounded and bounded away from zero by universal constants ( because these quantities do not vary by more than a constant factor when moving from a vertex to its neighbor ) , or else there is an edge in . in the latter case ,clearly is bounded away from zero .consider therefore the case where such a exists . since random walk starting from probability bounded away from zero to complete a loop going around the annulus before exiting it , we have bounded away from zero .consequently , each of these summands is bounded away from zero .let be the ball centered at whose radius is half the distance from to . by corollary [ c.hitball ] ,the measure under of the set of paths hitting is bounded away from zero .since is bounded away from zero , a random walk started at any vertex in has probability bounded away from zero to exit in , by the harnack principle ( e.g. , in , lemma 5.2 ) .consequently , we see that also in this case is bounded away from zero on the event by an absolute constant .combining this with ( [ e.optional ] ) and ( [ e.dx0 ] ) establishes =o(r / r)^{{\hat{c}}} ] is a.s .the hausdorff limit of ] converge to ] is a.s .disjoint from both ] .we will begin by restricting our attention to a large compact set and using proposition [ p.hitprob ] to derive upper bounds for the probability that comes close to , where are chosen below so that is `` well exposed '' and so that the assumptions of proposition [ p.hitprob ] apply .let , and let be some compact subset of .for , let be the first such that is in the unbounded connected component of )\ge{\varepsilon}\} ] .( recall that the parameterization of is not by capacity , but is proportional to arclength . )now condition on ] . 
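An aside on the mechanism used in the last proof (and again in the proof of Proposition [p.uniftrans] below): since the total mass of the evolving excursion measure is a nonnegative martingale, say \((M_n)\), optional stopping together with Fatou's lemma gives, for any stopping time \(\tau\),
\[
  \mathbf E[M_\tau]\;\le\;\mathbf E[M_0],
  \qquad\text{hence}\qquad
  \mathbf P[A]\;\le\;\frac{\mathbf E[M_0]}{c}
  \quad\text{for any event }A\text{ on which }M_\tau\ge c>0 .
\]
The two inputs are therefore an upper bound on the initial mass \(\mathbf E[M_0]\) and a lower bound on the mass at the stopping time on the event of interest, which is exactly how the estimates above and below are organized.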
)let .assume that is sufficiently large so that ; indeed , how large is required to be can be determined from the constant in ( [ e.metriccompare ] ) .the metric comparison ( [ e.metriccompare ] ) implies that .this and the of imply that there is a path in \cup b(z_0,{\rho}{\varepsilon}/(2c))) ] the probability that ] , when it exists .note that we may pass to a subsequence of pairs so that ] . by construction , it is clear that )={\varepsilon} ] , and .let be a rational satisfying .now , the following a.s .statements will hold on the event and .first , by ( [ e.aprob ] ) , we have a.s . . note that \setminus{\tilde{\gamma}}[0,t] ] has two disjoint components , one containing ] [ on the event , .now let be any rational in .since is a simple path , there is some small such that . because is an arbitrary compact subset of , a.s . andin the above discussion was arbitrary , it follows that =\varnothing ] and is a.s .disjoint from \cup{\tilde{\gamma}}[t_1,t] ] a.s . for every pair of rationals , ] .since every sequence of with has a subsequence such that this holds , this also holds without passing to a subsequence .the proposition is thus established . since , is transient ; that is , .the following is a uniform version of this statement .[ p.uniftrans ] for every and there is a such that <{\varepsilon}\ ] ] if is sufficiently large .the reader may note the clear similarity with proposition [ p.hitprob ] .the main difference is that there the path considered was in the domain , whereas here the path is in the image under the conformal map , that is , in the upper half plane . indeed, the proof is quite similar .the following lemma about the excursion measures will be needed .[ l.phidx ] let .suppose that at each vertex a simple random walk is started , and the walk is stopped at the first time such that exits or .then the expected number of walks which stop when is .let .for , let denote a simple random walk on , where at each step the walk jumps with equal probability along each of the edges with and .let .let be the expected number of walks with such that , where .it clearly suffices to show that .( the difference from the is that the are reflected off of , rather than killed there . ) for a function on and , let } ( f(u)-f(v)) ] containing such that . nowlet ] , where the sum runs over all edges in . it is well known ( and simple to show ) that minimizes among functions mapping to and to . for each edge ] is an edge in , then there is a grid triangle with \subset{\partial}\triangle ] .consequently , it suffices to show that <{\varepsilon}/2.\ ] ] let be the least integer such that \supset{\gamma^\phi}[0,t_1] ] , we consider the excursion measure in with excursions started at the vertices in and terminating at vertices in .the total mass of this measure is a martingale .it suffices to show that this is very small at , but is bounded away from zero at on the event .we first do the estimate for .the expected number of excursions in from that hit is the same as the number of excursions in the domain which consists of the grid triangles intersecting starting at vertices in that hit , by symmetry , and this quantity is bounded by the number of excursions in the domain which is essentially starting at vertices in that hit . 
by symmetry again , this is the same as the expected number of excursions in the reverse direction .this quantity is , by lemma [ l.phidx ] .consequently , the expected number of excursions from in that cross is .it is not hard to see that when is sufficiently large , there will not be any grid edge crossing both and , for example , by considering the harmonic measure from of such an edge .now , , lemma 5.4 tells us that a random walk started in has probability to exit before exiting , uniformly as .( that lemma refers to the square grid , but the proof applies here as well .also , in that lemma the image conformal map is onto the unit disk , but this is simply handled by choosing an appropriate conformal homeomorphism from to . ) it remains to prove a bound from below for measure of excursions in from to , on the event .as in the proof of proposition [ p.hitprob ] , it suffices to find a vertex such that the discrete harmonic measure in ] is bounded from below .consider any vertex near .the continuous harmonic measure from of in the domain is bounded from below . by the convergence of discrete harmonic measure to continuous harmonic measure, when is large , a random walk started at will have probability bounded from below to hit before exiting .( specifically , while not close to the boundary , the random walk behaves like brownian motion , which is conformally invariant . once it does get close to the boundary , we may apply , lemma 5.4 , say . ) as in the proof of proposition [ p.hitprob ] , on we can find a vertex near where the discrete harmonic measure of is comparable to that of .hence , both are bounded away from zero .this completes the proof .proof of theorem [ t.heunifconv ] the theorem follows immediately from propositions [ p.locunif ] and [ p.uniftrans ] .we wish to thank richard kenyon and david wilson for inspiring and useful conversations . | the harmonic explorer is a random grid path . very roughly , at each step the harmonic explorer takes a turn to the right with probability equal to the discrete harmonic measure of the left - hand side of the path from a point near the end of the current path . we prove that the harmonic explorer converges to sle(4 ) as the grid gets finer . and . |
quantum computation and information theory are research areas which have developed amazingly fast .this rapid and accelerated development is due to the promises of qualitatively new modes of computation and communication based on quantum technologies , that in some cases are much more powerful than their classical counterparts , and also due to the impact that these new technologies can provide to society .advances occur in different directions and the theoretical proposals as well as the experimental achievements involves fundamental concepts that live at the heart of quantum mechanics .one of the most remarkable examples is the measurement - based quantum computation .while the standard quantum computation is based on sequences of unitary quantum logic gates , the measurement - based quantum computation is realized using only local conditioned projective measurements applied to a highly entangled state called the cluster state . as it is entirely based on local measurements instead of unitary evolution ,the computation is inherently irreversible in time .recently , this one - way quantum computation has attracted a lot of attention of the scientific community and has been studied considering : ( i ) various important aspects which might influence it , such as decoherence during the computation and ( ii ) novel schemes of implementation .in addition , since the technical requirements for the latter can be much simpler than those for the standard circuit model , it has been realized in several experiments . the noise process , that emerges from the inevitable interaction of the qubits with their environment , remains one of the major problems to be overcome before we shall be able to manufacture a functional quantum computer .the interaction of the qubits with the environment , which depends on the quantum computer architecture , is a crucial problem that is far from being well understood .however , it is known that this interaction can result in non - monotonical dynamics of the density matrix coherences and a complete understanding of this behavior is a matter of great importance . moreover , this non - monotonical behavior , is a subject of broad interest , since this peculiar property brings up unexpected dynamics for quantum fidelity as well as for quantum entanglement and quantum discord .when an open quantum system interacts with the outside world there are two main effects that have to be considered : the relaxation and the decoherence processes .the relaxation process is associated with an expected loss of energy of the initial state of the system which happens at the rate , where the time scale is known as the relaxation time scale of the system . 
on the other hand ,the decoherence process is associated with the reduction of purity of this physical state and takes place within a time scale .depending on the physical system one considers , the time scale can be much shorter than , making quantum computers more sensitive to decoherence than to relaxation process .this is exactly the situation we are going to address in this paper .based on the foregoing reasoning , we use an exact solvable model to calculate the dissipative fidelity dynamics in a linear measurement - based quantum computer ( mbqc ) composed by few qubits which interact collectively with a dephasing environment .although this kind of model does not describe relaxation processes , it does indeed adequately describe decoherence effects .the interaction hamiltonian is given by , where is the total azimuthal angular momentum in a system of qubits , and is the operator that acts only on the environmental degrees of freedom . we show that for any initial state given by an eigenstate superposition of the operator , whose eigenvalues are different in modulus , the fidelity exhibits a non - monotonical character . to be more precise , suppose we have a system whose initial state can be written in a suitable basis as where , with ; if the vectors that characterize the state are such that their eigenvalues are all equal in modulus , i.e. , if their s are all equal , the fidelity dynamics does not exhibit a non - monotonical shape , but if at least one vector that composes the state has a different from the other vectors we can observe a non - monotonical behavior of the fidelity dynamics .this condition reveals itself to be necessary for the fidelity to present an oscillatory shape , and , remarkably , it does not depend on the initial entanglement ( depending only on the initial configuration of the state of the system ) .furthermore , we study the implications of this oscillatory behavior to the mbqc where a sequence of local projective measurements are applied on the qubits to implement a quantum gate . as we shall see , we can take advantage of the revival times of the coherence of the cluster state to apply the projective measurements at times such that we get the best gate fidelity values .therefore , we can choose the best possible instants that produce the best possible results . in this sense , fast measurements can result worse than slow , but conveniently applied measurements .that is , under the action of a common dephasing environment , the mbqc can provide better computational fidelities even for slow measurement sequences .this result , as we will see bellow , is a natural consequence of the oscillatory behavior of the density - matrix coherences . to illustrate our finding we examine the fidelity of some single qubit quantum gates that are frequently found in the literature developed using the mbqc scheme .this manuscript is organized as follows . in sectionii we describe the exact dissipative dynamics of the -qubit system interacting with a common dephasing environment . 
in sectioniii we show the necessary condition for the non - monotonical behavior of the fidelity to take place and , in section iv , we present the implications of our results to the mbqc .we conclude in section v.consider the following hamiltonian of a system composed of qubits interacting with a common dephasing environment : where the first two terms account for the free evolution of the qubits and the environment , and the third term describes the interaction between them .the environment operators , and , are the customary creation and annihilation operators which follow the heisenberg s algebra = \delta_{k , k'} ] is the partition function , and , with representing the boltzmann constant and being the environment temperature .now , let us rewrite the hamiltonian of eq .( [ edd.1 ] ) as a sum of two terms , , where and . in the interaction picture , , that is , where .moreover , the time evolution operator is given by where is the dyson time - ordering operator .if we substitute eq .( [ edd.3 ] ) into eq .( [ edd.4 ] ) , we obtain , after some algebra , the following expression for the time evolution operator : \right\}}\nonumber \\ & \times & \exp\left\{i\sum_{k}\sum_{m , n } \vert g_{k}\vert^{2}\sigma^{\left(m\right)}_{z}\sigma^{\left(n\right)}_{z}s\left(\omega_{k},t\right)\right\ } , \label{edd.5}\end{aligned}\ ] ] where and . the dynamics of the -qubit system can be written in terms of the density operator of the combined system as follows . \label{edd.6}\ ] ] hence , the matrix elements of the reduced density matrix can be expressed as where refers to the -qubit state .here we have , i.e. , the are the eigenvalues of the pauli operator associated with the two - level qubit states and , and are the eigenvalues of the pauli operator associated with the respective dual space and .after some manipulations , we finally obtain the explicit dynamics of the elements of the density matrix of the -qubit system : ^{2}\right\ } \nonumber \\ & \times & \exp\left\{i\theta\left(t\right)\left[\left(\sum^{n}_{n=1}i_{n}\right)^{2 } - \left(\sum^{n}_{n=1}j_{n}\right)^{2}\right]\right\}\rho^{q}_{\left\{i_{n},j_{n}\right\}}\left(0\right ) , \nonumber \\ & & \label{edd.12}\end{aligned}\ ] ] where and with . in the continuum limit ,( [ edd.12 ] ) reads ^{2}\right\ } \nonumber \\ & \times & \exp\left\{i\theta\left(t\right)\left[\left(\sum^{n}_{n=1}i_{n}\right)^{2 } - \left(\sum^{n}_{n=1}j_{n}\right)^{2}\right]\right\}\rho^{q}_{\left\{i_{n},j_{n}\right\}}\left(0\right ) , \nonumber \\ & & \label{edd.15}\end{aligned}\ ] ] where and is the environment spectral density .this function has a cutoff frequency , whose value depends on the environment and for .here we should stress that since we are going to analyze average values involving the reduced density operator - the fidelity in our particular case - we can safely use this reduced state of the system in the interaction picture . in the schrdinger picturethere are additional terms oscillating with frequency in the off - diagonal elements of that operator .in our model we assume an ohmic spectral density , where is a dimensionless proportionality constant that characterizes the coupling strength between the system and the environment . 
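In the form used repeatedly below, equation ([edd.15]) says that in the product basis of \(\sigma_z\) eigenstates each matrix element only acquires a damping factor and a phase determined by the total eigenvalues. With \(I=\sum_{n=1}^{N}i_n\) and \(J=\sum_{n=1}^{N}j_n\) (each \(i_n,j_n=\pm1\)), the evolution can be written compactly as
\[
  \rho^{q}_{\{i_n\},\{j_n\}}(t)
  \;=\;
  e^{-\gamma(t,T)\,(I-J)^{2}}\;
  e^{\,i\,\theta(t)\,(I^{2}-J^{2})}\;
  \rho^{q}_{\{i_n\},\{j_n\}}(0).
\]
(The exponent \((I-J)^{2}\) in the damping factor is the one consistent with the factors \(e^{-4\gamma}, e^{-16\gamma}, e^{-36\gamma}, e^{-64\gamma}\) appearing in the fidelity expression of Section IV.) In particular, elements with \(I=J\) (including all populations) are untouched, elements with \(I=-J\neq0\) decay without oscillating, and elements with \(|I|\neq|J|\) both decay and oscillate, which is the origin of the condition discussed in the next section.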
substituting eq .( [ edd.17a ] ) into eq .( [ edd.16 ] ) and eq .( [ edd.17 ] ) we obtain and the result of the integration in eq .( [ edd.17b ] ) is also well - known and reads : where .it is easy to note , from eqs .( [ edd.15 ] ) and ( [ edd.17d ] ) , that the decoherence effects arising from thermal noise can be separated from those due to the vacuum fluctuations .this separation allows for an exam of different time scales present in the dynamics .the shortest time scale is determined by the cutoff frequency ( see eq .( [ edd.17c ] ) above ) , .the other natural time scale , , is determined by the thermal frequency ( see eq .( [ edd.17d ] ) above ) which is related to the relaxation of the off - diagonal elements of the density operator .with these two time scales we can define two different regimes of time : the _ thermal _ regime and the _ quantum _ regime .thermal effects will affect the -qubit system predominantly only for times whereas the quantum regime dominates over any time interval such that , when the quantum vacuum fluctuations contribute predominantly .besides , we can see from eqs . ( [ edd.17c ] ) and ( [ edd.17d ] ) that for a sufficiently high - temperature environment , i.e. , , the phase damping factor , which is the main agent responsible for the decoherence , behaves as causing an exponential decay of the off - diagonal elements of the density operator .moreover , as the phase factor implies an oscillation with frequency , this time evolution is always slightly underdamped .notice that one should never reach the overdamped regime since as is the shortest time scale in the problem it does not make any sense to make . in the low temperature limit ,when , the relaxation factor behaves as which leads the off - diagonal matrix elements to an algebraic decay of the form . in this case, what we have called above the thermal regime is only reached for very long times , .as we will show below , the quantum regime implies a very different dynamics of the fidelity and shows up peculiar results to the mbqc , where delayed measurements can result in better computation fidelities .in this section we are interested to know when the fidelity dynamics of an -qubit system , interacting collectively with a dephasing environment , will oscillate in time .we introduce , for the quantum regime , a necessary condition for the non - monotonical behavior to be present . here , since we always consider a pure state as our initial condition , the fidelity as a function of time , in the interaction picture , is given by the dynamics of qubits interacting with a common environment is strongly dependent upon the initial condition , and we will show that , for the quantum regime , the fidelity will always present a non - monotonical behavior when the -qubit system is written as a coherent superposition of eigenstates whose eigenvalues are different in modulus . as we can see from eq .( [ edd.15 ] ) , the second exponential term is responsible for the oscillations and it is identical to the unity when , i.e. , when . note that are the eigenvalues of the total pauli operator associated with the eigenstates .thus , if the initial state of the -qubit system is a coherent superposition of eigenstates of the operator , whose eigenvalues are equal in modulus , the condition is automatically satisfied and the fidelity dynamics does not oscillate at all . on the other hand , a state of qubits that is not written in this way , i.e. 
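Before turning to the fidelity itself, the map above is easy to explore numerically. The sketch below applies it to a small register; the functions gamma and theta used here are the standard vacuum-limit expressions obtained from an ohmic spectral density with exponential cutoff (up to overall factors) and should be read only as illustrative stand-ins for ([edd.17c]) and ([edd.17d]).

```python
import numpy as np
from itertools import product

# Collective-dephasing map: in the product sigma_z eigenbasis the element labelled by
# (i_1..i_N ; j_1..j_N), with i_n, j_n = +/-1, evolves as
#   rho_{IJ}(t) = exp[-(I-J)^2 gamma(t)] * exp[i theta(t) (I^2 - J^2)] * rho_{IJ}(0),
# where I = sum_n i_n and J = sum_n j_n.
alpha, wc = 0.25, 1.0                                       # illustrative coupling and cutoff
gamma = lambda t: 0.5 * alpha * np.log1p((wc * t) ** 2)     # vacuum dephasing (stand-in)
theta = lambda t: alpha * (wc * t - np.arctan(wc * t))      # bath-induced phase (stand-in)

def dephase(rho0, N, t):
    lam = np.array([sum(b) for b in product([+1, -1], repeat=N)])   # |0> -> +1, |1> -> -1
    damp = np.exp(-gamma(t) * (lam[:, None] - lam[None, :]) ** 2)
    phase = np.exp(1j * theta(t) * (lam[:, None] ** 2 - lam[None, :] ** 2))
    return damp * phase * rho0

# Illustrative two-qubit product state mixing collective eigenvalues of different modulus
# (+2 and 0), so its fidelity is non-monotonic even though the state is unentangled.
psi0 = np.kron(np.array([1, 1]) / np.sqrt(2), np.array([1, 0])).astype(complex)
rho0 = np.outer(psi0, psi0.conj())
for t in [0.0, 2.0, 5.0, 10.0, 15.0]:
    F = np.real(psi0.conj() @ dephase(rho0, 2, t) @ psi0)
    print(f"omega_c t = {t:5.1f}   fidelity = {F:.4f}")     # not monotone in t
```

With these illustrative choices the printed fidelity first drops and then partially revives, reproducing the qualitative behaviour described above for states that superpose collective eigenvalues of different modulus.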
, a state that is written as a coherent superposition of the eigenstates whose eigenvalues are not all equal in modulus ( e.g. , if exist at least one eigenvalue different from the others in modulus ) , has a fidelity which indeed oscillates in time . consequently , in the quantum regime , the condition is a necessary condition for the non - monotonical behavior of the fidelity to take place .it is important to emphasize that this behavior is intrinsic to the geometry of the initial condition , that is , it depends only on the basis vectors spanning the initial state , and this property is not correlated with the initial entanglement .a simple example is the two - qubit state given by although disentangled , the state is written as a coherent superposition of eigenstates of the operator whose eigenvalues have different moduli and , therefore , its fidelity oscillates in time following the equation below .\label{fd.16}\ ] ]from now on we will be concerned with the mbqc fidelity dynamics where the cluster states are subject to a dephasing channel .we will show how our previous result brings crucial consequences to the computational outcomes we can obtain , depending on the moment we decide to apply our set of projective measurements . to elucidate these aspects we analyze some common single qubit gates under the mbqc scheme . following reference ,an arbitrary rotation can be achieved in a chain of five disentangled qubits , where the qubits 2 to 5 are initially prepared in the state while the qubit 1 is prepared in some input state which is to be rotated .we adopt the most general form for the input state , where and are complex numbers that satisfy the relation . to obtain the desired cluster state , the state ( [ fd.17 ] )becomes entangled by an unitary operation ( see references and for further details ) where and .the state is rotated by measuring the qubits 1 to 4 one by one , and , at the same time that these measurements disentangle completely the cluster state ( [ ap2.5 ] ) , they also implement a single qubit gate , printing " the outcome on qubit 5 .an arbitrary rotation usually requires three angles in the euler representation , the euler angles , and it can be seen as a composition of three other rotations of the form : , where . in the mbqc scheme , the qubits measured in appropriately chosen bases .of course each measurement can possibly furnish two distinct results , up " ( if the qubit is projected onto the first state of ) or down " ( if the qubit is projected onto the other state of ) , and the choice of the basis to measure the subsequent qubit depends on the previous results . in our examples we suppose ( without incurring the risk of weakening our scheme ) that all measurements give the result _ up _ , which is not an event too rare , with the probability of occurrence . in this case , the first projector acting on the first qubit will be necessarily , irrespectively of the single qubit gate that we want to execute . however , the other three projectors , and that act on qubits 2 , 3 and 4 are dependent on the specific choice of the single qubit gate , and they can be specified by the three euler angles in terms of the first state of each one of the bases , and , respectively . 
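The same computation can be stated once and for all for a general pure initial state, which makes the necessary condition transparent. If \(|\psi(0)\rangle=\sum_m c_m|v_m\rangle\) with \(\bigl(\sum_{n}\sigma_z^{(n)}\bigr)|v_m\rangle=\lambda_m|v_m\rangle\), the dephasing map gives (a short derivation from the matrix-element evolution above)
\[
  F(t)\;=\;\langle\psi(0)|\rho^{q}(t)|\psi(0)\rangle
  \;=\;\sum_{m,n}|c_m|^{2}|c_n|^{2}\,
  e^{-\gamma(t,T)\,(\lambda_m-\lambda_n)^{2}}\,
  \cos\!\bigl[\theta(t)\,(\lambda_m^{2}-\lambda_n^{2})\bigr].
\]
Pairs with \(\lambda_m^{2}=\lambda_n^{2}\) contribute non-oscillating terms, while any pair with \(c_m c_n\neq0\) and \(|\lambda_m|\neq|\lambda_n|\) contributes the oscillating factor \(\cos[\theta(t)(\lambda_m^{2}-\lambda_n^{2})]\); specializing to the two-qubit product state of the example yields an expression of the form ([fd.16]).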
after the first measurement , the resulting four qubit state that evolves under the influence of the environment is given by , where since the state ( [ owqc.2 ] ) is written as a combination of eigenstates of the with eigenvalues which are different in modulus , the fidelity dynamics associated with this state is an oscillatory function of time and is written as +\frac{3}{8}{e^{-4\gamma\left(t , t\right)}}\cos\left(4\theta\left(t\right)\right ) + \left[\frac{1}{16}-\frac{1}{32}\left(\alpha^{*}\beta+\alpha\beta^{*}\right)^{2}\right]{e^{-36\gamma\left(t , t\right)}}\cos \left(12\theta\left(t\right)\right)\nonumber\\ & + & \left[\frac{1}{16}+\frac{1}{32}\left(\alpha^{*}\beta+\alpha\beta^{*}\right)^{2}\right]{e^{-4\gamma\left(t , t\right)}}\cos\left(12\theta\left(t\right)\right ) + \left[\frac{1}{128}-\frac{1}{128}\left(\alpha^{*}\beta+\alpha\beta^{*}\right)^{2}\right]{e^{-64\gamma\left(t , t\right)}}\nonumber\\ & + & \left[\frac{1}{8}-\frac{1}{32}\left(\alpha^{*}\beta + \alpha\beta^{*}\right)^{2}\right]{e^{-16\gamma{\left(t , t\right ) } } } + \frac{5}{128}\left(\alpha^{*}\beta+\alpha\beta^{*}\right)^{2}+\frac{35}{128}.\nonumber\\ \label{qwqc.3}\end{aligned}\ ] ] the state fidelity ( [ qwqc.3 ] ) will present an oscillatory behavior in the quantum regime ( ) where the exponential decay factor ( [ edd.17d ] ) plays a minor role compared with the oscillatory factor ( [ edd.17c ] ) and the quantum vacuum fluctuations contributes predominantly .although eq .( [ qwqc.3 ] ) is written in terms of an input state that is a function of complex coefficients , there is no loss of generality if we regard them as real numbers . in fig .[ statefidelity ] we show the state fidelity dynamics ( [ qwqc.3 ] ) in the quantum regime assuming and , i.e. , we choose the state as our input state . in the quantum regime for ,[ statefidelity ] shows that the state fidelity oscillates between maximum values , such as and , at times and ( where we have peaks ) and minimum values , such as and , at times like and ( where we have valleys ) . ) for in the quantum regime with and given by equations ( [ edd.17c ] ) and ( [ edd.17d ] ) respectively . as we can seethe oscillations is a characteristic feature in the quantum regime when the state is written as a coherent superposition of eigenstates of , whose eigenvalues are different in modulus .here we consider , , and . ] but this oscillatory behavior is not a privilege of the input states or , and more general input states will also present a qualitatively similar oscillatory behavior of the state fidelity dynamics , as we can see directly from eq .( [ qwqc.3 ] ) . with this in mind , what can we say about the fidelity of quantum computation in this peculiar regime ?the question above is relevant in the sense that in any realistic experimental realization , the construction of the state ( [ fd.17 ] ) , the unitary operation of entanglement , as well as the four subsequent projective measurements , are made in a finite time interval rather than instantaneously .hence , it is worth analyzing the computation when the system is subject to the deleterious effects caused by the environment . 
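Before adding the environment, the ideal (noiseless) version of this measurement pattern can be checked numerically. The numpy sketch below prepares the five-qubit linear cluster, measures qubits 1-4 in bases of the form {(|0> + e^{i a}|1>)/sqrt(2), (|0> - e^{i a}|1>)/sqrt(2)} assuming every outcome is the "+" one, and verifies that qubit 5 ends up rotated. Sign and ordering conventions for the Euler angles differ between references, so the particular assignment below is one consistent choice made for illustration rather than the exact bases B_2, B_3, B_4 of the text.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
def Rz(a): return np.diag([1, np.exp(-1j * a)])        # z-rotation by -a, up to a global phase

def kron(*ops): return reduce(np.kron, ops)

def cz(n, i, j):
    """Controlled-Z between qubits i and j of an n-qubit register (qubit 0 = leftmost)."""
    dim = 2 ** n
    U = np.eye(dim, dtype=complex)
    for b in range(dim):
        if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
            U[b, b] = -1
    return U

def measure_plus(state, n, q, alpha):
    """Project qubit q onto (|0> + e^{i alpha}|1>)/sqrt(2), drop that qubit, renormalize."""
    bra = np.array([1, np.exp(-1j * alpha)]) / np.sqrt(2)
    ops = [I2] * n
    ops[q] = bra.reshape(1, 2)
    out = kron(*ops) @ state
    return out / np.linalg.norm(out)

# Input state on qubit 1 (illustrative coefficients), qubits 2-5 prepared in |+>.
alpha_in, beta_in = 1 / np.sqrt(3), np.sqrt(2 / 3)
plus = np.array([1, 1]) / np.sqrt(2)
psi = kron(np.array([alpha_in, beta_in]), plus, plus, plus, plus).astype(complex)

# Entangle nearest neighbours with CZ gates to form the linear cluster.
for i in range(4):
    psi = cz(5, i, i + 1) @ psi

# Measure qubits 1-4 at angles (0, a1, a2, a3); with all "+" outcomes the logical output
# appears on qubit 5 as  H Rz(a3) H Rz(a2) H Rz(a1) H |psi_in>  (up to a global phase).
a1, a2, a3 = 0.7, -1.1, 0.4
state, nq = psi, 5
for a in [0.0, a1, a2, a3]:
    state = measure_plus(state, nq, 0, a)
    nq -= 1

target = H @ Rz(a3) @ H @ Rz(a2) @ H @ Rz(a1) @ H @ np.array([alpha_in, beta_in])
target /= np.linalg.norm(target)
print("gate fidelity of the ideal simulation:", round(abs(np.vdot(target, state)) ** 2, 10))
```

The same skeleton, with the appropriate angle choices and with byproduct-operator bookkeeping when outcomes other than "+" occur, underlies the NOT, Hadamard and phase-gate examples discussed below; the dephasing channel enters by replacing the unitary evolution between measurements with the map of Section II.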
here , we assume for simplicity that our initial state is given by ( [ owqc.2 ] ) ,that is , our state at is the cluster state ( [ ap2.5 ] ) prepared to perform the one - way quantum computation with the first measurement already applied on qubit .the subsequent measurements , and are supposed to be applied on qubits , , and , respectively , in two different scenarios : in one of them the subsequent measurements are applied in sequence and at different instants of time , i.e. , at time we apply the measurement , at time we apply the measurement and finally at time we apply the measurement ( between the times , and the system evolves according to eq .( [ edd.15 ] ) ) ; in the other scenario the measurements are applied in sequence ( ) but practically at the same time a time that we call ( see fig .[ illustrative_scheme ] ) .[ h ] we consider that the five - qubit state is already entangled and each qubit is ready to be measured . besides , the first qubit is also projected at . in ( a )we suppose that the three subsequent measurements are applied at different instants of time and the qubits evolve non - unitarilly between the measurements . in ( b ) , after wait a time gap , the other three subsequent measurements are made instantaneously at .the result of the computation is printed " in the fifth qubit state ( in green).,title="fig : " ] with these two different scenarios in mind we are able to show our main result . analyzing the implications of the state fidelity oscillations for the mbqc , we could verify that delayed measurements can in fact give better computational fidelity outcomes . as a matter of fact , there are time slots where we obtain better or worse computational fidelities defining periodic optimum waiting times . to clarify this assumption we analyze three different one qubit gate fidelities : the not gate , the hadamard gate and the phase gate .primarily , let us consider the first scenario where the measurements that characterize the specific one qubit gate are performed at different times and the state evolves non - unitarilly between measurements . to begin with , we consider a not gate acting on an input state given by .for this particular example the other three projectors are given by , and , and the result of these projections would be represented by the output state if all measurements had been made before the environment starts its deleterious effect .suppose , on the other hand , that these measurements are performed at later instants of time .let us assume , for instance , that the projections are performed around the first valley of the state fidelity ( see fig .[ statefidelity ] ) , within intervals starting at , that is , , and . obviously we will not obtain a good fidelity for this computation since the state fidelity is very small within this time interval and , in fact , the not gate fidelity is showing that the probability of the output state be the desired state is approximately . now ,if we consider that , and are performed at , and , i.e. 
, are performed around the first peak of the state fidelity , the not gate fidelity is .but we can obtain better results than these simply choosing another set of instants of time .if , for example , we set the controls to perform our measurements at slightly different times , choosing to apply the measurements in a smaller neighborhood around the first peak , we can get better results such as or for the set respectively at or .therefore , if we perform the measurements around the first valley we obtain a gate fidelity of whereas if we perform them at a later time , waiting to reach the surroundings of the fist peak , we obtain a much better gate fidelity .another possibility can be imagined if we consider that the set of measurements is performed separately at each of the first three consecutive minima of the state fidelity , that is , we apply the first projector at the first valley , the second projector at the second valley and the third projector at the third valley of the state fidelity ; in this case and the not gate fidelity assumes the value of . on the other hand , if we do exactly the contrary , choosing the instants of time of the first three consecutive peaks , we obtain a not gate fidelity of at . considering another example of one qubit gate acting on the same input state , we can analyze the effect of the cluster state s oscillatory behavior on the mbqc fidelity in another situation of interest .as is well known , the hadamard gate transforms the state into the state , so , in an idealized situation , we would expect the output state to assume the desired outcome . on the other hand ,since our cluster state is interacting with the dephasing channel , the outcome of the mbqc can be very different from the expected one , mainly if we choose the wrong instants of time to apply the projective measurements , as we will see .for this particular example , we have , , and , where . admitting that these measurements are performed around the first valley of the state fidelity within intervals at , as before , the hadamard gate fidelity is , while if we consider that , and , and the measurements are performed around the first peak , we have a hadamard gate fidelity of .nevertheless , a much better result can be obtained if ; in which case , the probability of the output state be the desired one is .now , consider again that the set of measurements is performed at the first three consecutive minima of the state fidelity ; again and the gate fidelity assumes the value . on the other hand , choosing the times of the first three consecutive peaks we get a hadamard gate fidelity of at .finally , we examine another one qubit gate example that is often found in the literature .the phase gate under the mbqc can be accomplished with , , and , and the input state is rotated , teleporting and printing the outcome in qubit number 5 whose output state acquires a relative phase assuming the idealized value .once again , taking into account the interaction of our quantum computer " with the dephasing environment , and applying the projectors at , and , we obtain a gate fidelity of while if we wait to apply the projectors at , and we get a gate fidelity of .however , applying the measurements in a smaller neighborhood of the first peak we can get a gate fidelity of at , and . 
considering that the set of measurements is performed at the first three consecutive minima of the state fidelity , at , and , the value of the gate fidelity is , while if the set of measurements is applied on the first three consecutive maxima we get for the gate fidelity . therefore , depending on the times we choose to perform our set of projective measurements we will obtain better or worse computational fidelity results .now , let us consider the other scenario where the subsequent measurements are performed in sequence but practically at the same time .again we consider the same three examples of gate fidelities .we start with the not gate fidelity dynamics and in fig .[ fig1 ] we show this gate fidelity for and as a function of time in the quantum regime . as in our previous examples, we observe that there are times that maximize the value of the gate fidelity and times that minimize it .it is clear that if all measurements are performed at the computation fidelity is , but if there is a gap between the first and the three subsequent measurements , then there is an optimum value of the time gap . in the example illustrated in fig .[ fig1 ] , if is greater than ( where the gate fidelity is ) the best gate fidelity is obtained for , when it reaches again .if we apply the same gate operation at later times like or , we still get a gate fidelity better than . however ,if we apply this operation at we obtain a gate fidelity of , showing that fast measurements is not a warranty of good mbqc results .when the three subsequent measurements are performed almost simultaneously .we consider , , and . ]considering a hadamard gate fidelity dynamics , we can see from fig .[ fig2 ] that a input state is rotated to the output state with probability greater than at times like , or while it is rotated to the same output state with probability of less than at times like , or . when the three subsequent measurements are performed almost simultaneously .we choose the state to be rotated to .we consider , , and . ][ h ] for the quantum regime .we choose the state to be rotated to .we consider , , and ., title="fig : " ] in fig .[ fig3 ] we can see the phase gate fidelity dynamics and observe once again that the oscillatory behavior presented by the cluster state interacting with the specific kind of quantum channel considered in this paper produces instants of time that optimize the value of the computation compared with times that do exact the opposite . 
again we see that at times like , or we have a gate fidelity of , and , respectively , while at times such as , or we have a gate fidelity of , and .it is important to emphasize that ultra fast measurements , which have to be performed in the very short bath correlation time scale , can be produced with current technology .these are the basis of the dynamical decoupling techniques that are applied to beat the decoherence process .furthermore , even in this very short time scale , the time that each measurement is applied can be very precise , as we can see , for example , in the experimental realization of the uhrig dynamical decoupling , the carr - purcell - meiboom - gill - style multi - pulse spin echo , and others .this implies that the scenario studied in the manuscript is very realistic and that any mbqc realized with ultra fast measurements needs to account for the oscillatory behavior of the dynamics .we study the exact dynamics of an -qubit system interacting with a common dephasing environment and we introduce a necessary condition for the system fidelity to present a non - monotonical behavior .our approach reveals that this characteristic does not depend on the initial quantum entanglement and , in fact , is a property connected with the geometry of the state .actually , for any initial state given by a superposition of eigenstates of the total pauli operator , the fidelity exhibits a non - monotonical character if at least one of the eigenvalues of the components differs from the others .we show that this behavior of the fidelity brings crucial implications to the mbqc , that is , we show that , under the action of a common dephasing environment , this non - monotonical time dependence can provide us with appropriate time intervals for the preservation of better computational fidelities .we have illustrated our findings by examining the fidelity of a not , a hadamard and a phase quantum gates realized via mbqc .r. raussendorf and h. j. briegel , phys .lett . * 86 * , 5188 ( 2001 ) . h. j. briegel , d. e. browne , w. dr , r. raussendorf and m. van den nest , nature phys . * 05 * , 19 - 26 ( 2009 ) .h. j. briegel and r. raussendorf , phys .lett . * 86 * , 910 ( 2001 ) .d . gross and j. eisert , phys .lett . * 98 * , 220503 ( 2007 ) ; d. gross , j. eisert , n. schuch , and d. perez - garcia , phys .a * 76 * , 052315 ( 2007 ) ; d .gross and j. eisert , phys .a * 82 * , 040303 ( 2010 ) .cai , w. dr , m. van den nest , a. miyake , and h. j. briegel , phys .lett . * 103 * , 050503 ( 2009 ) .t. morimae , phys .a * 81 * , 060307(r ) ( 2010 ) ; d. bacon , s. t. flammia , phys .a * 82 * , 030303(r ) ( 2010 ) ; y. s. weinstein , phys . rev .a * 79 * , 052325 ( 2009 ) ; m. s. tame , m. paternostro , m. s. kim , and v. vedral , phys .a * 72 * , 012319 ( 2005 ) . c. m. dawson , h. l. haselgrove , and m. a. nielsen , phys .a * 73 * , 052306 ( 2006 ) ; o. guhne , f. bodoky , and m. blaauboer , phys . rev .a * 78 * , 060301 ( 2008 ) ; d. cavalcanti , r. chaves , l. aolita , l. davidovich , a. acin , phys .103 * , 030502 ( 2009 ) ; g. gilbert and y. s. weinstein , quantum information and computation viii .edited by e. j. donkor , a. r. pirich , and h. e. brandt , proceedings of the spie , * 7702 * , pp .77020j-77020j-9 ( 2010 ) ; y. s. weinstein and g. gilbert , j. of mod . opt .* 19 * , 1961 ( 2010 ) ; l. aolita , d. cavalcanti , r. chaves , et al . ,phys . rev .a * 82 * , 032317 ( 2010 ) ; r. chaves and f. de melo , phys .a * 84 * , 022324 ( 2011 ) .m. a. 
nielsen , phys .* 93 * , 040503 ( 2004 ) ; n. c. menicucci , p. van loock , m. gu , _et al_. , phys .* 97 * , 110501 ( 2006 ) ; j .-cai , a. miyake , w. dr , h. j. briegel , phys .a * 82 * , 052309 ( 2010 ) .f. meier , j. levy , and d. loss , phys .lett . * 90 * , 047901 ( 2003 ) ; p. walther , et .al , nature * 434 * , 169 ( 2005 ) ; n. kiesel , _ et al_. , phys .* 95 * , 210502 ( 2005 ) ; y. tokunaga , t. yamamoto , m. koashi , and n. imoto , phys .a * 74 * , 020301 ( 2006 ) ; r. prevedel , m. s. tame , a. stefanov , _et al_. , phys .lett . * 99 * , 250503 ( 2007 ) ; c. y. lu , et .al , nat . phys . * 3 * , 91 ( 2007 ) ; r. ceccarelli , g. vallone , f. demartini , p. mataloni , a. cabello , phys .* 103 * , 160401 ( 2009 ) ; r. kaltenbaek ,_ et al_. , nat . phys . * 6 * , 850 ( 2010 ) .b . gao , _et al_. nature photonics * 5 * , 117 ( 2011 ). j. h. reina , l. quiroga and n. f. johnson , phys .a * 65 * , 032326 ( 2002 ) .h. -p . breuer and f. petruccione ,_ the theory of open quantum systems _ ( oxford university press , oxford , new york , 2002 ) .b. bellomo , r. lofranco , and g. compagno , phys .lett . * 99 * , 160502 ( 2007 ) . m. m. wolf , j. eisert , t. s. cubitt , and j. i. cirac , phys . rev. lett . * 101 * , 150402 ( 2008 ) .breuer , e. -m .laine , and j. piilo , phys .lett . * 103 * , 210401 ( 2009 ) .laine , j. piilo , and h. -p .breuer , phys .a * 81 * , 062115 ( 2010 ) .b. vacchini and h. -p .breuer , phys .a * 81 * , 042103 ( 2010 ) .a. k. rajagopal , a. r. usha devi , and r. w. rendell , phys .a * 82 * , 042107 ( 2010 ) .z. he , j. zou , b. shao , and s. kong , j. phys .b : at . mol .phys * 43 * , 115503 ( 2010 ) .x. xiao , m. fang , and y. li , j. phys .b : at . mol . opt .phys * 43 * , 185505 ( 2010 ) .a. rivas , s. f. huelga , and m. b. plenio , phys .* 105 * , 050403 ( 2010 ) .f. f. fanchini , t. werlang , c. a. brasil , l. g. e. arruda , and a. o. caldeira , phys .a * 81 * , 052107 ( 2010 ) .m. a. nielsen and i. l. chuang , _ quantum computation and quantum information _( cambridge university press , cambridge , england , 2000 ) .a. j. legget , s. chakravarty , a. t. dorsey , m. p. a. fisher , a. garg and w. zwerger , rev .* 59 * , 1 ( 1987 ) .s. m. clark , et al .lett . * 102 * , 247601 ( 2009 ) ; j. f. du , nature * 461 * , 1265 ( 2009 ) ; g. de lange , et al .science * 330 * , 60 ( 2010 ) .m. j. biercuk , et al .a * 79 * , 062324 ( 2009 ) .l. viola and s. lloyd , phys .a * 58 * , 2733 ( 1998 ) ; l. viola , e. knill , and s. lloyd , phys .* 82 * , 2417 ( 1999 ) ; l. viola , s. lloyd , and e. knill , phys .lett . * 83 * , 4888 ( 1999 ) . | measurement - based quantum computation is an efficient model to perform universal computation . nevertheless , theoretical questions have been raised , mainly with respect to realistic noise conditions . in order to shed some light on this issue , we evaluate the exact dynamics of some single qubit gate fidelities using the measurement - based quantum computation scheme when the qubits which are used as resource interact with a common dephasing environment . we report a necessary condition for the fidelity dynamics of a general pure -qubit state , interacting with this type of error channel , to present an oscillatory behavior and we show that for the initial canonical cluster state the fidelity oscillates as a function of time . 
this state fidelity oscillatory behavior brings significant variations to the values of the computational results of a generic gate acting on that state depending on the instants we choose to apply our set of projective measurements . as we shall see , considering some specific gates that are frequently found in the literature , neither fast application of the set of projective measurements necessarily implies high gate fidelity , nor slow application thereof necessarily implies low gate fidelity . our condition for the occurrence of the fidelity oscillatory behavior shows that the oscillation presented by the cluster state is due exclusively to its initial geometry . other states that can be used as resources for measurement - based quantum computation can present the same initial geometrical condition . therefore , it is very important for the present scheme to know when the fidelity of a particular resource state will oscillate in time and , if this is the case , what are the best times to perform the measurements . |
inspired from fredholm integral equations , fredholm learning algorithms are designed recently for density ratio estimation and semi - supervised learning .fredholm learning can be considered as a kernel method with data - dependent kernel .this kernel usually is called as fredholm kernel , and can naturally incorporate the data information . although its empirical performance has been well demonstrated in the previous works , there is no learning theory analysis on generalization bound and learning rate .it is well known that generalization ability and learning rate are important measures to evaluate the learning algorithm . in this paper , we focus on this theoretical theme for regularized least square regression with fredholm kernel . in learning theory literature , extensive studies have been established for least square regression with regularized kernel methods , e.g. , .although the fredholm learning in also can be considered as a regularized kernel method , there are two key features : one is that fredholm kernel is associated with the inner " kernel and the outer " kernel simultaneously , the other is that for the prediction function is double data - dependent .these characteristics induce the additional difficulty on learning theory analysis . to overcome the difficulty of generalization analysis, we introduce novel stepping - stone functions and establish the decomposition on excess generalization error .the generalization bound is estimated in terms of the capacity conditions on the hypothesis spaces associated with the inner " kernel and the outer " kernel , respectively .in particular , the derived result implies that fast learning rate with can be reached with proper parameter selection , where is the number of labeled data .to best of our knowledge , this is the first discussion on generalization error analysis for learning with fredholm kernel .the rest of this paper is organized as follows .regression algorithm with fredholm kernel is introduced in section [ section2 ] and its generalization analysis is presented in section [ section3 ] .the proofs of main results are listed in section [ section4 ] .simulated examples are provided in section [ section5 ] and a brief conclusion is summarized in section [ section6 ] .let be a compact input space and ] such that where is a positive constant independent of .[condition2 ] this approximation condition relies on the regularity of , and has been investigated extensively in . to get tight estimation, we introduce the projection operator it is a position to present the generalization bound . under assumptions [ condition1 ] and [ condition2 ] , there exists where is a positive constant independent of [ theorem1 ] the generalization bound in theorem [ theorem1 ] depends on the capacity condition , the approximation condition , the regularization parameter , and the number of labeled data . in particular , the labeled data is the key factor on the excess risk without the additional assumption on the marginal distribution .this observation is consistent with the previous analysis for semi - supervised learning . 
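Before turning to learning rates, it may help to recall the shape of the estimator ([algorithm1]) under discussion. The display below is a sketch reconstructed from the notation \(L_{W,\mathbf x}\), \(L_W\) and \(\|\cdot\|_K\) used in the proofs and from the Fredholm learning literature cited in the introduction; the paper's own normalization may differ in details.
\[
  (L_{W,\mathbf x}f)(x)=\frac{1}{|\mathbf x|}\sum_{x_j\in\mathbf x}W(x,x_j)f(x_j),
  \qquad
  (L_{W}f)(x)=\int_{\mathcal X}W(x,u)f(u)\,d\rho_{\mathcal X}(u),
\]
\[
  f_{\mathbf z}=\arg\min_{f\in\mathcal H_K}
  \Bigl\{\frac{1}{\ell}\sum_{i=1}^{\ell}\bigl((L_{W,\mathbf x}f)(x_i)-y_i\bigr)^{2}
  +\lambda\|f\|_{K}^{2}\Bigr\},
\]
with the final predictor being the double data-dependent function \(L_{W,\mathbf x}f_{\mathbf z}\). Here \(W\) is the "outer" kernel, \(K\) is the "inner" kernel whose RKHS \(\mathcal H_K\) carries the regularizer, and \(\ell\) is the number of labeled examples.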
to understand the learning rate of fredholm regression , we present the following result where is chosen properly .under assumptions [ condition1 ] and [ condition2 ] , for any , with confidence , there exists some positive constant such that where [ theorem2 ] theorem [ theorem2 ] tells us that fredholm regression has the learning rate with polynomial decay .when , there exists some constant such that with confidence , where ; \\ \frac{2\beta}{s+2\beta+s\beta } , & \beta\in(\frac{2}{2+s},+\infty ] .\\ \end{array } \right.\end{aligned}\ ] ] and the rate is derived by setting ; \\l^{-\frac{2}{s+2\beta+s\beta } } , & \beta\in(\frac{2}{2+s},+\infty ] .\\ \end{array } \right.\end{aligned}\ ] ] this learning rate can be arbitrarily close to as tends to zero , which is regarded as the fastest learning rate for regularized regression in the learning theory literature .this result verifies the lfk in ( [ algorithm1 ] ) inherits the theoretical characteristics of least square regularized regression in rkhs and in data dependent hypothesis spaces .we first present the decomposition on the excess risk , and then establish the upper bounds of different error terms . according to the definitions of , we can get the following error decomposition .[ proposition1 ] for defined in ( [ algorithm1 ] ) , there holds where and * proof * : by introducing the middle function , we get \\ & & + \mathcal e_{\mathbf z}(l_{w,\mathbf{x}}f_{\lambda})-\mathcal e(l_{w,\mathbf{x}}f_{\lambda})+\mathcal e(l_{w,\mathbf{x}}f_{\lambda})-\mathcal e(l_{w}f_{\lambda } ) + \mathcal e(l_{w}f_{\lambda})-\mathcal e(f_\rho)+\lambda\|f_{\lambda}\|_k^2 \\ & \leq & e_1+e_2+e_3+d(\lambda)\end{aligned}\ ] ] where the last inequality follows from the definition . this completes the proof. in learning theory , are called the sample error , which describe the difference between the empirical risk and the expected risk . is called the hypothesis error which reflects the divergence of expected risks between the data independent function and data dependent function .we introduce the concentration inequality in to measure the divergence between the empirical risk and the expected risk .let be a measurable function set on .assume that , for any , and for some positive constants .if for some and , for any , then there exists a constant such that for any , with confidence at least . [ lemma1 ] to estimate , we consider the function set containing for any , .the definition in ( [ algorithm1 ] ) tells us that .hence , with and .[ proposition2 ] under assumption [ condition1 ] , for any , with confidence .* proof * : for , denote for any , moreover , for any , there exists this relation implies that where the last inequality from assumption [ condition1 ] . applying the above estimates to lemma [ lemma1 ], we derive that with confidence . considering with , we obtain the desired result . [ proposition3 ] under assumption 1 , with confidence , there holds where is a positive constant independent of . * proof * : denote from the definition , we can deduce that with .for , define it is easy to check that for any then , for any , there exists then from assumption [ condition1 ] , combining ( [ p11])-([p33 ] ) with lemma [ lemma1 ] , we get with confidence considering , we get the desired result. 
the following concentration inequality with values in hilbert space can be found in , which is used in our analysis .let be a hilbert space and be independent random variable on with values in .assume that almost surely .let be independent random samples from .then , for any , holds true with confidence .now we turn to estimate , which reflects the affect of inputs to the regularization function .[ proposition ] for any , with confidence , there holds * proof * : note that denote , which is continuous and bounded function on .then and we can deduce that and . from lemma [ lemma2 ] , for any , there holds with confidence combining ( [ p111 ] ) and ( [ p222 ] ) , we get with confidence , then , the desired result follows from . * proof of theorem 1 : * combining the estimations in propositions 1 - 4 , we get with confidence , considering , for , we have with confidence ,\end{aligned}\ ] ] where is a constant independent of .* proof of theorem 2 : * when setting , we obtain .then , theorem [ theorem1 ] implies that when setting , we get .then , with confidence this complete the proof of theorem 2 .to verify the effectiveness of lfk in ( [ algorithm1 ] ) , we present some simulated examples for the regression problem . the competing method is support vector machine regression ( svm ) , which has been used extensively used in machine learning community ( https://www.csie.ntu.edu.tw/ cjlin / libsvm/ ) .the gaussian kernel is used for svm .for lfk in ( [ algorithm1 ] ) , we consider the following `` inner '' and `` outer '' kernels : * lfk1 : and .* lfk2 : and . * lfk3 : and .here the scale parameter belongs to ] for lfk and svm .these parameters are selected by 4-fold cross validation in this section .the following functions are used to generate the simulated data : \\ f_2(x)&=&xcos(x),~~x\in[0,10]\\ f_3(x)&=&\min(2|x|-1,1),~~x\in[-2,2]\\ f_4(x)&=&sign(x),~~x\in[-3,3].\\\end{aligned}\ ] ] note that is highly oscillatory , is smooth , is continuous not smooth , and is not even continuous .these functions have been used to evaluate regression algorithms in .c|ccccc function & number & svm & lfk1 & lfk2 & lfk3 + & 50 & & & & + & 300 & & & & + & 50 & & & & + & 300 & & & & + & 50 & & & & + & 300 & & & & + & 50 & & & & + & 300 & & & & + [ tab1 ] in our experiment , gaussian noise is added to the data respectively . in each test , we first draw randomly 1000 samples according to the function and noise distribution , and then obtain a training set randomly with sizes respectively .three hundred samples are selected randomly as the test set .mean squared error _ ( mse ) is used to evaluate the regression results on synthetic data . to make the results more convincing, each test is repeated 10 times .table [ tab1 ] reports the average mse and _ standard deviation _ ( std ) with 50 training samples and 300 training samples respectively .furthermore , we study the impact of the number of training samples on the final regression performance .figure 1 shows the mse for learning with numbers of training samples .these results illustrate that lfk has competitive performance compared with svm .this paper investigated the generalization performance of regularized least square regression with fredholm kernel .generalization bound is presented for the fredholm learning model , which shows that the fast learning rate with can be reached . in the future , it is interesting to investigate the leaning performance of ranking with fredholm kernel . 
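The simulation protocol described in section 5 (draw 1000 points, train on 50 or 300 of them, test on 300, add Gaussian noise, and average the mean squared error over 10 repetitions) can be mocked up as below. The noise level, the bandwidth, and the use of a plain Gaussian-kernel ridge learner as a stand-in are assumptions; the paper's own experiments use LIBSVM's SVR as the competitor, the three Fredholm kernels LFK1-LFK3, and 4-fold cross validation for parameter selection, and the definition of the first test function did not survive extraction, so it is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# target functions f_2, f_3, f_4 as given in the text (f_1 is omitted, see above)
targets = {
    "f2": (lambda x: x * np.cos(x), 0.0, 10.0),
    "f3": (lambda x: np.minimum(2 * np.abs(x) - 1, 1), -2.0, 2.0),
    "f4": (np.sign, -3.0, 3.0),
}

def krr_fit_predict(x_tr, y_tr, x_te, sigma=0.5, lam=1e-3):
    """Plain Gaussian-kernel ridge regression, used here only as a stand-in learner."""
    K = np.exp(-(x_tr[:, None] - x_tr[None, :]) ** 2 / (2 * sigma ** 2))
    coef = np.linalg.solve(K + lam * len(x_tr) * np.eye(len(x_tr)), y_tr)
    K_te = np.exp(-(x_te[:, None] - x_tr[None, :]) ** 2 / (2 * sigma ** 2))
    return K_te @ coef

for name, (f, lo, hi) in targets.items():
    for n_train in (50, 300):
        mses = []
        for _ in range(10):                              # 10 repetitions, as in the text
            x = rng.uniform(lo, hi, size=1000)
            y = f(x) + rng.normal(0.0, 0.1, size=1000)   # assumed noise level
            x_tr, y_tr = x[:n_train], y[:n_train]
            x_te, y_te = x[-300:], y[-300:]
            pred = krr_fit_predict(x_tr, y_tr, x_te)
            mses.append(np.mean((pred - y_te) ** 2))
        print(f"{name}, n={n_train}: mse = {np.mean(mses):.4f} +/- {np.std(mses):.4f}")
```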
the authors would like to thank prof . dr . l. q. li for his valuable suggestions . this work was supported by the national natural science foundation of china ( grant no . 11671161 ) and the fundamental research funds for the central universities ( program nos . 2662015py046 , 2014py025 ) . l. shi , y. feng , and d. x. zhou , `` concentration estimates for learning with -regularizer and data dependent hypothesis spaces , '' _ appl . _ , no . 2 , pp . 286 - 302 , 2011 . b. zou , r. chen , and z. b. xu , `` learning performance of tikhonov regularization algorithm with geometrically beta - mixing observations , '' _ journal of statistical planning and inference _ , pp . 1077 - 1087 , 2011 . | learning with fredholm kernel has attracted increasing attention recently since it can effectively utilize the data information to improve prediction performance . despite rapid progress on theoretical and experimental evaluations , its generalization analysis has not been explored in the learning theory literature . in this paper , we establish the generalization bound of least square regularized regression with fredholm kernel , which implies that the fast learning rate can be reached under mild capacity conditions . simulated examples show that this fredholm regression algorithm can achieve satisfactory prediction performance . fredholm learning , generalization bound , learning rate , data dependent hypothesis spaces |
with the atmospheres of exoplanets now accessible to astronomical scrutiny , there is motivation to understand the basic physics governing their structure .since highly - irradiated exoplanets are most amenable to atmospheric characterization , a growing body of work has focused on hot earths / neptunes / jupiters , ranging from analytical models to simulations of atmospheric circulation ( e.g. , ) , in one , two and three dimensions ( 1d , 2d and 3d ) . the path towards full understanding requires the construction of a hierarchy of theoretical models of varying sophistication . in this context ,analytical models have a vital role to play , since they provide crisp physical insight and are immune to numerical issues ( e.g. , numerical viscosity , sub - grid physics , spin - up ) .atmospheres behave like heat engines .sources of forcing ( e.g. , stellar irradiation ) induce atmospheric motion , which are eventually damped out by sources of friction ( e.g. , viscosity , magnetic drag ) .it is essential to understand atmospheric dynamics , as it sets the background state of velocity , temperature , density and pressure that determines the spectral and temporal appearance of an atmosphere .it also determines whether an atmosphere attains or is driven away from chemical , radiative and thermodynamic equilibrium .even if an atmosphere is not in equilibrium , it must be in a global state of equipoise sources of forcing and friction negate one another ( e.g. , ) . in the present study , this is our over - arching physical goal : to analytically derive the global , steady state of an exoplanetary atmosphere ( the exoclime " ) in the presence of forcing , friction , rotation and magnetic fields .-plane ( pseudo - spherical geometry ) and in full spherical geometry .the key quantity to solve for is the meridional ( north - south ) velocity , from which the zonal ( east - west ) velocity , shallow water height perturbation and magnetic field perturbations straightforwardly follow . ][ fig : schematic ] shallow water models are a decisive way of studying exoclimes .the term shallow water " comes from their traditional use in meteorology and oceanography and refers to the approximation that the horizontal extent modeled far exceeds the vertical / radial one .they have been used to understand the solar tacocline , the atmospheres of neutron stars and exoplanetary atmospheres .our over - arching technical goal is to perform a comprehensive theoretical survey across dimensionality ( 1d and 2d ) , geometry ( cartesian , pseudo - spherical and spherical ) and sources of friction ( molecular viscosity , rayleigh drag and magnetic drag ) for forced , damped , rotating , magnetized systems . to retain algebraic tractability , we study the limits of purely vertical / radial or horizontal / toroidal background magnetic fields .our main finding is that the global structure of exoplanetary atmospheres is largely controlled by a single , key parameter at least in the shallow water approximation which we denote by . in the hydrodynamic limit , directly related to the rossby and prandtl numbers . in forced ,damped atmospheres with non - ideal mhd , encapsulates the effects of molecular viscosity , rayleigh drag , forcing , magnetic tension and magnetic drag . due to the technical nature of the present study, we find it useful to concisely summarize a set of terminology that we will use throughout the paper . 
from a set of perturbed equations, we will obtain wave solutions for the velocity , height and magnetic field perturbations .we assume that the waves have a temporal component of the form , where is the wave frequency .generally , the wave frequency has real and imaginary components ( ) , which describe the oscillatory and growing or decaying parts of the wave , respectively . for each model, we will obtain a pair of equations describing and , which we call the oscillatory dispersion relation " and the growth / decay dispersion relation " , respectively. we will refer to them collectively as dispersion relations " .a balanced flow " corresponds to the situation when .the steady state " of an atmosphere has .we will refer to molecular viscosity , rayleigh drag and magnetic drag collectively as friction " .if we are only considering molecular viscosity and rayleigh drag , we will use the term hydrodynamic friction " .we will use the terms friction " and damping " interchangeably , but not qualify the latter as being hydrodynamic or mhd in nature .we will refer to systems as being free " if forcing and friction are absent .similarly , we will use the terms fast " and rapid " interchangeably when referring to rotation .the zeroth order effect of including a magnetic field introduces a term each into the momentum and magnetic induction equations we will refer to the effects of these ideal - mhd terms as magnetic tension " .when non - ideal mhd is considered , we include a resistive term in the induction equation that mathematically resembles diffusion we will refer to its influence as magnetic drag " . we will examine an approximation for including the effects of non - constant rotation , across latitude , on a cartesian grid , known traditionally as the -plane approximation " ( e.g. , and references therein ) .note that this is _ not _ the same as a departure from solid body rotation .rather , it is an approximation to include the dynamical effects of sphericity .there are two flavors of this approximation : the simpler version solves for waves that are oscillatory in both spatial dimensions ( simply called -plane " ) , while a more sophisticated version allows for an arbitrary functional dependence in latitude ( equatorial -plane " ) .since the equatorial -plane treatment more closely approximates the situation on a sphere , we will refer to it as being pseudo - spherical " . in constructing the mathematical machinery in this paper, we often have to evaluate long , complex expressions . to this end, we find it convenient to separate out the real and imaginary components using a series of separation functions " , which we denote by , , , , and .the definitions of these dimensionless quantities vary from model to model .we note that also functions like a generalized friction that includes magnetic tension .in [ sect : equations ] , we state the governing equations and derive their linearized , perturbed forms . in [ sect:1d ] , we review and extend the 1d models .we extend our models to 2d cartesian geometry in [ sect:2d ] and begin to consider the effects of sphericity in [ sect:2d_beta ] . 
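Because the displayed equations in this part of the text did not survive extraction, the standard textbook form of the beta-plane approximation is restated here purely for orientation; it is general background (with Omega the rotation rate, a the planetary radius and phi_0 the reference latitude), not a reconstruction of this paper's specific equations.

```latex
% standard beta-plane expansion of the Coriolis parameter about a reference latitude
f = 2\Omega \sin\varphi \;\approx\; f_0 + \beta y ,
\qquad
f_0 = 2\Omega \sin\varphi_0 ,
\qquad
\beta = \left.\frac{\mathrm{d}f}{\mathrm{d}y}\right|_{\varphi_0} = \frac{2\Omega \cos\varphi_0}{a} .
```

For the equatorial beta-plane one sets phi_0 = 0, so that f is approximately beta*y with beta = 2*Omega/a, which is the equatorial regime referred to above.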
in [ sect : spherical ] , we present results for 2d models in spherical geometry . applications to exoplanetary atmospheres are described in [ sect : apps ] . we summarize our findings in [ sect : discussion ] . table 1 lists the most commonly used quantities and the symbols used to denote them . table 2 compares our study with previous analytical work . table 3 summarizes the salient lessons learnt from studying each shallow water model . figure [ fig : schematic ] provides a graphical summary of our technical achievements . [ tab : symbols ] commonly used symbols . note : all 2d models include rotation . hd : hydrodynamic . mhd : magnetohydrodynamic . some useful , commonly used expressions include the expression involving makes the approximation that . | within the context of exoplanetary atmospheres , we present a comprehensive linear analysis of forced , damped , magnetized shallow water systems , exploring the effects of dimensionality , geometry ( cartesian , pseudo - spherical and spherical ) , rotation , magnetic tension and hydrodynamic and magnetic sources of friction . across a broad range of conditions , we find that the key governing equations for atmospheres and quantum harmonic oscillators are identical , even when forcing ( stellar irradiation ) , sources of friction ( molecular viscosity , rayleigh drag and magnetic drag ) and magnetic tension are included . the global atmospheric structure is largely controlled by a single , key parameter that involves the rossby and prandtl numbers . this near - universality breaks down when either molecular viscosity or magnetic drag acts non - uniformly across latitude or a poloidal magnetic field is present , suggesting that these effects will introduce qualitative changes to the familiar chevron - shaped feature witnessed in simulations of atmospheric circulation . we also find that hydrodynamic and magnetic sources of friction have dissimilar phase signatures and affect the flow in fundamentally different ways , implying that using rayleigh drag to mimic magnetic drag is inaccurate . we exhaustively lay down the theoretical formalism ( dispersion relations , governing equations and time - dependent wave solutions ) for a broad suite of models . in all situations , we derive the steady state of an atmosphere , which is relevant to interpreting infrared phase and eclipse maps of exoplanetary atmospheres . we elucidate a pinching effect that confines the atmospheric structure to be near the equator . our suite of analytical models may be used to decisively develop physical intuition and as a reference point for three - dimensional , magnetohydrodynamic ( mhd ) simulations of atmospheric circulation . |
in the context of image analysis and image processing a variety of generative models for digitalized picture arrays in the two dimensional plane have been proposed . out of the different techniques adopted for various models ,grammar based techniques utilize the rich theory of formal grammars and languages and develop array grammars generating two dimensional languages whose elements are picture arrays .there are two distinct types of array grammars , isometric array grammars and non - isometric array grammars .since application of rewriting rule can increase or decrease the length of the rewritten part , the dimension of rewritten sub array can change in the case of non - isometric grammars but application of such a rule is shape preserving in the case of isometric grammars due to the fact that the left and right sides of an array rewriting rule is geometrically identical . in order to handle more context with rewriting systems , a system with several componentsis composed and defined a cooperation protocol for these components to generate a common sentential form .such devices are known as cooperating distributed ( cd)grammar systems .components are represented by grammars or other rewriting devices , and the protocol for mutual cooperation modifying the common sentential form according to their own rules . a variety of string grammar system models have been introduced and studied in the literature .rudolf freund extended the concept of grammar system to arrays by introducing array grammar system and further j. dassow , r. freund and gh .p elaborated the power of cooperation in array grammar system ( cooperating array grammar system ) for various non - context - free sets of arrays which can be generated in a simple way by cooperating array grammar systems and simple picture description .they also proved that the cooperation increases the generative capacity even in the case of systems with regular array grammar components .different kinds of control mechanism that are added to component grammars for regulated rewriting rules have been considered in string grammar systems and such control devices are known to increase the generative power of the grammar in many cases .random context grammar is viewed as one of the prototype mechanism in which components grammars that permit or forbid the application of a rule based on the presence or absence of a set of symbols .hexagonal arrays and hexagonal patterns are found in the literature on picture processing and image analysis . 
the class of hexagonal kolam array language ( hkal )was introduced by siromoneys .the class of hexagonal array language was introduced by subramanian .the class of local and recognizable picture languages were introduced by dersanambika et.al .recently we extended cooperative distributed grammar system to hexagonal arrays and different capabilities of the system are studied .in this paper we associate permitting symbols with rules of the grammar in the components of cooperating distributed context - free hexagonal array grammar systems as a control mechanism and investigating the generative power of the resulting systems in the terminal mode .this feature of associating permitting symbols with rules when extended to patterns in the form of connected arrays also requires checking of symbols , but this is simpler than usual pattern matching .the benefit of allowing permitting symbols is that it enables us to reduce the number of components required in a cooperating distributed hexagonal array grammar system for generating a set of picture arrays .let be a finite non - empty set of symbols .the set of all hexagonal arrays made up of elements of is denoted by .the size of the hexagonal array is defined by the parameters (left upper ) , (left lower ) , (right upper ) , (right lower ) , (upper ) , (lower ) as shown in figure 1 . for the length of the left upper side of denoted by .similarly we define ,,, and .figure 1 an isometric hexagonal array grammar is a construct where and are disjoint alphabets of non terminals and terminals respectively , is the start symbol , is a special symbol called blank symbol and is a finite set of rewriting rules of the form where and are finite subpatterns of a hexagonal pattern over satisfying the following conditions : 1 .the shape of and are identical .2 . contains at least one element of the elements of appearing in are not rewritten .a non symbol in is not replaced by a blank symbol in 4 . the application of the production preserves connectivity of the hexagonal array . for a hexagonal array grammar can define for if there is a rule such that is a subpattern of and is obtained by replacing in by the reflexive closure of is denoted by the hexagonal array language generated by is defined by a hexagonal array grammar is said to be context free if in the rule 1 . non symbol in are not replaced by in 2 . contain exactly one non - terminal and some occurrences of blank symbol ..the family of languages generated by a context free hexagonal array grammar is denoted by .a context free hexagonal array grammar is said to be regular if rules are of the form the family of languages generated by a regular hexagonal array grammar is denoted by a cooperating hexagonal array grammar system ( of type and degree ) , is a construct where and are non - terminal and terminal alphabets respectively , and are finite sets of regular respectively context free rules over let be a cooperating hexagonal array grammar system .let then we write if and only if there are words such that 1 . 2 . 
that is , moreover , we write + if and only if for some + if and only if for some + if and only if for some + if and only if and there is no with by we denote family of hexagonal array language generated by cooperating hexagonal array grammar system consisting of at most components of type in the mode a random context grammar is a quadruple where is the alphabet of non - terminals , is the alphabet of terminals such that is the start symbol , and is a finite set of productions of the form where is a context free production, and ,and . for and a production ,the relation holds provided that and .a permitting ( forbidding)grammar is a random context grammar where for each production + , it holds that set of all symbols in the labeled cells of the array is denoted by alph a permitting cf hexagonal array rule is an array grammar is of the form where is a context - free hexagonal array rewriting rule and , where the set of non -terminals of the grammar .if , then we avoid mentioning it in the rule.for any two arrays and a permitting cf hexagonal array rule , the array is derived from by replacing in by provided that .a permitting cooperating distributed context - free hexagonal array grammar systems ( pcdcfhags ) is where is a finite set of non terminals, is the start symbol, is a finite set of terminals , and each , for is a finite set of permitting cf hexagonal array rewriting rules . for any two hexagonal arrays , we denote an array rewriting step performed by applying a permitting cooperating cf hexagonal array rule in , and by the transitive closure of also we say that the array derives an array in the terminal mode or mode and write , if and there is no array such that the array language generated by in the mode is defined as for note that is any sequence of symbols belonging to where repeated symbols are allowed . also denote the family of array languages generated in the mode by permitting cooperating cf hexagonal array grammar systems with at most components the pcdcfhags where consists of the following rules . generates ( in the mode ) the set of all arrays over in the shape of a left arrow head with left upper arm and left lower arm are equal in size ( figure 3 ) . the derivations starts with rule 1 followed by rule 2 which can be applied as the permitting symbol is present in the array .this grows left upper arm ( lu ) by one cell .then rule 3 can be applied due to the presence of permitting symbol and this grows left lower arm ( ll ) by one cell .an application of rule 4 followed by 5 , again noting that the permitting symbols of the respective rules are present changes to and to .the repeated application of the process growing both the left upper arm and left lower arm equal in size .rules 6 and 7 are applied changing to and to so that the derivation can be terminated by the application of rules 8 and 9 thus yielding a hexagonal array in the shape of a left arrow head with size of and are equal .a where the set of permitting symbols in all the components is empty , is simply a cooperating distributed hexagonal array system ( ) . 
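The applicability condition for a permitting rule stated above, namely that a rule may rewrite a matching subarray only if every symbol of its permitting set already occurs somewhere in the current array (the set alph of its labeled cells), can be sketched at the symbol level as follows. The flat coordinate-to-symbol representation of a hexagonal array and all names are illustrative assumptions; the geometric details of matching hexagonal subpatterns are not modeled here.

```python
from dataclasses import dataclass, field

# a hexagonal array is represented here simply as a mapping from cell coordinates
# to symbols; alph(H) is then the set of symbols occurring in its labeled cells
Array = dict  # {(q, r): symbol}

@dataclass
class PermittingRule:
    lhs: dict                                      # subpattern to replace: offset -> symbol
    rhs: dict                                      # replacement subpattern, same shape
    permitting: set = field(default_factory=set)   # symbols that must be present

def alph(array: Array) -> set:
    return set(array.values())

def applicable_at(array: Array, rule: PermittingRule, origin) -> bool:
    """The lhs must match at `origin` and the permitting symbols must occur in the array."""
    q0, r0 = origin
    matches = all(array.get((q0 + dq, r0 + dr)) == s
                  for (dq, dr), s in rule.lhs.items())
    return matches and rule.permitting <= alph(array)

def apply_rule(array: Array, rule: PermittingRule, origin) -> Array:
    if not applicable_at(array, rule, origin):
        raise ValueError("rule not applicable here")
    q0, r0 = origin
    out = dict(array)
    for (dq, dr), s in rule.rhs.items():
        out[(q0 + dq, r0 + dr)] = s
    return out

# tiny demonstration: the rule may fire only because "x" occurs somewhere in the array
rule = PermittingRule(lhs={(0, 0): "A"}, rhs={(0, 0): "a"}, permitting={"x"})
H = {(0, 0): "A", (1, 0): "x"}
print(applicable_at(H, rule, (0, 0)))   # True
print(apply_rule(H, rule, (0, 0)))      # {(0, 0): 'a', (1, 0): 'x'}
```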
the family of array languages generated in the mode by a with at most components is denoted by if the rules in all the components are only in the form of rules of a regular array grammar , then this family is denoted by we now show that the set of all arrays over in the form of hollow hexagonal frame .figure 3 can be generated ( in the mode ) by a with only two components .the set is generated ( in the mode ) by the .the rules in the component are given by rules in the component are given by using - mode of derivation , starting with the symbol an application of rule ( 1 ) in the first component followed by rule(2 ) which can be applied as the permitting symbol is present in the array , grows in the arm by one place .since is the permuting symbol for rule ( 3 ) , rule ( 3 ) can then applied which results the growth of arm by one place .now the situations are ready for applying rules ( 4 ) and ( 5 ) and at this stage becomes and becomes .the process can be repeated and this in turn results the growth of and arms equal in length . instead of rule( 4 ) rule ( 6 ) is applied followed by ( 7),(8),(9),(10),(11 ) allows upper and lower arms to grow in equal length , with permitting symbols in all these rules directing the sequence of applications in the right order .if rule ( 12 ) is used instead of rule ( 10 ) and this is followed by rule ( 13 ) right upper ( ) and right lower ( ) arms grows equal in size and correct application of rule ( 18 ) and ( 19 ) will result in the symbol in the arm and in the lower right arm where is at the position of right end point of and arm so that further application of productions in is not possible at any non - terminals . at this stage applying productions in and this in turn terminate the derivation yielding a hollow hexagon with its parallel arms are equal in size .the equality follows from the results from .we know that , a with empty set of permitting symbols associated with the rules is same as a cooperating distributed hexagonal array grammar system and so if .examples ( 1 ) and ( 2 ) illustrated the fact that the set of all hexagonal arrays over in the shape of left arrow head with left upper arm and left lower arm with equal size is generated by the with only one component and working in the -mode and hence the inclusion is proper which proves ( 1 ) .similar arguments for inclusion in statement(2 ) are hold . from the proof of the lemma(1 ) it is very clear that under the strict application of the derivation rules with the respective permitive symbols generate a hollow hexagon with parallel arms equal in size . such a generation is not possible in since and incorrect application of terminating rule will leads to non - completion of the hollow hexagon with parallel arms equal in size .thus consider the array languages generated by in example ( 1 ) and in the proof of lemma ( 1 ) .it can be seen that generating the arrow head of patterns of the language , we should require two growing heads at the same time .but in the with any number of components the array rules contains only one growing head .so the same language can not be generated by and this proves ( 3 ) . 
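The terminal (t-) mode of cooperation used in the examples above, where one component keeps rewriting the sentential form until none of its own rules is applicable and only then another component may take over, can be illustrated on plain strings as below. The string setting, the random scheduling and the toy rule sets are simplifying assumptions standing in for the hexagonal-array case; they are only meant to make the hand-over protocol explicit.

```python
import random

def applicable(rules, form):
    """Rules of one component whose left-hand symbol occurs in the current form."""
    return [(a, b) for (a, b) in rules if a in form]

def t_mode_step(rules, form, rng):
    """Let one component work in t-mode: keep rewriting until none of its rules applies."""
    while True:
        options = applicable(rules, form)
        if not options:
            return form
        a, b = rng.choice(options)
        form = form.replace(a, b, 1)        # rewrite one (leftmost) occurrence

def derive_t_mode(components, axiom, seed=1, max_handovers=20):
    rng = random.Random(seed)
    form = axiom
    for _ in range(max_handovers):
        form = t_mode_step(rng.choice(components), form, rng)
        if form.islower():                  # only terminal (lowercase) symbols remain
            return form
    return form                             # sentential form if the bound was reached

# toy two-component system over strings; together the components derive words a^n b^n
components = [
    [("A", "aB")],                          # component 1 removes every A, handing over B's
    [("B", "Ab"), ("B", "b")],              # component 2 either continues or terminates
]
print(derive_t_mode(components, "A"))
```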
to show the power of the cooperating hexagonal array grammar system with the array rules controlled by permitting symbols , consider the following example of a language of set of all hexagons with parallel arms are equal in length over a one letter alphabet .starting with repeated application of the first five rules of the component in this order generate having equal arms , with arm having non - terminal and arm having non - terminal once two rules and of component are used , the generations of these two arms end with terminal and then starts the generations of upper and lower arms of the hexagon using rules ( 6 ) to ( 11 ) .then the right application of rule ( 12 ) to ( 17 ) then ( 18 ) , ( 19 ) , ( 45 ) subjected to the permitting symbols will result to a hexagonal picture and finally by the application of rules in we get the required hexagon over the one letter alphabet as in figure 5 . in the siromoney matrix grammar ( 9 )rectangular arrays are generated in two phases ; one in horizontal and the other in vertical .further it was extended by associating a finite set of rules in the second phase of generation with each table having either right linear non - terminal rules of the form or right - linear terminal rules of the form and such array languages are denoted by and and we have a well known result ( refer 11 ) .correspondingly it can be established for hexagonal arrays .here we compare with these classes . except the rules in the and arm with a middle marker and the symbols above and below in both arms and which are equal in number .the first two rules of changes the symbols in the left most cell in to .then the remaining rules of and the last two rules of ( , ) and the rules of the component generate the arms such that each cell in the middle arm in the horizontal direction is made up of and all other cells above and below are made up of except the leftmost arms.the generation of the cells finally terminates , yielding the rightmost cells are rewritten by .thus a hexagonal array in the shape of a left arrowhead ( describing as figure(6 ) is generated).if we treat as blank , such arrays can not be in and which in turn proves that .* conclusion .* in this paper , the picture array generating power of cooperating hexagonal array grammar systems endowed with permitting symbols are studied .it is seen that the control mechanism which we here used namely the permitting symbols is shape preserving in picture generation and also it reduce size complexity .h. bordihm , m. holzer , in : c. matin - vide , f. otto , h. fernau ( eds . ) , _ random context in regulated rewriting versus cooperating distributed grammar systems _ , in : lecture notes in computer science , vol . 5159 , springer - verlag , 2008,pp 125136 .k. s. dersanambika , k. krithivasan , martin - vide , k. g. subramanian , _ local and recognizable hexagonal picture languages _ ,international journal of pattern recognition and artificial intelligence , 19(7 ) , 2012 , pp 553571 .k. s. dersanambika , k. krithivasan , h. k. agarwal , j. guptha _ hexagonal contextual array p - systems , formal models , languages and application _ , series in machine perception artificial intelligence 66 , 2006 , pp 7996 . | in this paper we associate permitting symbols with rules of grammars in the components of cooperating distributed context - free hexagonal array grammar systems as a control mechanism and investigating the generative power of the resulting systems in the terminal mode . 
this feature of associating permitting symbols with rules when extended to patterns in the form of connected arrays also requires checking of symbols , but this is simpler than usual pattern matching . the benefit of allowing permitting symbols is that it enables us to reduce the number of components required in a cooperating distributed hexagonal array grammar system for generating a set of picture arrays . * subject classification : 68rxx * * keywords : hexagonal arrays , cooperating hexagonal array grammar systems , generative power * |
in treatment planning of radiotherapy with protons and heavier ions , the pencil - beam ( pb ) algorithm is commonly used ( hong 1996 , kanematsu 1998 , 2006 , schaffner 1999 , krmer 2000 ) , where a radiation field is approximately decomposed into two - dimensionally arranged gaussian beams that receive energy loss and multiple scattering in matter . in the presence of heterogeneity, these beams grow differently to reproduce realistic fluctuation in the superposed dose distribution .comparisons with measurements and monte carlo ( mc ) simulations , however , revealed difficulty of the pb algorithm at places with severe lateral heterogeneity such as steep areas of a range compensator and lateral interfaces among air , tissue , and bone in a patient body ( goitein 1978 , petti 1992 , kohno 2004 , ciangaru 2005 ) .one reason for the difficulty is that particles in a pencil beam are assumed to receive the same interactions , whereas they may be spatially overreaching beyond the density interface .the other reason is that only straight paths radiating from a point source are considered in beam transport , whereas actual particles may detour randomly by multiple scattering .schneider ( 1998 ) showed that a phase - space analysis could address the overreach and detour effects for a simple lateral structure .schaffner ( 1999 ) and soukup ( 2005 ) subdivided a physical spot beam virtually into smaller beams to naturally reduce overreaches .pflugfelder ( 2007 ) quantified lateral heterogeneity , with which subdivision and arrangement could be optimized .unfortunately , those techniques are ineffective against beam - size growth during transport . for electrons ,the overreach and detour effects are intrinsically much severer .shiu and hogstrom ( 1991 ) developed a solution , the pb - redefinition algorithm , where minimal pencil beams are occasionally regenerated , considering electron flows rigorously .the same idea was in fact partly applied to heavy particles for beam customization ( kanematsu 2008b ) , but the poly - energetic beam model to deal with heterogeneity could be seriously inefficient in high - resolution calculations necessary for bragg peaks . in this study ,we develop an alternative method to similarly address the overreach and detour effects . in the following sections , we incorporate our findings on the gaussian distribution into the pb algorithm , test the new method in a carbon - ion beam experiment , and discuss the results and practicality for clinical applications .the pb algorithm in this study basically follows our former works ( kanematsu 1998 , 2006 , 2008b ) .a pencil beam with index is described by position , direction , number of particles , residual range , angular variance , angular - spatial covariance , and spatial variance of the involved particles . as described in [ sec_appendix ] , these parameters are initialized and modified with transport distance . 
the resultant beams with variance superposed to form dose distribution where is the beam- origin , is the distance at the closest approach to point , is its equivalent water depth , and and are the tissue - phantom ratio and the beam range in water .any normalized gaussian distribution with mean and standard deviation can be represented with the standard normal distribution as incidentally , we have found that binomial gaussian function ,\end{aligned}\ ] ] reasonably approximates as shown in ( a ) , where we first fixed symmetric displacement for the binomial terms and determined their reduced standard deviation to conserve variance .similarly , the daughter gaussian terms in splits into grand daughters to form approximate function ,\end{aligned}\ ] ] and then into grand - grand daughters to form approximate function ,\end{aligned}\ ] ] as shown in figures [ fig : splitting](b ) and [ fig : splitting](c ) .summarizes size - reduction , displacement , and share - fraction factors for splitting with ( ) .further splitting with the same displacement is not possible with valid ( ) gaussian terms .( gray area ) and its approximate functions ( a ) , ( b ) , and ( c ) ( solid lines ) comprised of multiple , displaced , narrowed , and scaled gaussian distributions ( dashed lines).,width=491 ] lcccc factor name & symbol & & & + size reduction & & & & + displacement & & & & + share fraction & & & & + [ tab : splitting ] an overreaching gaussian beam may split two - dimensionally into smaller beams with these approximations .because beam multiplication will explosively increase computational amount , it must be applied only when and where necessary with optimum multiplicity for required size reduction . in a grid - voxel patient model with density distribution , we define density gradient vector as }{\delta_g}\ , \vec{e}_g,\end{aligned}\ ] ] where and are the grid interval and the basis vector for axis as shown in and operation ] ) are initialized as where is the displaced position, is the radial direction from the focus or the virtual source ( icru-35 1984 ) of the mother beam , is the number of shared particles , is the conserved residual range , is the reduced spatial variance , and and conserve focal distance and local angular variance in splitting .the mother beam splits into the daughter beams to form different detouring paths .sets of the initial parameters for daughter beams are sequentially pushed on the stack of computer memory and the last set on the stack will be the first beam to be transported in the same manner , which will be repeated until the stack has been emptied before moving on to the next original beam .an experiment to assess the present method was carried out with accelerator facility himac at national institute of radiological sciences .a beam with nucleon kinetic energy mev was broadened to a uniform field of nominal 10-cm diameter by the spiral - wobbling method ( yonai 2008 ) . 
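The variance-conserving split behind figure 2 and table 1 is easy to state for the simplest, binomial case: if a Gaussian of standard deviation sigma is replaced by two equally weighted daughters displaced by +/- d*sigma, conserving the total variance forces the daughters' width down to sigma*sqrt(1 - d^2). The displacement value chosen below and the one-dimensional bookkeeping are illustrative assumptions; the paper's actual size-reduction, displacement and share-fraction factors are the table 1 entries, which were lost in extraction here.

```python
import numpy as np

def split_gaussian_1d(mu, sigma, d=0.5):
    """Split N(mu, sigma^2) into two equally weighted daughters at mu +/- d*sigma.

    Equal weights 1/2 and reduced width sigma' = sigma*sqrt(1 - d^2) keep the mixture
    mean and variance equal to those of the mother (the value of d is an assumption).
    """
    assert 0.0 < d < 1.0
    sigma_reduced = sigma * np.sqrt(1.0 - d ** 2)
    return [(0.5, mu - d * sigma, sigma_reduced),
            (0.5, mu + d * sigma, sigma_reduced)]

def mixture_moments(components):
    """Mean and variance of a Gaussian mixture given as [(weight, mean, sigma), ...]."""
    w = np.array([c[0] for c in components])
    m = np.array([c[1] for c in components])
    s = np.array([c[2] for c in components])
    mean = np.sum(w * m)
    var = np.sum(w * (s ** 2 + m ** 2)) - mean ** 2
    return mean, var

mother = (1.0, 0.0, 2.0)                          # weight, mean, sigma
daughters = split_gaussian_1d(mother[1], mother[2])
print(mixture_moments([mother]))                  # (0.0, 4.0)
print(mixture_moments(daughters))                 # (0.0, 4.0): variance is conserved
```

In the two-dimensional algorithm described above the same bookkeeping is applied laterally, and each daughter inherits its share of particles before being pushed onto the stack.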
the horizontal wobbler at cm and the vertical wobbler at cm formed a spiral orbit of maximum 10-cm radius on the isocenter plane .a 0.8-mm - thick pb ( , cm ) foil was placed at cm as a scatterer , which increased the instantaneous rms beam size from pristine 8.3 mm to 25 mm at the isocenter .a large - diameter parallel - plate ionization chamber was placed at cm for dose monitoring and beam - extraction control .an al ( , cm ) ridge filter for semi - gaussian range modulation of cm and cm in water ( schaffner 2000 ) and a 2-mm - thick al base plate were inserted at cm to moderate the bragg peak just to ease dosimetry . as shown in , a water ( , cm ) tank with a 1.9-cm - thick pmma ( , cm ) beam - entrance wall was placed at the irradiation site with the upstream face at cm .the radiation field was defined by a 8-cm - square 5-cm - thick brass collimator whose downstream face was at 65 cm .two identical 3-cm - thick pmma plates were inserted .the downstream plate was attached to the beam - entrance face of the tank covering only the side to form a phantom system with a bump .the upstream plate was put with its downstream face at cm covering only the side to compensate the bump .such arrangement is typical for range compensation and sensitive to the detour effects ( kohno 2004 ) .these beam - customization elements were manually aligned to the nominal central axis at an uncertainty of 1 mm .a multichannel ionization chamber ( mcic ) with 96 vented sense volumes aligned at intervals of 2 mm along the axis was installed at in the water tank .the mcic system was electromechanically movable along the axis and the upstream limit at cm was chosen for the reference point with reference depth cm of equivalent water from the tank surface . with a reference open field without the pmma plates or the collimator , we measured reference dose / mu reading at reference height for every channel for a calibration purpose .every dose / mu reading of channel at height for any field is divided by corresponding reference reading to measure dose at position as where divergence - correction factor is to measure the doses in dose unit that would be the isocenter dose for the reference depth of the reference field .we then measured reference - field doses in the phantom at varied positions , from which we get tissue - phantom ratio beam range with gaussian modulation was equated to the distal 80%-dose depth ( koehler 1975 ) cm as shown in ( a ) . 
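The range definition used above, with the beam range taken as the distal 80%-dose depth of the depth-dose curve, amounts to a simple interpolation on the falling edge of the Bragg peak; a sketch is given below. The synthetic depth-dose curve is only a placeholder for the measured tissue-phantom ratios, and the function name and sampling are assumptions.

```python
import numpy as np

def distal_dose_depth(depths, doses, level=0.8):
    """Depth on the distal (downstream) side of the peak where dose falls to level*max."""
    depths = np.asarray(depths, dtype=float)
    doses = np.asarray(doses, dtype=float)
    threshold = level * doses.max()
    i_peak = int(np.argmax(doses))
    # walk downstream from the peak until the dose drops below the threshold
    for i in range(i_peak, len(doses) - 1):
        if doses[i] >= threshold > doses[i + 1]:
            # linear interpolation between the two bracketing samples
            frac = (doses[i] - threshold) / (doses[i] - doses[i + 1])
            return depths[i] + frac * (depths[i + 1] - depths[i])
    return depths[-1]

# placeholder depth-dose curve with an entrance plateau and a peak near 14 cm
z = np.linspace(0.0, 16.0, 321)
dose = 0.3 + np.exp(-0.5 * ((z - 14.0) / 0.6) ** 2)
print(f"distal 80% depth: {distal_dose_depth(z, dose):.2f} cm")
```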
with indications for the measurement ( ) and the reference and 80%-dose depths ( and ) and ( b ) effective density and ( c ) effective lateral density gradient distributions in gray scale at in the calculation model.,width=491 ] with the collimator and the pmma plates in place , lateral dose profiles were measured in the same manner with particular interest around 3.3 cm , 6.8 cm , and 10.3 cm , where the bragg peaks were expected for the primary ions passing through none , either , and both of the pmma plates .shows range loss and scattering for the beam - line elements , and the resultant contributions to source sizes and estimated by back projection to the sources .the ridge filter with the base plate was modeled as plain aluminum of average thickness .the scattering for the scatterer was estimated from measured beam size 25 mm quadratically subtracted by pristine size in the distance of 425 cm .total range loss 2.10 cm was deduced from range 16.24 cm expected for mev carbon ions ( kanematsu 2008c ) and deficit 0.68 cm for the pristine beam may be attributed to minor materials in the beam line .lccccc element & & & & & + pristine & & 0.68 cm & & 8.3 mm & 8.3 mm + scatterer & 425 cm & 0.46 cm & 5.5 mrad & 5.6 mm & 2.5 mm + ridge filter & 235 cm & 0.96 cm & 3.2 mrad & 9.3 mm & 7.5 mm + total & & 2.10 cm & & 13.7 mm & 11.5 mm + [ tab : contributions ] as described in [ sec_appendix ] , pencil beams were defined to cover the collimated field at intervals of mm on the isocenter plane , where the open field was assumed to have uniform unit fluence .exact collimator modeling was omitted because we were interested in the density interface in the middle of the field .the upstream pmma plate was modeled as a range compensator with range loss cm for or for , where the original beams were generated , followed by the range loss and scattering .the phantom system comprised of the downstream pmma plate and the water tank was modeled as density voxels at grid intervals of mm for a 2-l volume of , , and .figures [ fig : model](b ) and [ fig : model](c ) show the density and lateral heterogeneity distributions .we carried out dose calculations with beam splitting enabled ( splitting calculation ) and disabled ( non - splitting calculation ) . in this geometry ,the density interface at was almost parallel to the beams and only ones in the two nearest columns would split . to examine effectiveness and efficiency of this method with larger heterogeneity , a 3-cm diameter cylindrical air cavity at and two 1-cm diameter bone rods with density at and added to the phantom in the calculation model .we carried out splitting and non - splitting dose calculations of the same carbon - ion radiation to monitor changes in frequencies of splitting modes , number of stopped beams , total path length , total effective volume in the heterogeneous phantom , and computational time with a 2-ghz powerpc g5/970 processor by apple / ibm .the splitting calculation could be more effective for protons because they generally suffer larger scattering .we thus carried out equivalent dose calculations for protons with enhanced scattering angle by factor 3.61 in otherwise the same configuration including the tissue - phantom - ratio data .shows the two - dimensional dose distributions measured in the carbon - ion beam experiment and the corresponding non - splitting and splitting calculations . 
shows their lateral profiles in the plateau and at depths for sub peak , main peak , and potential sub peak expected for particles that penetrated both , either , and none of the pmma plates .a dip / bump structure was commonly formed along the line for lateral particle disequilibrium ( goitein 1978 ) .there was actually a sub peak in the measurement and in the splitting calculation , while it was naturally absent in the non - splitting calculation .the observed loss of the main - peak component was also reproduced by the splitting calculation .the potential sub peak was barely noticeable only in the splitting calculation . by ( a ) measurement , ( b ) splitting calculation , and ( c ) non - splitting calculation.,width=491 ] by measurement ( ) , splitting calculation ( solid ) , and non - splitting calculation ( dashed ) at ( a ) 14.8 cm ( plateau ) , ( b ) 10.3 cm ( sub peak ) , ( c ) 6.8 cm ( main peak ) , and ( d ) 3.3 cm ( potential sub peak).,width=491 ] shows details of the heterogeneous phantom and the dose distributions by splitting calculation for the carbon - ion and proton radiations .the larger scattering for protons naturally led to the larger dose blurring .shows the dose profiles at the main peak and where the heterogeneity effects were large by splitting and non - splitting calculations .in addition to the loss of the main - peak component at , beam splitting caused some dose enhancement in the shoulders of the profiles especially for the carbon ions . of ( a ) density and ( b ) effective lateral density gradient in the calculation model and doses from ( c ) carbon - ion and ( d ) proton radiations calculated with splitting.,width=491 ] shows the statistical results , where the splitting effectively increased the carbon - ion and proton beams by factors of 27 and 25 in number , 20 and 25 in path length , 6.6 and 12 in volume , and 4.9 and 4.2 in total computation . lcccc projectile & & + beam splitting & no & yes & no & yes + frequency of & 0 & 0.243 & 0 & 3.813 + frequency of & 0 & 0.132 & 0 & 0.714 + frequency of & 0 & 1.636 & 0 & 0.967 + number of stopped beams & 1 & 26.8 & 1 & 25.0 + meanpath length / cm & 20.0 & 394.6 & 20.0 & 499.8 + mean effective volume/ & 3.52 & 23.1 & 30.8 & 380.4 + computational time / s & 9.3 & 45.8 & 15.3 & 63.8 + [ tab : statistics ]subdivision of a radiation field into virtual pencil beams is an arbitrary process in the pb algorithm although the beam sizes and intervals should be limited by lateral heterogeneity of a given system . in the pb - redefinition algorithm ( siu and hogstrom 1991 ) ,beams are defined in uniform rectilinear grids and hence regeneration in areas with little heterogeneity may be potentially wasteful . in the beam - splitting method, beams are automatically optimized in accordance with local heterogeneity . 
in other words ,the field will be covered by minimum number of beams in a density - modulated manner as a result of individual independent self - similar splitting .relative errors in similarity , ( ) , are maximum at with values , , and .the resultant dose errors would be smaller , due to contributions of other beams , and may be tolerable .effectiveness of the splitting method was demonstrated in the experiment .the most prominent detour effect was the loss of range - compensated main - peak component in ( c ) , which amounted to about 10% in dose and approximately as large as the distortion due to lateral particle disequilibrium .the splitting calculation and the measurement generally agreed well , considering that the experimental errors in device alignment could have been 1 mm or more .the potential sub peak for particles detouring around both pmma plates was not detected , which may be natural because detouring itself requires scattering .the dose resolution of the mcic system of about 1% of the maximum should have also limited the detectability . in the applications to the heterogeneous phantom model , although we do nt have reference data to compare the results with , it is natural that the splitting calculation with finer beams resulted in finer structures in the dose distributions .computational time is always a concern in practice . in our example , the slowing factor for beam splitting with respect to non - splitting calculation was almost common to carbon ions and protons and the speed performance , a minute for 2-l volume in 1-mm grids , may be already acceptable for clinical applications . in principle , the total path length determines the computational amount for path integrals and the total effective volume determines that for dose convolution . their influences on the actual computational time will depend on algorithmic implementations ( kanematsu 2008a ) .in fact , the slowing factor for splitting was less than 5 , which is even better than either estimation .in addition to common overhead that should have superficially reduced the factor , our code optimization with algorithmic techniques , which will be reported elsewhere , could have contributed to the performance .accuracy and speed also depend strongly on the cutoff parameters and logical conditions in the implemented algorithm , size and heterogeneity of a patient model , and resolution clinically needed for a dose distribution .the automatic multiplication of tracking elements resembles a shower process in physical particle interactions usually calculated in mc simulations .in fact , mc simulations for dose calculation share many things in common .transport and stacking of the elements are essentially the same and the probability for scattering may be equivalent to the distribution in the gaussian approximation . 
as far as efficiency is concerned ,the essential differences from the mc method are that the pb method deals with much less number of elements and that it does not rely on stochastic behavior of random numbers .the beam - splitting method is based on a simple principle of self - similarity and can be applied to any gaussian beam model of any particle type to fill the gap between monte carlo particle simulations and conventional beam calculations in terms of accuracy and efficiency .however , it is difficult for beam splitting or any beam model in general to deal with interactions that deteriorate particle uniformity , such as nuclear fragmentation processes ( matsufuji 2005 ) .in this work , we applied our finding of self - similar nature of gaussian distributions to dose calculation of heavy charged particle radiotherapy .the self - similarity enables dynamic , individual , and independent splitting of gaussian beams that have been grown larger than the limit from lateral heterogeneity of the medium . as a result, pencil beams will be arranged with optimum and modulated areal density to minimize overreaching and to address detouring with deflecting daughter beams . in comparison with a conventional calculation and a measurement ,the splitting calculation was prominently effective in the target region with steep range adjustment by an upstream range compensator .the detour effect was about 10% for the maximum and of the same order of magnitude with lateral particle disequilibrium effect . in comparison between carbon ions and protons , the effects of splitting were not significantly different because other scattering effects were also larger for protons .although performances depend strongly on physical beam conditions , clinical requirement , and algorithmic implementation , a typical slowing factor of the order of 10 may be reasonably achievable for involvement of beam splitting .in fact , factor of 5 has been achieved in our example .the principle and formulation for beam splitting are general and thus the feature may be added to various implementations of the pb algorithm in a straightforward manner .on generation of pencil beam on a plane at height as shown in , beam position , residual range , and variances , , and are initialized as where is the beam- origin , is the beam position on the isocenter plane , and are the source sizes at virtual source heights and , is the initial residual range , and is the beam direction radiating from the virtual sources with because nuclear interactions are effectively handled in tissue - phantom ratio in dose calculation , number of particles is modeled as invariant .the fermi - eyges theory ( eyges 1948 , kanematsu 2008c , 2009 ) gives increments of the pb parameters in step within a density voxel by \delta s , \label{eq : variance}\end{aligned}\ ] ] where and are the effective density ( kanematsu 2003 ) and radiation length of the medium in units of those of water and and are the particle charge and mass in units of those of a proton .for the last physical step with and diverging , the growth is directly given by and then disabled by in the unphysical region .kanematsu n , akagi t , futami y , higashi a , kanai t , matsufuji n , tomura h and yamashita h 1998 a proton dose calculation code for treatment planning based on the pencil beam algorithm _ jpnphys . 
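The appendix update above, namely the Fermi-Eyges growth of the angular variance A, the angle-position covariance B and the spatial variance C over a transport step, follows the standard moment equations dA/ds = T, dB/ds = A and dC/ds = 2B, with T the scattering power of the medium. The step routine below integrates these exactly for a constant T over the step; the numerical values and the placeholder scattering power are assumptions, since the paper's own expressions (with the effective density, radiation length, charge and mass scalings) were lost in extraction.

```python
from dataclasses import dataclass

@dataclass
class BeamMoments:
    A: float   # angular variance   <theta^2>
    B: float   # covariance         <x theta>
    C: float   # spatial variance   <x^2>

def fermi_eyges_step(m: BeamMoments, ds: float, scattering_power: float) -> BeamMoments:
    """Advance the second-order beam moments by a step ds.

    Uses dA/ds = T, dB/ds = A, dC/ds = 2B, integrated exactly for a
    constant scattering power T over the step.
    """
    T = scattering_power
    A1 = m.A + T * ds
    B1 = m.B + m.A * ds + 0.5 * T * ds ** 2
    C1 = m.C + 2.0 * m.B * ds + m.A * ds ** 2 + T * ds ** 3 / 3.0
    return BeamMoments(A1, B1, C1)

# transport through 10 cm of a medium with a made-up constant scattering power
m = BeamMoments(A=1.0e-4, B=0.0, C=0.25)      # rad^2, cm*rad, cm^2 (illustrative values)
for _ in range(100):
    m = fermi_eyges_step(m, ds=0.1, scattering_power=2.0e-5)   # rad^2/cm, assumed
print(m)
```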
_ * 18 * 88103 schaffner b , pedroni e and lomax a 1999 dose calculation models for proton treatment planning using a dynamic beam delivery system : an attempt to include density heterogeneity effects in the analytical dose calculation 2741 schaffner b , kanai t , futami y , shimbo m and urakabe e 2000 ridge filter design and optimization for the broad - beam three - dimensional irradiation system for heavy - ion radiotherapy _ med . phys . _* 27 * 71624 | the pencil - beam model is valid only when elementary gaussian beams are small enough with respect to lateral heterogeneity of a medium , which is not always the case in heavy charged particle radiotherapy . this work addresses a solution for this problem by applying our discovery of self - similar nature of gaussian distributions . in this method , gaussian beams split into narrower and deflecting daughter beams when their size has exceeded the lateral heterogeneity limit . they will be automatically arranged with modulated areal density for accurate and efficient dose calculations . the effectiveness was assessed in an carbon - ion beam experiment in presence of steep range compensation , where the splitting calculation reproduced the detour effect of imperfect compensation amounting up to about 10% or as large as the lateral particle disequilibrium effect . the efficiency was analyzed in calculations for carbon - ion and proton radiations with a heterogeneous phantom model , where the splitting calculations took about a minute and were factor of 5 slower than the non - splitting ones . the beam - splitting method is reasonably accurate , efficient , and general so that it can be potentially used in various pencil - beam algorithms . |
biological evolution presents a rich array of phenomena that involve nonlinear interactions between large numbers of units . as a consequence ,problems in evolutionary biology have recently enjoyed increasing popularity among statistical and computational physicists. however , many of the models used by physicists have unrealistic features that prevent the results from attracting significant attention from biologists . in this paperwe therefore develop and explore individual - based models of coevolution in predator - prey systems based on more realistic population dynamics than some earlier models. the author , together with r. k. p. zia , introduced a simplified form of the tangled - nature model of biological macroevolution , which was developed by jensen and collaborators. in these simplified models, the reproduction rates in an individual - based population dynamics with nonoverlapping generations provide the mechanism for selection between several interacting species .new species are introduced into the community through point mutations in a haploid , binary genome " of length , as in eigen s model for molecular evolution. the potential species are identified by the index ] , while the diagonal elements are zero .this model evolves toward mutualistic communities , in which all species are connected by mutually positive interactions. of greater biological interest is a predator - prey version of the model , called model b. in this case a small minority of the potential species ( typically 5% ) are primary producers , while the rest are consumers .the off - diagonal part of the interaction matrix is antisymmetric , with the additional restriction that a producer can not also prey on a consumer. in simulations we have taken and the nonzero as independent and uniformly distributed on ] is the metabolic efficiency of converting prey biomass to predator offspring .analogously , the functional response of a producer species toward the external resource is in both cases , if , then the consumption rate equals the resource ( or ) divided by the number of individuals of , thus expressing intraspecific competition for scarce resources . in the opposite limit , ,the consumption rate is proportional to the ratio of the specific , competition - adjusted resource to the competition - adjusted total available sustenance , .the total consumption rate for an individual of is therefore the birth probability is assumed to be proportional to the consumption rate , \ ; , \label{eq : bi}\ ] ] while the probability that an individual of avoids death by predation until attempting to reproduce is the total reproduction probability for an individual of species in this model is thus .we simulated the functional - response model over generations ( plus generations warm - up " ) for the following parameters : genome length ( potential species ) , external resource , fecundity , mutation rate , proportion of producers , interaction matrix with connectance and nonzero elements with a symmetric , triangular distribution over ] , where with for the case of all species , and analogously for the producers and consumers separately .the time series for both diversities and population sizes show intermittent behavior with quiet periods of varying lengths , separated by periods of high evolutionary activity . in this respect ,the results are similar to those seen for models a and b in earlier work. however , diverse communities in this model seem to be less stable than those produced by the linear models . 
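The population update sketched above, with nonoverlapping generations in which an individual must first escape predation, then reproduces with a probability tied to its consumption rate, and offspring genomes can mutate bitwise, can be written schematically as follows. The data layout, the way the per-individual reproduction probability is supplied, the use of the fecundity F, and the per-bit reading of the mutation rate are assumptions made for illustration, not the paper's exact bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(42)
L = 21            # genome length, so there are 2**L potential species
F = 2             # offspring per successful reproduction (assumed usage of the fecundity)
mu = 1e-3         # mutation probability per bit of an offspring genome (assumed reading)

def next_generation(population, reproduction_prob):
    """One nonoverlapping generation.

    `population` is an array of integer genomes; `reproduction_prob(genome)` is the total
    reproduction probability (survival of predation times the birth probability).
    All parents die at the end of the generation, so only the offspring survive.
    """
    offspring = []
    for genome in population:
        if rng.random() < reproduction_prob(genome):
            for _ in range(F):
                child = int(genome)
                flips = np.nonzero(rng.random(L) < mu)[0]
                for bit in flips:                 # flip each mutated bit of the genome
                    child ^= 1 << int(bit)
                offspring.append(child)
    return np.array(offspring, dtype=np.int64)

# toy run with a flat reproduction probability, just to exercise the loop
pop = np.zeros(100, dtype=np.int64)
for _ in range(10):
    pop = next_generation(pop, lambda g: 0.45)
print(len(pop), "individuals,", len(set(pop.tolist())), "genotypes")
```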
in particular , this model has a tendency to flip randomly between an active phase with a diversity near ten , and a `` garden of eden '' phase of one or a few producers with a very low population of unstable consumers , such as the one seen around 10 million generations in fig . [fig : timser ] . a common method to obtain information about the intensity of fluctuations in a time series at different time scalesis the power - spectral density ( squared fourier transform ) , or psd .psds are presented in fig .[ fig : psd ] for the diversity fluctuations and the fluctuations in the population sizes ( fig .[ fig : psd](a ) ) and the intensity of extinction events ( fig .[ fig : psd](b ) ) .the former two are shown for the total population , as well as separately for the producers and consumers .all three are similar .extinction events are recorded as the number of species that have attained a population size greater than one , which go extinct in generation ( marked as species " in the figure ) , while extinction sizes are calculated by adding the maximum populations attained by all species that go extinct in generation ( marked as population " in the figure ) .the psds for all the quantities shown exhibit approximate behavior .for the diversities and population sizes , this power law extends over more than five decades in time .the extinction measures , on the other hand , have a large background of white noise for frequencies above generations , probably due to the high rate of extinction of unsuccessful mutants . for lower frequencies ,however , the behavior is consistent with noise within the limited accuracy of our results .( averaged over 16 generations ) falls continuously below some cutoff .the inset is a histogram of , showing a gaussian center with approximately exponential wings .the parabola in the foreground is a gaussian fit to this central peak .the cutoff values for the main figure , between 0.008 and 0.024 , were chosen on the basis of this distribution .the data in both parts of the figure are averaged over five independent simulation runs ., title="fig : " ] ( averaged over 16 generations ) falls continuously below some cutoff .the inset is a histogram of , showing a gaussian center with approximately exponential wings .the parabola in the foreground is a gaussian fit to this central peak .the cutoff values for the main figure , between 0.008 and 0.024 , were chosen on the basis of this distribution .the data in both parts of the figure are averaged over five independent simulation runs ., title="fig : " ] the evolutionary dynamics can also be characterized by histograms of characteristic time intervals , such as the time from creation till extinction of a species ( species lifetimes ) or the time intervals during which some indicator of evolutionary activity remains continuously below a chosen cutoff ( duration of evolutionarily quiet periods ) .histograms of species lifetimes are shown in fig .[ fig : time](a ) . as our indicator of evolutionary activitywe use the magnitude of the logarithmic derivative of the diversity , , and histograms for the resulting durations of quiet periods , calculated with different cutoffs , are shown in fig .[ fig : time](b ) .both quantities display approximate power - law behavior with an exponent near , consistent with the behavior observed in the psds. 
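The two diagnostics used above, the power spectral density of a time series and the distribution of quiet-period durations below an activity cutoff, can be computed along the following lines. This is a generic sketch: the 16-generation coarse-graining follows the figure caption, but the synthetic demonstration series is not model output.

```python
import numpy as np

def power_spectral_density(x):
    """PSD as the squared modulus of the Fourier transform of the mean-removed series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0)   # d = 1 generation per sample
    return freqs[1:], spec[1:]               # drop the zero-frequency bin

def quiet_period_durations(diversity, cutoff, window=16):
    """Durations of intervals where the magnitude of d(log D)/dt, coarse-grained
    over `window` generations, stays below `cutoff`."""
    logD = np.log(np.asarray(diversity, dtype=float))
    coarse = logD[: len(logD) // window * window].reshape(-1, window).mean(axis=1)
    activity = np.abs(np.diff(coarse)) / window
    quiet = activity < cutoff
    durations, run = [], 0
    for q in quiet:
        if q:
            run += 1
        elif run:
            durations.append(run * window)
            run = 0
    if run:
        durations.append(run * window)
    return np.array(durations)

# Demonstration on a synthetic series (a random walk, not model output).
rng = np.random.default_rng(1)
d = np.cumsum(rng.normal(0, 0.02, 2**16)) + 10.0
f, psd = power_spectral_density(d)
print("low-frequency PSD slope ~", np.polyfit(np.log(f[:100]), np.log(psd[:100]), 1)[0])
print("quiet periods:", quiet_period_durations(np.exp(d / 10), cutoff=0.01)[:5])
```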
it is interesting to note that the distributions for these two quantities for this model have approximately the same exponent .this is consistent with the previously studied , mutualistic model a, but not with the predator - prey model b. we believe the linking of the power laws for the species lifetimes and the duration of quiet periods indicate that the communities formed by the model are relatively fragile , so that all member species tend to go extinct together in a mass extinction . "in contrast , model b produces simple food webs that are much more resilient against the loss of a few species , and as a result the distribution of quiet - period durations decays with an exponent near .( a ) . * ( b ) * time series of the number of new species that have reached a population greater than 1000 ( lower curve ) and greater than 100 ( upper curve ) .the inset shows the intermittent structure of the upper curve on a very fine scale of 2000 generations .see discussion in the text ., title="fig : " ] ( a ) . *( b ) * time series of the number of new species that have reached a population greater than 1000 ( lower curve ) and greater than 100 ( upper curve ) .the inset shows the intermittent structure of the upper curve on a very fine scale of 2000 generations .see discussion in the text ., title="fig : " ] the model studied above is one in which species forage indiscriminately over all available resources , with the output only limited by competition . also , there is an implication that an individual s total foraging effort increases proportionally with the number of species to which it is connected by a positive .a more realistic picture would be that an individual s total foraging effort is constant and can either be divided equally , or concentrated on richer resources .this is known as adaptive foraging . while one can go to great length devising optimal foraging strategies, we here only use a simple scheme , in which individuals of show a preference for prey species , based on the interactions and population sizes ( uncorrected for interspecific competition ) and given by and analogously for by the total foraging effortis thus .the preference factors are used to modify the reproduction probabilities by replacing all occurrences of by and of by in eqs .( [ eq : neff][eq : phiir ] ) .the results of implementing the adaptive foraging are quite striking .the system appears now to have a metastable low - diversity phase similar to the active phase of the non - adaptive model , from which it switches at a random time to an apparently stable high - diversity phase with much smaller fluctuations . as seen in fig .[ fig : adap](a ) , the switchover is quite abrupt , and fig . [fig : adap](b ) shows that it is accompanied by a sudden reduction in the rate of creation of new species . as seen in fig .[ fig : adappsd ] , the psds for both the diversities and population sizes in both phases show approximate noise for frequencies above generations . 
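A minimal sketch of the preference factors behind this adaptive-foraging scheme: each consumer weights its prey by interaction strength times prey population size and normalises so that the total foraging effort equals one. The exact normalisation, and the coupling eta to the external resource, are assumptions here, since the corresponding formulas are garbled above.

```python
import numpy as np

def foraging_preferences(i, M, n, R=0.0, eta=None):
    """Preference of species i for each of its resources, normalised so the
    total foraging effort is 1 (assumed normalisation).
    M   : interaction matrix, M[i, j] > 0 means i preys on j
    n   : population sizes
    R   : external resource (nonzero only for producers)
    eta : coupling of species i to the external resource (assumption)
    """
    prey = np.where(M[i] > 0.0)[0]
    raw = M[i, prey] * n[prey]                       # interaction strength x prey abundance
    raw_R = (eta[i] * R) if (eta is not None and R > 0.0) else 0.0
    total = raw.sum() + raw_R
    if total == 0.0:
        return {}, 0.0
    return dict(zip(prey, raw / total)), raw_R / total

# Example: species 0 splits its effort between prey 1 and prey 2.
M = np.array([[0.0, 0.6, 0.2], [-0.6, 0.0, 0.0], [-0.2, 0.0, 0.0]])
n = np.array([50, 200, 100])
prefs, pref_R = foraging_preferences(0, M, n)
print(prefs, pref_R)   # effort concentrated on the richer resource (prey 1)
```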
for lower frequencies , the metastable phase shows no discernible frequency dependence , while for the stable phase , the frequency dependence continues at least another decade .it thus appears that long - time correlations are not seen beyond generations for the metastable phase , and probably not beyond about generations for the stable one .these observations are consistent with species - lifetime distributions for both phases ( not shown ) , which are quite similar to those for the non - adaptive model , but typically with cutoffs in the range of to generations , much shorter than the total simulation times .noise for frequencies above about , but the psds appear to approach constant levels for the lowest frequencies ., title="fig : " ] noise for frequencies above about , but the psds appear to approach constant levels for the lowest frequencies ., title="fig : " ] in fact , the system can also escape from the low - diversity phase to total extinction , which is an absorbing state , and in some of our simulation runs we avoided this by limiting to less than 0.9 .this restriction does not seem to have any effect on the dynamics in the high - diversity phase .these results are preliminary , and it is possible that the high - diversity phase corresponds to a mutational meltdown. more research is clearly needed regarding the effects of adaptive foraging in this model .in this paper we have shown that very complex and diverse dynamical behavior results , even from highly over - simplified models of biological macroevolution .in particular , psds that show -like noise and power - law lifetime distributions for species as well as evolutionarily quiet states are generally seen .this is the case , both in the analytically tractable , but somewhat unrealistic tangled - nature type models , and in the nonlinear predator - prey models based on the more realistic holling type ii functional response . particularly intriguingis the appearance of a new , stable high - diversity phase in the latter type of model when adaptive foraging behavior is included . among the many questions about this new phase that remain to be addressedis the structure of the resulting community food webs .supported in part by u.s .national science foundation grant nos.dmr-0240078 and dmr-0444051 and by florida state university through the school of computational science , the center for materials research and technology , and the national high magnetic field laboratory .p. a. rikvold and r. k. p. zia , in _computer simulation studies in condensed matter physics xvi _ , edited by d. p. landau , s. p. lewis , and h .- b .schttler ( springer - verlag , berlin , 2004 ) , pp . 3437 .r. k. p. zia and p. a. rikvold , j. phys .a * 37 * , 5135 ( 2004 ) .p. a. rikvold , in _ noise in complex systems and stochastic dynamics iii _ , edited by l. b. kish , k. lindenberg , and z. gingl ( spie , the international society for optical engineering , bellingham , wa , 2005 ) , pp .148155 , e - print arxiv : q - bio.pe/0502046 .v. sevim and p. a. rikvold , in _computer simulation studies in condensed matter physics xvii _ , edited by d. p. landau , s. p. lewis , and h .- b .schttler ( springer - verlag , berlin , 2005 ) , pp .. v. sevim and p. a. rikvold , j. phys .a * 38 * , 9475 ( 2005 ) .n. d. martinez , r. j. williams , and j. a. dunne , in _ ecological networks : linking structure to dynamics in food webs _ , edited by m. pasqual and j. a. dunne ( oxford university press , oxford , 2006 ) , pp . 
| We explore the complex dynamical behavior of simple predator-prey models of biological coevolution that account for interspecific and intraspecific competition for resources, as well as adaptive foraging behavior. In long kinetic Monte Carlo simulations of these models we find quite robust -like noise in species diversity and population sizes, as well as power-law distributions for the lifetimes of individual species and for the durations of quiet periods of relative evolutionary stasis. In one model, based on the Holling type II functional response, adaptive foraging produces a metastable low-diversity phase and a stable high-diversity phase. |
it is well known that transport problems on the line involving convex cost functions have explicit solutions , consisting in a monotone rearrangement . recently, an efficient method has been introduced to tackle this issue on the circle . in this notewe introduce an algorithm that enables to tackle optimal transport problems on the line ( but actually also on the circle ) with concave costs .our algorithm complements the method suggested by mccann .mccann considers general real values of supply and demand and shows how the problem can be reduced to convex optimization somewhat similar to the simplex method in linear programming .our approach as presented here is developed for the case of unit masses and is closer to the purely combinatorial approach of , but extends it to a general concave cost function .the extension to integer masses will be presented in . +the method we propose is based on a class of local indicators , that allow to detect consecutive points that are matched in an optimal transport plan .thanks to the low number of evaluations of the cost function required to apply the indicators , we derive an algorithm that finds an optimal transport plan in operations in the worst case . in practice , the computational cost of this method appears to behave linearly with respect to .+ since the indicators apply locally , the algorithm can be massively parallelized and also allows to treat optimal transport problems on the circle . in this way, it extends the work of aggarwal _ et al . _ in which cost functions have a linear dependence in the distance .for , consider and two sets of points in that represent respectively demand and supply locations .the problem we consider in this note consists in minimizing the transport cost where is a permutation of .this permutation forms a _ transport plan_. + we focus on the case where the function involves a concave function as stated in the next definition .[ def : cost ] the _ cost function _ in ( [ eq:4 ] ) is defined on by with , where is a concave non - decreasing real - valued function of a real positive variable such that . some examples of such costs are given by with , and or is with .+ finally , we denote by the permutation associated to a given optimal transport plan between and : for all permutation of , this section , we present a way to build a particular partition of the set .+ consider two pairs of matched points and , say e.g. , .it is easy to prove that the following alternative holds : 1 .\cap [ p_{i'},q_{\sigma^\star(i')})]= \emptyset ] or \subset[p_i , q_{\sigma^\star(i)}] ] , and the mean of the number of evaluations of has been computed .the results are shown on fig .+ the best case consists in finding a negative indicator at each step , and the worst corresponds to the case where all the indicators are positive . these two cases require respectively and evaluations of . + [ c][t]in - line evaluations of [ c][b]number of pairs of points [ l][c ] : , [ l][c ] : , [ l][c ] : , [ l][c ] : worst case , [ l][c] [ c][c] [ c][c] [ c][c] [ c][c] [ c][c] [ c][c] [ c][c] [ c][c] [ c][c] is the slope of the log - log graphs.,title="fig : " ] 00 a. aggarwal , a. bar - noy , s. khuller , d. kravets , and b. schieber .efficient minimum cost matching using quadrangle inequality . in _foundations of computer science , 1992 .proceedings of 33rd annual symposium _ , pages 583592 , 1992 . 
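For small instances, the matching problem described above can be checked by brute force. The sketch below uses g(d) = sqrt(d) as one admissible concave cost and contrasts it with a convex cost, for which the optimal plan on the line is the monotone rearrangement; it is a sanity check only, not the indicator-based algorithm of the note.

```python
import itertools
import numpy as np

def transport_cost(p, q, perm, g=lambda d: np.sqrt(d)):
    """Total cost of matching demand p[i] to supply q[perm[i]] with cost g(|p - q|);
    g(d) = sqrt(d) is one admissible concave example."""
    return sum(g(abs(pi - q[j])) for pi, j in zip(p, perm))

def optimal_plan_bruteforce(p, q, g=lambda d: np.sqrt(d)):
    """Exhaustive search over permutations; only for tiny instances, unlike the
    indicator-based algorithm of the note."""
    best = min(itertools.permutations(range(len(q))),
               key=lambda perm: transport_cost(p, q, perm, g))
    return list(best)

rng = np.random.default_rng(2)
p = sorted(rng.uniform(0, 1, 6))
q = sorted(rng.uniform(0, 1, 6))
plan_concave = optimal_plan_bruteforce(p, q)                    # concave cost
plan_convex = optimal_plan_bruteforce(p, q, g=lambda d: d**2)   # convex cost, for contrast
print("concave-optimal plan:", plan_concave)
print("convex-optimal plan :", plan_convex)   # monotone (identity) rearrangement
```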
| In this note, we introduce a class of indicators that make it possible to compute efficiently the optimal transport plans associated with arbitrary distributions of demands and supplies on the line in the case where the cost function is concave. The computational cost of these indicators is small and independent of . A hierarchical use of them yields an efficient algorithm. |
this paper deals with the study of the asymptotic behavior of the solution of poisson equation in a bounded domain of ( ) consisting of two sub - domains separated by a thin layer of thickness ( destined to tend to 0 ) .the mesh of these thin geometries presents numerical instabilities that can severely damage the accuracy of the entire process of resolution . to overcome this difficulty, we adopt asymptotic methods to model the effect of the thin layer by problems with either appropriate boundary conditions when we consider a domain surrounded by a thin layer ( see for instance ) or , as in this paper , with suitable transmission conditions on the interface ( see for instance ) .although this type of conditions has been widely studied , there is still a lot to be understood concerning the effects of thin shell and their modelisation .our motivation comes from , in which the authors have worked on problems of electromagnetic and biological origins .we cite for example that of poignard ( * ? ? ?* chapter 2 ) .he considered a cell immersed in an ambient medium and studied the electric field in the transverse magnetic ( tm ) mode at mid - frequency and from which our problem was inspired .let us give now precise notations .let be a bounded domain of ( ) consisting of three smooth sub - domains : an open bounded subset with regular boundary , an exterior domain with disjoint regular boundaries and , and a membrane ( thin layer ) of thickness separating from (see fig .[ fig1 ] ) .define the piecewise regular function by {ll}\alpha_{e } & \text{if } x\in\omega_{e,\delta},\\ \alpha_{\delta } & \text{if } x\in\omega_{\delta},\\ \alpha_{i } & \text{if } x\in\omega_{i,\delta } , \end{array } \right.\ ] ] where and are strictly positive constants satisfying or which correspond to the case of mid - diffusion . for a given in we are interested in the unique solution in of the following diffusion problem [ 1]{ll}-div\left ( \alpha\nabla u_{\delta}\right ) = f & \text{in } \omega,\\ u_{\delta|\partial\omega}=0 & \text{on } \partial\omega , \end{array } \right . \label{1.01}\ ] ] with transmission conditions on the interfaces{ll}u_{d,\delta|\gamma_{\delta,2}}=u_{e,\delta|\gamma_{\delta,2 } } & \text{on } \gamma_{\delta,2},\\ \alpha_{\delta}\partial_{\mathbf{n}_{\delta,2}}u_{d,\delta|\gamma_{\delta,2}}=\alpha_{e}\partial_{\mathbf{n}_{\delta,2}}u_{e,\delta|\gamma_{\delta,2 } } & \text{on } \gamma_{\delta,2},\\ u_{i,\delta|\gamma_{\delta,1}}=u_{d,\delta|\gamma_{\delta,1 } } & \text{on } \gamma_{\delta,1},\\ \alpha_{i}\partial_{\mathbf{n}_{\delta,1}}u_{i,\delta|\gamma_{\delta,1}}=\alpha_{\delta}\partial_{\mathbf{n}_{\delta,1}}u_{d,\delta|\gamma_{\delta , 1 } } & \text{on } \gamma_{\delta,1 } , \end{array } \right . \label{1.02}\ ] ] where and denote the derivatives in the direction of the unit normal vectors and to and respectively ( see fig .[ fig1 ] ) . the main result of this paper is to approximate the solution of problem ( [ 1 ] ) by a solution of a problem involving poisson equation in with two sub - domains separated by an arbitrary interface between and ( see fig . [ fig2 ] and fig .[ fig3 ] ) , with transmission conditions of order two on , modeling the effect of the thin layer .however , it seems that the existence and uniqueness of the solution of this problem are not obvious therefore , we rewrite the problem into a pseudodifferential equation ( cf . 
) and show that in the case of mid - diffusion , we can find the appropriate position of the surface to solve this equation .the cases 3d and 2d are similar .we treat the three - dimensional case and the two dimensional one comes as a remark .the present paper is organized as follows . in section 2 ,we give the statement of the model problem considered . in section 3 ,we collect basic results of differential geometry of surfaces .sections 4 and 5 are devoted to the asymptotic analysis of our problem .we present , in section 4 , hierarchical variational equations suited to the construction of a formal asymptotic expansion up to any order , while section 5 focuses on the convergence of this ansatz . with the help of the asymptotic expansion of the solution , we model , in the last section , the effect of the thin layer by a problem with appropriate transmission conditions ., width=226 ] , width=132 ] we consider a parallel surface to and dividing into two thin layers and of thickness respectively and where and are nonnegative real numbers satisfying and such that and belong to a small neighborhood of ( see fig .[ fig2 ] and fig .[ fig3 ] ) .the term _ small _ neighborhood means that the constants and are not too close to or , in order to avoid having a layer too thin compared to the other because the following analysis does not lend itself to this case . under the aforementioned assumptions , we investigate in the solution of the following problem [ 1.1] with transmission conditions where denotes the derivative in the direction of the unit normal vector to ( outer for and inner for )the goal of this section is to define and to collect the main features of differential geometry ( see also ) in order to formulate our problem in a fixed domain ( independent of ) which is a key tool to determine the asymptotic expansion of the solution . in the sequel, greek indice takes the values 1 and 2 .let and we parameterize the thin shell by the manifold through the mapping defined by {rcl}\gamma\times i_{\delta,\beta } & \overset{\psi_{\beta}}{\rightarrow } & \omega_{\delta,\beta}\\ ( m,\eta_{\beta } ) & \rightarrow & x:=m+p_{\beta}\eta_{\beta}\mathbf{n}(m ) .\end{array } \right.\ ] ] as well - known , if the thickness of is small enough , is a -diffeomorphism of manifolds and it is also known ( * ? ? ?* remark 2.1 ) that the normal vector to can be identified to . 
to each function defined on , we associate the function defined on by{rl}\widetilde{v}_{\beta}(m,\eta_{\beta } ) & : = v_{\beta}(x),\\ x & = \psi_{\beta}\left ( m,\eta_{\beta}\right ) , \end{array } \right.\ ] ] then , we have where and are respectively the surfacic gradient of at and the curvature operator of at point the volume element on the thin shell is given by now , we introduce the scaling and the intervals and such that the -diffeomorphism defined by {rcl}\omega^{\beta}:=\gamma\times i_{\beta } & \overset{\phi_{\beta}}{\rightarrow } & \omega_{\delta,\beta}\\ ( m , s_{\beta } ) & \rightarrow & x:=m+\delta p_{\beta}s_{\beta}\mathbf{n}(m ) , \end{array } \right.\ ] ] parameterizes the thin shell to any function defined on , we associate the function } ] is defined by } \left ( u^{[\beta]},v^{[\beta]}\right ) & : = p_{\beta}\int_{\omega^{\beta}}j_{\delta,\beta}^{-2}\nabla_{\gamma}u^{[\beta]}.\nabla_{\gamma}v^{[\beta]}\det j_{\delta,\beta}\ d\gamma ds_{\beta}\nonumber\\ & + p_{\beta}^{-1}\delta^{-2}\int_{\omega^{\beta}}\partial_{s_{\beta}}u^{[\beta]}\partial_{s_{\beta}}v^{[\beta]}\det j_{\delta,\beta}\ d\gamma ds_{\beta},\label{10}\ ] ] for every } ] in in the spirit of , we will consider two asymptotic expansions .exterior expansions corresponding to the asymptotic expansion of restricted to and to and characterized by the ansatz where the terms and are independent of and defined on and on which are respectively the limits of and for they fulfill{ll}-div\left ( \alpha_{i}\nabla u_{i , n}\right ) = \delta_{0,n}f_{|\omega_{i } } & \text{in } \omega_{i}\text{,}\\ -div\left ( \alpha_{e}\nabla u_{e ,n}\right ) = \delta_{0,n}f_{|\omega_{e } } & \text{in } \omega_{e}\text{,}\\ u_{e , n|\partial\omega}=0 & \text{on}\ \partial\omega , \end{array } \right . 
\label{15}\ ] ] where indicates the kronecker symbol , and an interior expansion corresponding to the asymptotic expansion of written in a fixed domain and defined by the ansatz } = u_{0}^{\left [ \beta\right ] } + \delta u_{1}^{\left [ \beta\right ] } + \cdots,\text { in } \omega^{\beta},\label{16}\ ] ] where the terms } , \n\in\mathbb{n} ] admits the expansion } \left ( .,.\right ) & = \delta^{-2}a_{0,2}^{\left [ \beta\right ] } + \delta^{-1}a_{1,2}^{\left [ \beta\right ] } + \left ( a_{2,2}^{\left [ \beta\right ] } + a_{0,1}^{\left [ \beta\right ] } \right ) + \delta a_{1,1}^{\left [ \beta\right ] } + \cdots\nonumber\\ & + \delta^{n-1}a_{n-1,1}^{\left [ \beta\right ] } + \delta^{n}r_{n}^{\left [ \beta\right ] } \left ( \delta;.,.\right ) , \label{21}\ ] ] where the forms } ] is the remainder of expansion ( [ 21 ] ) and is expressed by } ( \delta;u^{\left [ \beta\right ] } , v^{\left [ \beta\right ] } ) : = \int_{\omega^{\beta}}\left ( b_{n,\delta}+2\mathcal{h}b_{n-1,\delta}+\mathcal{k}b_{n-2,\delta}\right ) s_{\beta}^{n}\nabla_{\gamma } u^{\left [ \beta\right ] } .\nabla_{\gamma}v^{\left [ \beta\right ] } d\gamma ds_{\beta},\ ] ] with{l}\left ( -\mathcal{r}\right ) ^{n}\left ( nj_{\delta,\beta}^{-1}+j_{\delta , \beta}^{-2}\right ) \text { if}\ n\geq0,\\ j_{\delta,\beta}^{-2}\text { otherwise.}\end{array } \right.\ ] ] in the two - dimensional case , with the help of ( [ 6 ] ) , expansion ( [ 21 ] ) turns into } \left ( .,.\right ) = \delta^{-2}a_{0,2}^{\left [ \beta\right ] } + \delta^{-1}a_{1,2}^{\left [ \beta\right ] } + a_{0,1}^{\left [ \beta\right ] } + \delta a_{1,1}^{\left [ \beta\right ] } + \cdots+\delta^{n-1}a_{n-1,1}^{\left [ \beta\right ] } + \delta^{n}r_{n}^{\left [ \beta\right ] } \left ( \delta;.,.\right ) , \ ] ] with } \left ( u^{\left [ \beta\right ] } , v^{\left [ \beta\right ] } \right ) : = \int_{\omega^{\beta}}p_{\beta}^{n-1}\left ( s_{\beta}\mathcal{r}\right ) ^{n}\partial_{s_{\beta}}u^{\left [ \beta\right ] } \partial_{s_{\beta}}v^{\left [ \beta\right ] } \ d\gamma ds_{\beta},\\ a_{n,1}^{\left [ \beta\right ] } \left ( u^{\left [ \beta\right ] } , v^{\left [ \beta\right ] } \right ) : = \int_{\omega^{\beta}}p_{\beta}^{n+1}\left ( -s_{\beta}\mathcal{r}\right ) ^{n}\partial_{t}u^{\left [ \beta\right ] } \partial_{t}v^{\left [ \beta\right ] }\ d\gamma ds_{\beta},\\ r_{n}^{\left [ \beta\right ] } \left ( \delta;u^{\left [ \beta\right ] } , v^{\left [ \beta\right ] } \right ) : = \int_{\omega^{\beta}}j_{\delta,\beta } ^{-1}\left ( -s_{\beta}\mathcal{r}\right ) ^{n}\partial_{t}u^{\left [ \beta\right ] } \partial_{t}v^{\left [ \beta\right ] } \ d\gamma ds_{\beta}.\end{gathered}\ ] ] inserting expansion ( [ 21 ] ) in ( [ 18.3 ] ) and matching the same powers of we obtain the following variational equations , which hold for all } , v^{\left [ 2\right ] } \right ) ] be a given function in and let } ] is valued in the space of vectorial fields tangent to and also } \in l^{2}(\omega^{\beta}). 
] of the variational equation } v^{\left [ \beta\right ] } : = \int _ { \omega^{\beta}}h^{\left [ \beta\right ] } \partial_{s_{\beta}}v^{\left [ \beta\right ] }\ d\gamma ds_{\beta}+\int_{\omega^{\beta}}k^{\left [ \beta\right ] } .\nabla_{\gamma}v^{\left [ \beta\right ] } \d\gamma ds_{\beta } = 0;\\ \forall v^{\left [ \beta\right ] } \in h^{1}(\omega^{\beta}),\ v^{\left [ \beta\right ] } ( .,0)=0,\end{gathered}\ ] ] is explicitly given by } \left ( m , s_{\beta}\right ) = \int_{s_{\beta}}^{(-1)^{\beta}}div_{\gamma}k^{\left [ \beta\right ] } \left ( m,\lambda \right ) \ d\lambda.\ ] ] moreover , if } ( .,0)\neq0, ] using ( [ 18 ] ) , ( [ 17 ] ) and ( [ 1.1.8 ] ) , we obtain } ( m , s_{1})=u_{0}^{\left [ 2\right ] } ( m , s_{2})=u_{e,0|\gamma},\ m\in\gamma.\label{27}\ ] ] the choice of such that } = 0 ] in ( [ 24 ] ) gives } ( \alpha_{\delta}u_{1}^{\left [ 2\right ] } -\alpha_{e}u_{e,0},v^{\left [ 2\right ] } ) = 0.\ ] ] we obtain } = \alpha_{e}\partial_{\mathbf{n}}u_{e,0|\gamma}.\label{29}\ ] ] therefore } ( m,0)d\gamma=\alpha_{e}\int_{\gamma}\partial_{\mathbf{n}}u_{e,0|\gamma } v^{\left [ 2\right ] } ( m,0)d\gamma.\ ] ] as } ( m,0)=v^{\left [ 2\right ] } ( m,0) ] in ( [ 25 ] ) gives } \left ( \alpha_{\delta}u_{2}^{\left [ 1\right ] } -\alpha_{i}u_{i,2},v^{\left [ 1\right ] } \right ) + a_{0,1}^{\left [ 1\right ] } \left ( \alpha_{\delta}u_{0}^{\left [ 1\right ] } -\alpha_{i}u_{i,0},v^{\left [ 1\right ] } \right ) = 0.\ ] ] we apply lemma [ lem1 ] with } & = p_{1}^{-1}\alpha_{\delta}\ \partial_{s_{1}}u_{2}^{\left [ 1\right ] } -p_{1}^{-1}\alpha_{i}u_{i,2}=p_{1}^{-1}\alpha_{\delta}\ \partial_{s_{1}}u_{2}^{\left [ 1\right ] } -\alpha_{i}\partial_{\mathbf{n}}u_{i,1|\gamma}-s_{1}p_{1}\partial_{\mathbf{n}}^{2}u_{i,0|\gamma},\\ k^{\left [ 1\right ] } & = p_{1}\nabla_{\gamma}\left ( p_{1}^{-1}u_{0}^{\left [ 1\right ] } -\alpha_{i}u_{i,0}\right ) = p_{1}\left ( \alpha_{\delta}-\alpha_{i}\right ) \nabla_{\gamma}u_{i,0|\gamma},\end{aligned}\ ] ] we find } ( m , s_{1})-\alpha_{i}\partial_{\mathbf{n}}u_{i,1|\gamma}-s_{1}p_{1}\partial_{\mathbf{n}}^{2}u_{i,0|\gamma}=-\left ( s_{1}+1\right ) p_{1}\left ( \alpha_{\delta}-\alpha_{i}\right ) \delta_{\gamma}u_{i,0|\gamma}.\ ] ] morover , for all } ] in ( [ 25 ] ) gives } \left ( \alpha_{\delta}u_{2}^{\left [ 2\right ] } -\alpha_{e}u_{e,2},v^{\left [ 2\right ] } \right ) + a_{0,1}^{\left [ 2\right ] } \left ( \alpha_{\delta}u_{0}^{\left [ 2\right ] } -\alpha_{e}u_{e,0},v^{\left [ 2\right ] } \right ) = 0.\ ] ] we apply lemma [ lem1 ] with } & = p_{2}^{-1}\alpha_{\delta}\ \partial_{s_{2}}u_{2}^{\left [ 2\right ] } -p_{2}^{-1}\alpha_{e}u_{e,2}=p_{2}^{-1}\alpha_{\delta}\ \partial_{s_{2}}u_{2}^{\left [ 2\right ] } -\alpha_{e}\partial_{\mathbf{n}}u_{e,1|\gamma}-s_{2}p_{2}\partial_{\mathbf{n}}^{2}u_{e,0|\gamma},\\ k^{\left [ 2\right ] } & = p_{2}\nabla_{\gamma}\left ( \alpha_{\delta}u_{0}^{\left [ 2\right ] } -\alpha_{e}u_{e,0}\right ) = p_{2}\left ( \alpha_{\delta}-\alpha_{e}\right ) \nabla_{\gamma}u_{e,0|\gamma},\end{aligned}\ ] ] we find } ( m , s_{2})-\alpha_{e}\partial_{\mathbf{n}}u_{e,1|\gamma}-s_{2}p_{2}\partial_{\mathbf{n}}^{2}u_{e,0|\gamma}=\left ( 1-s_{2}\right ) \left ( \alpha_{\delta}-\alpha_{e}\right ) \delta_{\gamma}u_{e,0|\gamma}.\ ] ] morover , for all } ] , it follows from ( [ 15 ] ) , ( [ 30a ] ) , ( [ 31 ] ) and theorem [ theo1 ] that is the unique solution of the following problem{ll}-div\left ( \alpha_{i}\nabla u_{i,1}\right ) = 0 & \text{in } \omega_{i},\\ -div\left ( \alpha_{e}\nabla u_{e,1}\right ) = 0 & \text{in } \omega_{e},\\ 
u_{e,1|\partial\omega}=0 & \text{on } \partial\omega , \end{array } \right.\ ] ] with transmission conditions on or \partial_{\mathbf{n}}u_{i,0|\gamma},\\ \alpha_{i}\partial_{\mathbf{n}}u_{i,1|\gamma}-\alpha_{e}\partial_{\mathbf{n}}u_{e,1|\gamma } & = \left [ p_{1}(\alpha_{\delta}-\alpha_{i})+p_{2}(\alpha_{\delta}-\alpha_{e})\right ] \delta_{\gamma}u_{i,0|\gamma}.\end{aligned}\ ] ]the process described in the previous section can be continued up to any order provided that the data are sufficiently regular .we can also estimate the error made by truncating the series after a finite number of terms .let be in we set{c}u_{d_{1},\delta}^{\left ( n\right ) } : = \sum\limits_{j=0}^{n}\delta^{j}u_{d_{1},j}\text { in } \omega_{\delta,1,}\\ u_{d_{2},\delta}^{\left ( n\right ) } : = \sum\limits_{j=0}^{n}\delta^{j}u_{d_{2},j}\text { in } \omega_{\delta,2 } , \end{array } \right.\ ] ] where } ( m , s_{\beta}); ] are solutions of equations ( [ 23])-([26 ] ) , we obtain } \left ( u_{n+1}^{\left [ \beta\right ] } , v^{\left [ \beta\right ] } \right ) -\left ( a_{2,2}^{\left [ \beta\right ] } + a_{0,1}^{\left [ \beta\right ] } \right ) \left ( u_{n}^{\left [ \beta\right ] } , v^{\left [ \beta\right ] } \right ) \right . \\ & -a_{1,1}^{\left [ \beta\right ] } \left ( u_{n-1}^{\left [ \beta\right ] } + \delta u_{n}^{\left [ \beta\right ] } , v^{\left [ \beta\right ] } \right ) -a_{2,1}^{\left [ \beta\right ] } \left ( u_{n-2}^{\left [ \beta\right ] } + \delta u_{n-1}^{\left [ \beta\right ] } + \delta^{2}u_{n}^{\left [ \beta\right ] } , v^{\left [ \beta\right ] } \right ) -\cdots\\ & \left .-a_{n-1,1}^{\left [ \beta\right ] } \left ( u_{1}^{\left [ \beta\right ] } + \cdots+\delta^{n-1}u_{n-1}^{\left [ \beta\right ] } , v^{\left [ \beta\right ] } \right ) -r_{n}^{\left [ \beta\right ] } \left ( \delta ; u_{1}^{\left [ \beta\right ] } + \cdots+\delta^{n}u_{n}^{\left [ \beta\right ] } , v^{\left [ \beta\right ] } \right ) \right\ } \\ & + \alpha_{i}\int_{\omega_{i,\delta}}\nabla r_{i , n}.\nabla v_{i}\ d\omega_{i,\delta}+\alpha_{e}\int_{\omega_{e,\delta}}\nabla r_{e , n}.\nabla v_{e}\ d\omega_{e,\delta}-\alpha_{\delta}\int_{\omega_{\delta}}\nabla \mathcal{p}r.\nabla v_{d}\ d\omega_{\delta}.\end{aligned}\ ] ] by the estimates based on the explicit expressions of the bilinear form } ( .,.)$ ] and those of propositions [ prop1 ] , we have } \right\vert _ { l^{2}(\omega^{\beta})}+\delta^{-1}\left\vert \partial_ { s_{\beta}}v^{\left [ \beta\right ] } \right\vert _ { l^{2}(\omega^{\beta})}+\left\vert v^{\left [ \beta\right ] } \right\vert _ { l^{2}(\omega^{\beta})}\right ) \\ & + c\delta^{n-1/2}\left ( \left\vert v_{i}\right\vert _ { h^{1}(\omega _ { i,\delta})}+\sum_{\beta=1}^{2}\left\vert v_{\beta}\right\vert _ { h^{1}(\omega_{\delta,\beta})}+\left\vert v_{e}\right\vert _ { h^{1}(\omega_{e,\delta } ) } \right ) .\end{aligned}\ ] ] since is small enough , we have , } \right\vert _ { l^{2}(\omega^{\beta})}+\delta ^{\frac{-1}{2}}\left\vert \partial_{s_{\beta}}v^{\left [ \beta\right ] } \right\vert _ { l^{2}(\omega^{\beta})}+\delta^{\frac{1}{2}}\left\vert v^{\left [ \beta\right ] } \right\vert _ { l^{2}(\omega^{\beta})}\right ) \\ & + c\delta^{n-1/2}\left ( \left\vert v_{i}\right\vert_ { h^{1}(\omega _ { i,\delta})}+\sum_{\beta=1}^{2}\left\vert v_{\beta}\right\vert _ { h^{1}(\omega_{\delta,\beta})}+\left\vert v_{e}\right\vert _ { h^{1}(\omega_{e,\delta } ) } \right ) .\end{aligned}\ ] ] therefore since is in we set in ( [ 36.1 ] ) we obtain thanks to proposition [ prop1 ] , we find moreover , since and are and for 
every integer , we have and , therefore ( see ) this completes the proof .this section is devoted to the approximation of by a solution of a problem modelling the effect of the thin layer with a precision of order two in we truncate the series defining the asymptotic expansions , keeping only the first two terms } ( m , s_{1})+\delta u_{1}^{\left [ 1\right ] } ( m , s_{1}),\ \forall x=\phi_{1}(m , s_{1})\in\omega_{\delta,1},\\ u_{d_{2},\delta}(x ) & \simeq u_{d_{2},\delta}^{\left ( 1\right ) } ( m , s_{2}):=u_{0}^{\left [ 2\right ] } ( m , s_{2})+\delta u_{1}^{\left [ 2\right ] } ( m , s_{2}),\ \forall x=\phi_{2}(m , s_{2})\in\omega_{\delta,2},\end{aligned}\ ] ] where is the solution of{ll}-div\left ( \alpha_{i}\nabla u_{i,\delta}^{\left ( 1\right ) }\right ) = f_{|\omega_{i } } & \text{in } \omega_{i},\\ -div\left ( \alpha_{e}\nabla u_{e,\delta}^{\left ( 1\right ) } \right ) = f_{|\omega_{e } } & \text{in } \omega_{e},\\ u_{i,\delta|\gamma}^{\left ( 1\right ) } -u_{e,\delta|\gamma}^{\left ( 1\right ) } = \delta\mathcal{a}\left ( u_{i,\delta}^{\left ( 1\right ) } \right ) -\delta^{2}\xi_{\delta } & \text{on } \gamma,\\ \alpha_{i}\partial_{\mathbf{n}}u_{i,\delta|\gamma}^{\left ( 1\right ) } -\alpha_{e}\partial_{\mathbf{n}}u_{e,\delta|\gamma}^{\left ( 1\right ) } = \delta\mathcal{b}\left ( u_{i,\delta}^{\left ( 1\right ) } \right ) -\delta^{2}\rho_{\delta } & \text{on}\ \gamma,\\ u_{\delta|\partial\omega}^{\left ( 1\right ) } = 0 & \text{on } \partial\omega , \end{array } \right .\label{41}\ ] ] with \left ( \partial_{\mathbf{n}}u_{|\gamma}\right ) , \\ \mathcal{b}\left ( u\right ) & : = \left [ p_{1}(\alpha_{\delta}-\alpha _ { i})+p_{2}(\alpha_{\delta}-\alpha_{e})\right ] \delta_{\gamma}u_{|\gamma},\\ \xi_{\delta } & : = \left [ p_{1}(1-\alpha_{i}\alpha_{\delta}^{-1})+p_{2}(\alpha_{i}\alpha_{e}^{-1}-\alpha_{i}\alpha_{\delta}^{-1})\right ] \partial_{\mathbf{n}}u_{i,1|\gamma},\\ \rho_{\delta } & : = \left [ p_{1}(\alpha_{\delta}-\alpha_{i})+p_{2}(\alpha_{\delta}-\alpha_{e})\right ] \delta_{\gamma}u_{i,1|\gamma}.\end{aligned}\ ] ] let be the solution of ( [ 41 ] ) with and .we obtain a problem with transmission conditions of order equal to that of the differential operator .the new transmission conditions on are defined by {l}u_{i,\delta|\gamma}^{ap}-u_{e,\delta|\gamma}^{ap}=\delta\left [ p_{1}(1-\alpha_{i}\alpha_{\delta}^{-1})+p_{2}(\alpha_{i}\alpha_{e}^{-1}-\alpha _ { i}\alpha_{\delta}^{-1})\right ] \partial_{\mathbf{n}}u_{i,\delta|\gamma } ^{ap},\\ \alpha_{i}\partial_{\mathbf{n}}u_{i,\delta|\gamma}^{ap}-\alpha_{e}\partial_{\mathbf{n}}u_{e,\delta|\gamma}^{ap}=\delta\left [ p_{1}(\alpha_{\delta}-\alpha_{i})+p_{2}(\alpha_{\delta}-\alpha_{e})\right ] \delta_{\gamma}u_{i,\delta|\gamma}^{ap}. \end{array } \right .\label{42}\ ] ] however , the bilinear form associated to problem is neither positive nor negative .then the existence and uniqueness of the solution are not ensured by the lax milgram lemma .therefore , we reformulate problem into a nonlocal equation on the interface ( cf .a direct use of transmission conditions ( [ 42 ] ) leads to an operator which is not self - adjoint .so , we choose the position of in such a way that the jump of the trace of the solution on is null .we put we obtain which is valid only when or this corresponds to the case of mid - diffusion . 
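The computation suppressed in the garbled passage above can be reconstructed from the first-order jump condition quoted earlier: requiring the zeroth-order trace jump on the interface to vanish, together with p_1 + p_2 = 1, fixes the position of the interface. The LaTeX block below is a reconstruction under that reading and should be checked against the original.

```latex
p_{1}\Bigl(1-\tfrac{\alpha_{i}}{\alpha_{\delta}}\Bigr)
  + p_{2}\Bigl(\tfrac{\alpha_{i}}{\alpha_{e}}-\tfrac{\alpha_{i}}{\alpha_{\delta}}\Bigr)=0,
\qquad p_{1}+p_{2}=1
\;\Longrightarrow\;
p_{1}=\frac{\alpha_{i}\,(\alpha_{e}-\alpha_{\delta})}{\alpha_{\delta}\,(\alpha_{e}-\alpha_{i})},
\qquad
p_{2}=\frac{\alpha_{e}\,(\alpha_{\delta}-\alpha_{i})}{\alpha_{\delta}\,(\alpha_{e}-\alpha_{i})}.
% Both fractions lie in (0,1) precisely when
% \alpha_{i}<\alpha_{\delta}<\alpha_{e} or \alpha_{e}<\alpha_{\delta}<\alpha_{i}
% (the mid-diffusion case); p_{1} matches the coefficient
% \alpha_{i}(\alpha_{e}-\alpha_{\delta})/[\alpha_{\delta}(\alpha_{e}-\alpha_{i})]
% that multiplies \delta in the expansion written out below.
```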
transmission conditions ( [ 42 ] )become {l}u_{i,\delta|\gamma}^{ap}-u_{e,\delta|\gamma}^{ap}=0,\\ \alpha_{i}\partial_{\mathbf{n}}u_{i,\delta|\gamma}^{ap}-\alpha_{e}\partial_{\mathbf{n}}u_{e,\delta|\gamma}^{ap}=\delta\dfrac{\left ( \alpha _ { e}-\alpha_{\delta}\right ) \left ( \alpha_{i}-\alpha_{\delta}\right ) } { \alpha_{\delta}}\delta_{\gamma}u_{i,\delta}^{ap}. \end{array } \right.\ ] ] after , we remove the right - hand side of problem by a standard lift : let be in such that then the function solves the following problem{ll}-div\left ( \alpha_{i}\nabla\psi_{i}\right ) = 0 & \text{in } \omega_{i},\\ -div\left ( \alpha_{e}\nabla\psi_{e}\right ) = 0 & \text{in } \omega_{e},\\ \psi_{i|\gamma}-\psi_{e|\gamma}=0 & \text{on } \gamma,\\ \alpha_{i}\partial_{\mathbf{n}}\psi_{i|\gamma}-\alpha_{e}\partial_{\mathbf{n}}\psi_{e|\gamma}-\delta\dfrac{\left ( \alpha_{e}-\alpha_{\delta}\right ) \left ( \alpha_{i}-\alpha_{\delta}\right ) } { \alpha_{\delta}}\delta_{\gamma } \psi_{i|\gamma}=g & \text{on}\ \gamma,\\ \psi_{e|\partial\omega}=0 & \text{on } \partial\omega , \end{array } \right.\ ] ] where we introduce the steklov - poicar operators and ( called also dirichlet - to - neumann operators ) defined from onto by where is the solution of the boundary value problem{ll}-\delta u_{i}=0 & \text{in } \omega_{i},\\ u_{i|\gamma}=\varphi & \text{on } \gamma , \end{array } \right.\ ] ] and by where is the solution of the boundary value problem {ll}-\delta u_{e}=0 & \text{in } \omega_{e},\\ u_{e|\gamma}=\psi & \text{on } \gamma,\\ u_{e|\partial\omega}=0 & \text{on } \partial\omega . \end{array } \right.\ ] ] then is equivalent to the boundary equation where is the trace of on the surface we are now in position to state the existence and uniqueness theorem , which proof is similar to that of theorem 2.5 in .[ theo3]the operator is an elliptic self - adjoint semi - bounded from below pseudodifferential operator of order 2 .moreover , there exists series growing to infinity such that for any with , we have the following : 1 . if then equation admits a unique solution in which , in addition , belongs to 2 . if then there is either no solution or a complete affine finite dimensional space of solutions .finally , we give an error estimate between the solution of ( [ 1 ] ) and the approximate solution let us define on , ap}(m , s_{1}):=u_{i,\delta|\gamma}^{ap}+\delta\frac{\alpha _ { i}\left ( \alpha_{e}-\alpha_{\delta}\right ) } { \alpha_{\delta}\left ( \alpha_{e}-\alpha_{i}\right ) } \left [ ( s_{1}+1)\alpha_{i}\alpha_{\delta}^{-1}-1\right ] \partial_{\mathbf{n}}u_{i,\delta|\gamma}^{ap},\\ u_{d_{2},\delta}^{ap}(x ) & : = u_{d_{2},\delta}^{\left [ 2\right ] , ap}\left ( m , s_{2}\right ) : = u_{e,\delta|\gamma}^{ap}+\delta\frac{\alpha_{e}\left ( \alpha_{\delta}-\alpha_{i}\right ) } { \alpha_{\delta}\left ( \alpha_{e}-\alpha_{i}\right ) } \left [ ( s_{2}-1)\alpha_{e}\alpha_{\delta}^{-1}+1\right ] \partial_{\mathbf{n}}u_{e,\delta|\gamma}^{ap},\end{aligned}\ ] ] and let us denote by the approximate solution defined on {cc}u_{i,\delta}^{ap } & \text{in } \omega_{i,\delta},\\ u_{d_{\beta},\delta}^{ap } & \text{in } \omega_{\delta,\beta},\\ u_{e,\delta}^{ap } & \text{in } \omega_{e,\delta}. \end{array } \right.\ ] ] we can now formulate our main result . 
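Collecting the pieces whose LaTeX source survives above, the order-two effective model (the PDEs and boundary condition of problem (41) with the O(delta^2) data dropped, and the interface placed as in the mid-diffusion choice of p_1, p_2) reads as follows; this is only a transcription of conditions already given in the text.

```latex
\begin{cases}
-\operatorname{div}\bigl(\alpha_{i}\nabla u^{ap}_{i,\delta}\bigr)=f & \text{in }\Omega_{i},\\[2pt]
-\operatorname{div}\bigl(\alpha_{e}\nabla u^{ap}_{e,\delta}\bigr)=f & \text{in }\Omega_{e},\\[2pt]
u^{ap}_{i,\delta}-u^{ap}_{e,\delta}=0 & \text{on }\Gamma,\\[2pt]
\alpha_{i}\partial_{\mathbf n}u^{ap}_{i,\delta}-\alpha_{e}\partial_{\mathbf n}u^{ap}_{e,\delta}
  =\delta\,\dfrac{(\alpha_{e}-\alpha_{\delta})(\alpha_{i}-\alpha_{\delta})}{\alpha_{\delta}}\,
   \Delta_{\Gamma}u^{ap}_{i,\delta} & \text{on }\Gamma,\\[2pt]
u^{ap}_{e,\delta}=0 & \text{on }\partial\Omega .
\end{cases}
```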
according to the convergence theorem, it is sufficient to estimate the error .therefore , as in , we perform an asymptotic expansion for .the ansatz where and , gives the recurrence relations{ll}-div\left( \alpha_{i}\nabla w_{i , j}\right ) = f_{|\omega_{i}}\delta_{j,0 } & \text{in } \omega_{i},\\ -div\left ( \alpha_{e}\nabla w_{e , j}\right ) = f_{|\omega_{e}}\delta_{j,0 } & \text{in } \omega_{e},\\ w_{i , j|\gamma}-w_{e , j|\gamma}=0 & \text{on } \gamma,\\ \alpha_{i}\partial_{\mathbf{n}}w_{i , j|\gamma}-\alpha_{e}\partial_{\mathbf{n}}w_{e , j|\gamma}=\dfrac{\left ( \alpha_{e}-\alpha_{\delta}\right ) \left ( \alpha_{i}-\alpha_{\delta}\right ) } { \alpha_{\delta}}\delta_{\gamma } w_{i , j-1|\gamma } & \text{on}\ \gamma,\\ w_{e , j|\partial\omega}=0 & \text{on } \partial\omega , \end{array } \right.\ ] ] with the convention that a simple calculation shows that the two first terms and coincide with the two first terms of ( [ 13 ] ) and ( [ 14 ] ) . furthermore , since and are each term of ( [ 46 ] ) is bounded in .then , by setting there exists , such as which gives the desired result .99 s. agmon , a. douglis and l. nirenberg , estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions .ii , comm . pure appl .17 ( 1964 ) 35 - 92 .a. bendali and k. lemrabet , asymptotic analysis of the scattering of a time - harmonic electromagnetic wave by a perfectly conducting metal coated with a thin dielectric shell , asymptot .57 ( 2008 ) 199 - 227 . c. poignard , about the transmembrane voltage potential of a biological cell in time - harmonic regime , in : mathematical methods for imaging and inverse problems , volume 26 of esaim proc .edp sci . les ulis ( 2009 ) 162 - 179 .c. poignard , mthodes asymptotiques pour le calcul des champs lectromagntiques dans des milieux couches minces . application aux cellules biologiques , thse de doctorat , universit claude bernard - lyon 1 , 2006 . | this work consists in the asymptotic analysis of the solution of poisson equation in a bounded domain of with a thin layer . we use a method based on hierarchical variational equations to derive asymptotic expansion of the solution with respect to the thickness of the thin layer . we determine the first two terms of the expansion and prove the error estimate made by truncating the expansion after a finite number of terms . next , using the first two terms of the asymptotic expansion , we show that we can model the effect of the thin layer by a problem with transmission conditions of order two . * keywords : * asymptotic analysis ; asymptotic expansion ; approximate transmission conditions ; thin layer ; poisson equation . |