diff --git "a/laysummary/train_data.json" "b/laysummary/train_data.json" new file mode 100644--- /dev/null +++ "b/laysummary/train_data.json" @@ -0,0 +1,127 @@ +{"Unnamed: 0":1406,"id":"journal.pcbi.1006637","year":2018,"title":"Dynamical anchoring of distant arrhythmia sources by fibrotic regions via restructuring of the activation pattern","sections":"Many clinically relevant cardiac arrhythmias are conjectured to be organized by rotors ., A rotor is an extension of the concept of a reentrant source of excitation into two or three dimensions with an area of functional block in its center , referred to as the core ., Rapid and complex reentry arrhythmias such as atrial fibrillation ( AF ) and ventricular fibrillation ( VF ) are thought to be driven by single or multiple rotors ., A clinical study by Narayan et al . 1 indicated that localized rotors were present in 68% of cases of sustained AF ., Rotors ( phase singularities ) were also found in VF induced by burst pacing in patients undergoing cardiac surgery 2 , 3 and in VF induced in patients undergoing ablation procedures for ventricular arrhythmias 4 ., Intramural rotors were also reported in early phase of VF in the human Langendorff perfused hearts 5 , 6 ., It was also demonstrated that in most cases rotors originate and stabilize in specific locations 4\u20138 ., A main mechanism of rotor stabilization at a particular site in cardiac tissue was proposed in the seminal paper from the group of Jalife 9 ., It was observed that rotors can anchor and exhibit a stable rotation around small arteries or bands of connective tissue ., Later , it was experimentally demonstrated that rotors in atrial fibrillation in a sheep heart can anchor in regions of large spatial gradients in wall thickness 10 ., A recent study of AF in the right atrium of the explanted human heart 11 revealed that rotors were anchored by 3D micro-anatomic tracks formed by atrial pectinate muscles and characterized by increased interstitial 
fibrosis ., The relation of fibrosis and anchoring in atrial fibrillation was also demonstrated in several other experimental and numerical studies 8 , 11\u201314 ., Initiation and anchoring of rotors in regions with increased intramural fibrosis and fibrotic scars was also observed in ventricles 5 , 7 , 15 ., One of the reasons for rotors to be present at the fibrotic scar locations is that the rotors can be initiated at the scars ( see e . g . 7 , 15 ) and therefore they can easily anchor at the surrounding scar tissue ., However , rotors can also be generated due to different mechanisms , such as triggered activity 16 , heterogeneity in the refractory period 16 , 17 , local neurotransmitter release 18 , 19 etc ., What will be the effect of the presence of the scar on rotors in that situation , do fibrotic areas ( scars ) actively affect rotor dynamics even if they are initially located at some distance from them ?, In view of the multiple observations on correlation of anchoring sites of the rotors with fibrotic tissue this question translates to the following: is this anchoring just a passive probabilistic process , or do fibrotic areas ( scars ) actively affect the rotor dynamics leading to this anchoring ?, Answering these questions in experimental and clinical research is challenging as it requires systematic reproducible studies of rotors in a controlled environment with various types of anchoring sites ., Therefore alternative methods , such as realistic computer modeling of the anchoring phenomenon , which has been extremely helpful in prior studies , are of great interest ., The aim of this study is therefore to investigate the processes leading to anchoring of rotors to fibrotic areas ., Our hypothesis is that a fibrotic scar actively affects the rotor dynamics leading to its anchoring ., To show that , we first performed a generic in-silico study on rotor dynamics in conditions where the rotor was initiated at different distances from fibrotic scars 
with different properties ., We found that in most cases , scars actively affect the rotor dynamics via a dynamical reorganization of the excitation pattern leading to the anchoring of rotors ., This turned out to be a robust process working for rotors located even at distances more than 10 cm from the scar region ., We then confirmed this phenomenon in a patient-specific model of the left ventricle from a patient with remote myocardial infarction ( MI ) and compared the properties of this process with clinical ECG recordings obtained during induction of a ventricular arrhythmia ., Our anatomical model is based on an individual heart of a post-MI patient reconstructed from late gadolinium enhanced ( LGE ) magnetic resonance imaging ( MRI ) was described in detail previously 20 ., Briefly , a 1 . 5T Gyroscan ACS-NT\/Intera MR system ( Philips Medical Systems , Best , the Netherlands ) system was used with standardized cardiac MR imaging protocol ., The contrast \u2013gadolinium ( Magnevist , Schering , Berlin , Germany ) ( 0 . 15 mmol\/kg ) \u2013 was injected 15 min before acquisition of the LGE sequences ., Images were acquired with 24 levels in short-axis view after 600\u2013700 ms of the R-wave on the ECG within 1 or 2 breath holds ., The in-plane image resolution is 1 mm and through-plane image resolution is 5 mm ., Segmentation of the contours for the endocardium and the epicardium was performed semi-automatically on the short-axis views using the MASS software ( Research version 2014 , Leiden University Medical Centre , Leiden , the Netherlands ) ., The myocardial scar was identified based on signal intensity ( SI ) values using a validated algorithm as described by Roes et al . 
21. In accordance with the algorithm, the core necrotic scar is defined as the region with SI > 41% of the maximal SI. Regions with lower SI values were considered border-zone areas. In these regions, we assigned the fibrosis percentage as the normalized value of the SI, as in Vigmond et al. 22. In the current paper, fibrosis was introduced by generating a random number between 0 and 1 for each grid point; if the random number was less than the normalized SI at the corresponding pixel, the grid point was treated as a fibroblast. Currently there is no consensus on how SI values should be used for clinical assessment of myocardial fibrosis, and various methods have been reported to produce significantly different results 23. However, the method of Vigmond et al. properly describes the location of the necrotic scar region in our model, as for fibrosis percentages above 41% we observe a complete block of propagation inside the scar. This means that all tissue with a fibrosis level higher than 41% behaves like necrotic scar. The approach and the 2D model were described in detail in previous work 24-26. Briefly, for the ventricular cardiomyocyte we used the ten Tusscher-Panfilov (TP06) model 27, 28, and the cardiac tissue was modeled as a rectangular grid of 1024 x 512 nodes. Each node represented a cell occupying an area of 250 x 250 um2. The equation for the transmembrane voltage is

C_m dV_ik/dt = SUM_{a,b in {-1,+1}} eta_ik^{ab} g_gap (V_{i+a,k+b} - V_ik) - I_ion(V_ik, ...),   (1)

where V_ik is the transmembrane voltage at the (i, k) computational node, C_m is the membrane capacitance, g_gap is the conductance of the gap junctions connecting two neighboring myocytes, I_ion is the sum of all ionic currents, and eta_ik^{ab} is the connectivity tensor whose elements are either one or zero depending on whether
neighboring cells are coupled or not. The conductance of the gap junctions g_gap was taken to be 103.6 nS, which results in a maximum planar-wave propagation velocity, in the absence of fibrotic tissue, of 72 cm/s at a stimulation frequency of 1 Hz. g_gap was not modified in the fibrotic areas. A similar system of differential equations was used for the 3D computations, where instead of the 2D connectivity tensor eta_ik^{ab} we used a 3D weights tensor w_ijk^{abc} whose elements lie between 0 and 1, depending both on the coupling of neighboring cells and on the anisotropy due to fiber orientation. Each node in the 3D model represented a cell of size 250 x 250 x 250 um3. 20 s of simulation in 3D took about 3 hours. Fibrosis was modeled by the introduction of electrically uncoupled unexcitable nodes 29. The local percentage of fibrosis determined the probability for a node of the computational grid to become an unexcitable obstacle, meaning that for high percentages of fibrosis there is a high chance for a node to be unexcitable. As previous research has demonstrated that LGE-MRI enhancement correlates with regions of fibrosis identified by histological examination 30, we linearly interpolated the SI into the percentage of fibrosis for the 3D human models. In addition, the effect of ionic remodeling in fibrotic regions was taken into account for several results of the paper 31, 32. To describe ionic remodeling we decreased the conductances of INa, IKr, and IKs depending on the local fibrosis level as:

G_Na = (1 - 1.55 f/100%) G_Na^0,   (2)
G_Kr = (1 - 1.75 f/100%) G_Kr^0,   (3)
G_Ks = (1 - 2 f/100%) G_Ks^0,   (4)

where G_X is the peak conductance of the ionic current I_X, G_X^0 is the peak conductance of the current in the absence of remodeling, and f is the local fibrosis level in percent. These formulas yield a reduction of 62% for INa, 70% for IKr, and 80% for IKs if the local fibrosis f is 40%. These values of reduction are therefore in agreement with the values published in 33, 34. The normal conduction velocity at a cycle length (CL) of 1000 ms is 72 cm/s. However, as the compact scar is surrounded by fibrotic tissue, the propagation velocity in that region gradually decreases with increasing fibrosis percentage; for example, for fibrosis of 30% the velocity decreases to 48 cm/s (CL 1000 ms). We refer to Figure 1 in Ten Tusscher et al. 25 for the planar conduction velocity as a function of the fibrosis percentage in 2D and 3D tissue. The geometry and extent of fibrosis in the human left ventricle were determined using the LGE MRI data. The normalized signal intensity was used to determine the density of local fibrosis. The fiber orientation is presented in detail in the supplementary S1 Appendix. The model for cardiac tissue was solved by the forward Euler integration scheme with a time step of 0.02 ms. The numerical solver was implemented using the CUDA toolkit for performing the computations on graphics processing units. Simulations were performed on a GeForce GTX Titan Black graphics card using single-precision calculations. The eikonal equations for anisotropy generation were solved by Sethian's fast marching method 35. The eikonal solver and the 3D model generation pipeline were implemented in the OCaml programming language. Rotors were initiated by an S1S2 protocol, as shown in the supplementary S1 Fig.
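The monodomain update of Eq (1) with the forward Euler scheme described above can be sketched compactly. This is a minimal illustration, not the authors' CUDA solver: the 4-neighborhood, the membrane capacitance value, and the `i_ion` placeholder are assumptions; only g_gap = 103.6 nS and dt = 0.02 ms come from the text.

```python
import numpy as np

CM = 185.0      # pF, illustrative membrane capacitance (not given in the text)
G_GAP = 103.6   # nS, gap-junction conductance from the Methods
DT = 0.02      # ms, forward Euler time step from the Methods

def euler_step(V, eta, i_ion):
    """Advance the voltage grid V (ny x nx) by one forward Euler step of Eq (1).

    eta[d] is a 0/1 mask giving the connectivity to the d-th neighbor,
    playing the role of the connectivity tensor eta_ik^{ab}."""
    coupling = np.zeros_like(V)
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-neighborhood for simplicity
    for d, (di, dk) in enumerate(shifts):
        V_nb = np.roll(np.roll(V, di, axis=0), dk, axis=1)
        coupling += eta[d] * G_GAP * (V_nb - V)
    return V + DT * (coupling - i_ion(V)) / CM
```

With zero ionic current the diffusive coupling conserves the mean voltage and pulls coupled nodes toward each other, which is a quick sanity check of the stencil.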
Similarly, in the whole-heart simulations, spiral waves (or scroll waves) were created by an S1S2 protocol. For the compact scar geometry used in our simulations the rotation of the spiral wave was stationary; the period of rotation of the anchored rotor was always more than 280 ms, while the period of the free spiral wave was close to 220 ms. Therefore, we determined anchoring as follows: if the period of the excitation pattern was larger than 280 ms over a measuring time interval of 320 ms, we classified the excitation as anchored. When the type of anchoring pattern was important (single or multi-armed spiral wave) we determined it visually. If at all points of the tissue the voltage was below -20 mV, the pattern was classified as terminated. We applied the classification algorithm at t = 40 s in the simulation. In the whole heart, pseudo-ECGs were calculated by assuming an infinite volume conductor and computing the dipole source density of the membrane potential Vm at all voxel points of the ventricular myocardium, using the following equation 36:

ECG(t) = INTEGRAL (r_vec . D(r_vec) grad V(t)) / |r_vec|^3 d^3r,   (5)

where D is the diffusion tensor, V is the voltage, and r_vec is the vector from each point of the tissue to the recording electrode. The recording electrode was placed 10 cm from the center of the ventricles in the transverse plane. Twelve-lead ECGs of all induced ventricular tachycardias (VT) of patients with prior myocardial infarction who underwent radiofrequency catheter ablation (RFCA) for monomorphic VT at the LUMC were reviewed. All patients provided informed consent and were treated according to the clinical protocol. Programmed electrical stimulation (PES) is routinely performed before RFCA to determine inducibility of the clinical/presumed clinical VT. All patients underwent PES and ablation according to the standard clinical protocol; therefore no ethical
approval was required ., Ablation typically targets the substrate for scar-related reentry VT ., After ablation PES is repeated to test for re-inducibility and evaluate morphology and cycle length of remaining VTs ., The significance of non-clinical , fast VTs is unclear and these VTs are often not targeted by RFCA ., PES consisted of three drive cycle lengths ( 600 , 500 and 400 ms ) , one to three ventricular extrastimuli ( \u2265200 ms ) and burst pacing ( CL \u2265200 ms ) from at least two right ventricular ( RV ) sites and one LV site ., A positive endpoint for stimulation is the induction of any sustained monomorphic VT lasting 30 s or requiring termination ., ECG and intracardiac electrograms ( EG ) during PES were displayed and recorded simultaneously on a 48-channel acquisition system ( Prucka CardioLab EP system , GE Healthcare , USA ) for off-line analysis ., Fibrotic scars can not only anchor the rotors but can dynamically anchor them from a large distance ., In the first experiments we studied spiral wave dynamics with and without a fibrotic scar in a generic study ., The diameter of the fibrotic region was 6 . 4 cm , based on the similar size of the scars from patients with documented and induced VT ( see the Methods section , Magnetic Resonance Imaging ) ., The percentage of fibrosis changed linearly from 50% at the center of the scar to 0% at the scar boundary ., We initiated a rotor at a distance of 15 . 
5 cm from the scar ( Fig 1 , panel A ) which had a period of 222 ms and studied its dynamics ., First , after several seconds the activation pattern became less regular and a few secondary wave breaks appeared at the fibrotic region ( Fig 1 , panel B ) ., These irregularities started to propagate towards the tip of the initial rotor ( Fig 1 , panel C-D ) creating a complex activation picture in between the scar and the initial rotor ., Next , one of the secondary sources reached the tip of the original rotor ( Fig 1 , panel E ) ., Then , this secondary source merged with the initial rotor ( Fig 1 , panel F ) , which resulted in a deceleration of the activation pattern and promoted a chain reaction of annihilation of all the secondary wavebreaks in the vicinity of the original rotor ., At this moment , a secondary source located more closely to the scar dominated the simulation ( Fig 1 , panel G ) ., The whole process now started again ( Fig 1 , panels H-K ) , until finally only one source became the primary source anchored to the scar ( Fig 1L ) with a rotation period of 307 ms . 
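The anchored rotor in this example rotates at 307 ms, above the 280 ms threshold the Methods use to classify a pattern as anchored. A small sketch of that classification rule follows; the function name, the probe-site activation-time input, and the voltage-map input are hypothetical conveniences, while the thresholds (280 ms period, 320 ms window, all voltages below -20 mV means terminated) are taken from the text.

```python
import numpy as np

ANCHOR_PERIOD_MS = 280.0   # anchored rotors rotate with period > 280 ms (Methods)
WINDOW_MS = 320.0          # measuring time interval from the Methods
V_QUIESCENT_MV = -20.0     # all voltages below this => activity terminated

def classify(activation_times_ms, v_final):
    """Hypothetical classifier applied at t = 40 s.

    activation_times_ms: successive activation times of a probe site within
    the 320 ms measuring window; v_final: voltage map at classification time."""
    if np.all(v_final < V_QUIESCENT_MV):
        return "terminated"
    periods = np.diff(activation_times_ms)
    if len(periods) and periods.min() > ANCHOR_PERIOD_MS:
        return "anchored"
    return "not anchored"
```

For instance, a 307 ms interval between activations classifies as anchored, while a free rotor at 222 ms does not.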
For clarity , a movie of this process is provided as supplementary S1 Movie ., Note that this process occurs only if a scar with surrounding fibrotic zone was present ., In the simulation entitled as \u2018No scar\u2019 in Fig 1 , we show a control experiment when the same initial conditions were used in tissue without a scar ., In the panel entitled as \u2018Necrotic scar\u2019 in Fig 1 , a simulation with only a compact region without the surrounding fibrotic tissue is shown ., In both cases the rotor was stable and located at its initial position during the whole period of simulation ., The important difference here from the processes shown in Fig 1 ( Fibrotic scar ) is that in cases of \u2018No scar\u2019 and \u2018Necrotic scar\u2019 no new wavebreaks occur and thus we do not have a complex dynamical process of re-arrangement of the excitation patterns ., We refer to this complex dynamical process leading to anchoring of a distant rotor as dynamical anchoring ., Although this process contains a phase of complex behaviour , overall it is extremely robust and reproducible in a very wide range of conditions ., In the second series of simulations , the initial rotor was placed at different distances from the scar border , ranging from 1 . 8 to 14 . 3 cm , to define the possible outcomes , see Fig 2 . 
Here , in addition to a single anchored rotor shown in Fig 1H we could also obtain other final outcomes of dynamical anchoring: we obtained rotors rotating in the opposite direction ( Fig 2A , top ) , double armed anchored rotors which had 2 wavefronts rotating around the fibrotic regions ( Fig 2A , middle ) or annihilation of the rotors ( Fig 2A , bottom , which show shows no wave around the scar ) , which normally occurred as a result of annihilation of a figure-eight-reentrant pattern ., To summarize , we therefore had the following possible outcomes:, Termination of activity A rotor rotating either clockwise or counter-clockwise A two- or three-armed rotor rotating either clockwise or counter-clockwise, Fig 2 , panel B presents the relative chance of the mentioned activation patterns to occur depending on the distance between the rotor and the border of the scar ., We see , indeed , that for smaller initial distances the resulting activation pattern is always a single rotor rotating in the same direction ., With increasing distance , other anchoring patterns are possible ., If the distance was larger than about 9 cm , there is at least a 50% chance to obtain either a multi-armed rotor or termination of activity ., Also note that such dynamical anchoring occurred from huge distances: we studied rotors located up to 14 cm from the scar ., However , we observed that even for very large distances such as 25 cm or more such dynamical anchoring ( or termination of the activation pattern ) was always possible , provided enough time was given ., We measured the time required for the anchoring of rotors as a function of the distance from the scar ., For each distance , we performed about 60 computations using different seed values of the random number generator , both with and without taking ionic remodeling into account ., The results of these simulations are shown in Fig 3 . 
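The ionic-remodeling scaling of Eqs (2)-(4), used in the simulations that include remodeling, can be checked against the stated reductions (62% for INa, 70% for IKr, 80% for IKs at f = 40%). A minimal sketch, with the baseline conductances normalized to 1 as an illustrative assumption:

```python
def remodeled_conductances(f, g_na0=1.0, g_kr0=1.0, g_ks0=1.0):
    """Scale peak conductances by the local fibrosis level f (in percent),
    following Eqs (2)-(4) of the Methods."""
    g_na = (1.0 - 1.55 * f / 100.0) * g_na0  # Eq (2)
    g_kr = (1.0 - 1.75 * f / 100.0) * g_kr0  # Eq (3)
    g_ks = (1.0 - 2.00 * f / 100.0) * g_ks0  # Eq (4)
    return g_na, g_kr, g_ks
```

At f = 40% this returns 0.38, 0.30, and 0.20 of the baseline values, matching the reductions quoted in the text.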
We see that the time needed for dynamical anchoring depends linearly on the distance between the border of the scar and the initial rotor ., The blue and yellow lines correspond to the scar model with and without ionic remodeling , respectively ( ionic remodeling was modelled by decreasing the conductance of INa , IKr , and IKs as explained in the Methods Section ) ., We interpret these results as follows; The anchoring time is mainly determined by the propagation of the chaotic regime towards the core of the original rotor and this process has a clear linear dependency ., For distant rotors , propagation of this chaotic regime mainly occurs outside the region of ionic remodelling , and thus both curves in Fig 3 have the same slope ., However , in the presence of ionic remodelling , the APD in the scar region is prolonged ., This creates a heterogeneity and as a consequence the initial breaks in the scar region are formed about 3 . 5 s earlier in the scar model with remodeling compared with the scar model without remodeling ., To identify some properties of the substrate necessary for the dynamical anchoring we varied the size and the level of fibrosis within the scar and studied if the dynamical anchoring was present ., Due to the stochastic nature of the fibrosis layout we performed about 300 computations with different textures of the fibrosis for each given combination of the scar size and the fibrosis level ., The results of this experiment are shown in Fig 4 . Dynamical anchoring does not occur when the scar diameter was below 2 . 6 cm , see Fig 4 . 
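The stochastic fibrosis textures behind these sweeps follow the generic 2D setup described above: a circular fibrotic region whose local fibrosis percentage falls linearly from its peak at the center to 0% at the border, with each grid node becoming an unexcitable obstacle with probability equal to the local fibrosis fraction. The sketch below is an assumed reconstruction of that sampling step (function and variable names are ours); the 250 um node spacing and the 50%-to-0% profile of the first experiment are from the text.

```python
import numpy as np

H_CM = 0.025  # node spacing in cm (250 um, from the Methods)

def fibrosis_texture(n, center, radius_cm, peak=0.5, seed=0):
    """Sample one random fibrosis texture on an n x n grid.

    Node (i, k) becomes an unexcitable obstacle with probability falling
    linearly from `peak` at the scar center to 0 at the scar border."""
    rng = np.random.default_rng(seed)  # different seeds give different textures
    i, k = np.indices((n, n))
    r = np.hypot(i - center[0], k - center[1]) * H_CM   # distance to center, cm
    p = np.clip(peak * (1.0 - r / radius_cm), 0.0, None)  # local fibrosis fraction
    return rng.random((n, n)) < p, p
```

Re-running with different seeds reproduces the "different textures of the fibrosis" used for the roughly 300 computations per parameter combination.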
For scars of such small size we observed the absence of both the breakup and dynamical anchoring ., We explain this by the fact that if the initial separation of wavebreaks formed at the scar is small , the two secondary sources merge immediately , repairing the wavefront shape and preventing formation of secondary sources 37 ., Also , we see that this effect requires an intermediate level of fibrosis density ., For small fibrosis levels no secondary breaks are formed ( close to the boundary of the fibrotic tissue ) ., Also , no breaks could be formed if the fibrosis level is larger than 41% in our 2D model ( i . e . closer to the core ) , as the tissue behaves like an inexcitable scar ., For a fibrosis > 41% the scar effectively becomes a large obstacle that is incapable of breaking the waves of the original rotor 37 ., Close to the threshold of 41% we have also observed another interesting pattern when the breaks are formed inside the core of the scar ( inside the > 41% region ) only and cannot exit to the surrounding tissue , see the supplementary S1 Movie ., Finally , note that Fig 4 illustrates only a few factors important for the dynamical anchoring in a simple setup in an isotropic model of cardiac tissue ., The particular values of the fibrosis level and the size of the scar can also depend on anisotropy , the texture of the fibrosis and its possible heterogeneous distribution ., To verify that the dynamical anchoring takes place in a more realistic geometry , we developed and investigated this effect in a patient-specific model of the human left ventricle , see the Method section for details ., The scar in this dataset has a complex geometry with several compact regions with size around 5-7 cm in which the percentage of fibrosis changes gradually from 0% to 41% at the core of the scar based on the imaging data , see Methods section ., The remodeling of ionic channels at the whole scar region was also included to the model ( including borderzone as 
described in the Fibrosis Model subsection of the Methods). We studied the phenomenon of dynamical anchoring for 16 different locations of rotor cores, randomly distributed in a slice of the heart at about 4 cm from the apex (see Fig 5). Cardiac anisotropy was generated by a rule-based approach described in detail in the Methods section (Model of the Human Left Ventricle). For all 16 initial locations shown in Fig 5, the rotor dynamically anchored to the fibrotic tissue, both with and without ionic remodeling. After anchoring, the rotor annihilated in 4 cases. As in 2D, the attraction effect was augmented by the electrophysiological remodelling. A representative example of our 3D simulations is shown in Fig 5. We followed the same protocol as for the 2D simulations. The top two rows show the modified anterior and posterior views for the case in which the scar was present. In column A, we see the original location of the spiral core (5 cm from the scar), indicated with the black arrow in the anterior view. In column B, breaks are formed due to the scar tissue and a secondary source starts to appear. After 3.7 s, the spiral is anchored around the scar, indicated with the black arrow in the posterior view, and persistently rotates around it. In the bottom row, we show the same simulation with the scar not taken into account; in this case, the spiral does not change its original location (only a slight movement, see the black arrows). To evaluate whether this effect can potentially be registered in clinical practice, we computed the ECG for our 3D simulations. The ECG that corresponds to the example in Fig 5 is shown in Fig 6.
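The pseudo-ECG of Eq (5) reduces, on a voxel grid, to a sum of dipole contributions D grad(V) . r / |r|^3 over the myocardium. A minimal sketch under simplifying assumptions we introduce (scalar diffusion coefficient instead of the full tensor, illustrative function name):

```python
import numpy as np

def pseudo_ecg(V, electrode, h=0.025, D=1.0):
    """Discrete form of Eq (5) with a scalar diffusion coefficient D.

    V: 3D voltage map (one time instant); electrode: position of the recording
    electrode in cm; h: voxel size in cm; r points from tissue to electrode."""
    grad = np.stack(np.gradient(V, h), axis=-1)        # grad V, shape (nx, ny, nz, 3)
    pos = np.stack(np.indices(V.shape), axis=-1) * h   # voxel positions, cm
    r = electrode - pos                                # tissue-to-electrode vectors
    dist = np.linalg.norm(r, axis=-1)
    dist[dist == 0] = np.inf                           # guard the electrode voxel
    dipole = D * np.sum(grad * r, axis=-1) / dist**3
    return float(np.sum(dipole) * h**3)                # volume integral
```

A uniformly polarized tissue block has zero voltage gradient everywhere and therefore produces a zero pseudo-ECG, which is a quick consistency check of the formula.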
During the first three seconds , the ECG shows QRS complexes varying in amplitude and shape and then more uniform beat-to-beat QRS morphology with a larger amplitude ., This change in morphology is associated with anchoring of the rotor which occurs around three seconds after the start of the simulation ., The initial irregularity is due to the presence of the secondary sources that have a slightly higher period than the original rotor ., After the rotor is anchored , the pattern becomes relatively stable which corresponds to a regular saw-tooth ECG morphology ., Additional ECGs for the cases of termination of the arrhythmia and anchoring are shown in supplementary S2 Fig . For the anchoring dynamics we see similar changes in the ECG morphology as in Fig 6 . The dynamical anchoring is accompanied by an increase of the cycle length ( 247 \u00b1 16 ms versus 295 \u00b1 30 ms ) ., The reason for this effect is that the rotation of the rotor around an obstacle \u2013anatomical reentry\u2013 is usually slower than the rotation of the rotor around its own tip\u2014functional reentry , which is typically at the limit of cycle length permitted by the ERP ., In the previous section , we showed that the described results on dynamical anchoring in an anatomical model of the LV of patients with post infarct scars correspond to the observations on ECGs during initiation of a ventricular arrhythmia ., After initiation , in 18 out of 30 patients ( 60% ) a time dependent change of QRS morphology was observed ., Precordial ECG leads V2 , V3 and V4 from two patients are depicted in Fig 7 . 
For both patients, the QRS morphology following the extra stimuli gradually changed, but the degree of change differed. In patient A, the morphological change is small and both parts of the ECG may be interpreted as a transition from one monomorphic ventricular tachycardia (MVT) morphology to another. However, for patient B the transition from polymorphic ventricular tachycardia (PVT) to MVT is more apparent. In the other 16 cases we observed variations between the two cases presented in Fig 7. Supplementary S3 Fig shows examples of ECGs of 4 other patients. Here, in patients 1 and 2, we see substantial variations in the QRS complexes after arrhythmia initiation and subsequently a transformation to MVT. The recording in patient 3 is less polymorphic, and in patient 4 we observe an apparent shift of the ECG from one morphology to another. This may occur, for example, if, due to underlying tissue heterogeneity, additional sources of excitation are formed by the initial source. Overall, a morphology with a clear change from PVT to MVT was observed in 5/18 (28%) of the cases. These different degrees of variation in QRS morphology may have many causes, such as the proximity of the created arrhythmia source to the anchoring region, the underlying degree of heterogeneity and fibrosis at the site of rotor initiation, the complex shape of the scar, etc. Although this finding is not a proof, it supports the idea that the anchoring phenomenon may occur in clinical settings and serve as a possible mechanism of fast VT induced by programmed stimulation. In this study, we investigated the dynamics of arrhythmia sources (rotors) in the presence of fibrotic regions using mathematical modeling. We showed that fibrotic scars not only anchor rotors but also induce secondary sources, and the dynamical competition of these sources normally results in their annihilation. As a result, if one just compares the initial excitation pattern
in Fig 1A and final excitation pattern in Fig 1L , it may appear as if a distant spiral wave was attracted and anchored to the scar ., However , this is not the case and the anchored spiral here is a result of normal anchoring and competition of secondary sources which we call dynamical anchoring ., This process is different from the usual drift or meandering of rotors where the rotor gradually changes its spatial position ., In dynamical anchoring , the break formation happens in the fibrotic scar region , then it spreads to the original rotor and merges with this rotor tip and reorganizes the excitation pattern ., This process repeats itself until a rotor is anchored around the fibrotic scar region ., Dynamical anchoring may explain the organization from fast polymorphic to monomorphic VT , also accompanied by prolongation in CL , observed in some patients during re-induction after radio frequency catheter ablation of post-infarct scar related VT ., In our simulations the dynamics of rotors in 2D tissue were stable and for given parameter values they do not drift or meander ., This type of dynamics was frequently observed in cardiac monolayers 38 , 39 which can be considered as a simplified experimental model for cardiac tissue ., We expect that more complex rotor dynamics would not affect our main 2D results , as drift or meandering will potentate the disappearance of the initial rotor and thus promote anchoring of the secondary wavebreaks ., In our 3D simulations in an anatomical model of the heart , the dynamics of rotors is not stationary and shows the ECG of a polymorphic VT ( Fig 6 ) ., The dynamical anchoring combines several processes: generation of new breaks at the scar , spread of breaks toward the original rotor , rotor disappearance and anchoring or one of the wavebreaks at the scar ., The mechanisms of the formation of new wavebreaks at the scar has been studied in several papers 15 , 37 , 40 and can occur due to ionic heterogeneity in the scar 
region or due to electrotonic effects 40 ., However the process of spread of breaks toward the original rotors is a new type of dynamics and the mechanism of this phenomenon remains to be studied ., To some extent it is similar to the global alternans instability reported in Vandersickel et al . 41 ., Indeed in Vandersickel et al . 41 it was shown that an area of 1:2 propagation block can extend itself towards the original spiral wave and is related to the restitution properties of cardiac tissue ., Although in our case we do not have a clear 1:2 block , wave propagation in the presence of breaks is disturbed resulting in spatially heterogeneous change of diastolic interval which via the restitution effects can result in breakup extension ., This phenomenon needs to be further studied as it may provide new ways for controlling rotor anchoring processes and therefore can affect the dynamics of a cardiac arrhythmia ., In this paper , we used the standard method of representing fibrosis by placement of electrically uncoupled unexcitable nodes with no-flux boundary conditions ., Although such representation is a simplification based on the absence of detailed 3D data , it does reproduce the main physiological effects observed in fibrotic tissue , such as formation of wavebreaks , fractionated electrograms , etc 22 ., The dynamical anchoring reported in this paper occurs as a result of the restructuring of the activation pattern and relies only on these basic properties of the fibrotic scar , i . e . the ability to generate wavebreaks and the ability to anchor rotors , which is reproduced by this representation ., In addition , for each data point , we performed simulations with at least 60 different textures ., Therefore , we expect that the effect observed in our paper is general and should exist for any possible representation of the fibrosis ., The specific conditions , e . g . 
the size and degree of fibrosis necessary for dynamical anchoring may depend on the detailed fibrosis structure and it would be useful to perform simulations with detailed experimentally based 3D structures of the fibrotic scars , when they become available ., Similar processes can not only occur at fibrotic scars , but also at ionic heterogeneities ., In Defauw et al . 42 , it has been shown that rotors can be attracted by ionic heterogeneities of realistic size and shape , similar to those measured in the ventricles of the human heart 43 ., These ionic heterogeneities had a prolonged APD and also caused wavebreaks , creating a similar dynamical process as described in Fig 1 ., In this study however , we demonstrated that structural heterogeneity is sufficient to trigger this type of dynamical anchoring ., It is important to note that in this study fibrosis was modeled as regions with many small inexcitable obstacles ., However , the outcome can depend on how the cellular electrophysiology and regions of fibrosis have been represented ., In modeling studies , regions of fibrosis can also be represented ","headings":"Introduction, Materials and methods, Results, Discussion","abstract":"Rotors are functional reentry sources identified in clinically relevant cardiac arrhythmias , such as ventricular and atrial fibrillation ., Ablation targeting rotor sites has resulted in arrhythmia termination ., Recent clinical , experimental and modelling studies demonstrate that rotors are often anchored around fibrotic scars or regions with increased fibrosis ., However , the mechanisms leading to abundance of rotors at these locations are not clear ., The current study explores the hypothesis whether fibrotic scars just serve as anchoring sites for the rotors or whether there are other active processes which drive the rotors to these fibrotic regions ., Rotors were induced at different distances from fibrotic scars of various sizes and degree of fibrosis ., Simulations were 
performed in a 2D model of human ventricular tissue and in a patient-specific model of the left ventricle of a patient with remote myocardial infarction ., In both the 2D and the patient-specific model we found that without fibrotic scars , the rotors were stable at the site of their initiation ., However , in the presence of a scar , rotors were eventually dynamically anchored from large distances by the fibrotic scar via a process of dynamical reorganization of the excitation pattern ., This process coalesces with a change from polymorphic to monomorphic ventricular tachycardia .","summary":"Rotors are waves of cardiac excitation like a tornado causing cardiac arrhythmia ., Recent research shows that they are found in ventricular and atrial fibrillation ., Burning ( via ablation ) the site of a rotor can result in the termination of the arrhythmia ., Recent studies showed that rotors are often anchored to regions surrounding scar tissue , where part of the tissue still survived called fibrotic tissue ., However , it is unclear why these rotors anchor to these locations ., Therefore , in this work , we investigated why rotors are so abundant in fibrotic tissue with the help of computer simulations ., We performed simulations in a 2D model of human ventricular tissue and in a patient-specific model of a patient with an infarction ., We found that even when rotors are initially at large distances from the fibrotic region , they are attracted by this region , to finally end up at the fibrotic tissue ., We called this process dynamical anchoring and explained how the process works .","keywords":"dermatology, medicine and health sciences, diagnostic radiology, engineering and technology, cardiovascular anatomy, cardiac ventricles, fibrosis, magnetic resonance imaging, developmental biology, electrocardiography, bioassays and physiological analysis, cardiology, research and analysis methods, scars, arrhythmia, imaging techniques, atrial fibrillation, 
electrophysiological techniques, rotors, mechanical engineering, radiology and imaging, diagnostic medicine, cardiac electrophysiology, anatomy, biology and life sciences, heart","toc":null} +{"Unnamed: 0":2233,"id":"journal.pcbi.1002283","year":2011,"title":"Chemotaxis when Bacteria Remember: Drift versus Diffusion","sections":"The bacterium E . coli moves by switching between two types of motions , termed \u2018run\u2019 and \u2018tumble\u2019 1 ., Each results from a distinct movement of the flagella ., During a run , flagella motors rotate counter-clockwise ( when looking at the bacteria from the back ) , inducing an almost constant forward velocity of about , along a near-straight line ., In an environment with uniform nutrient concentration , run durations are distributed exponentially with a mean value of about 2 ., When motors turn clockwise , the bacterium undergoes a tumble , during which , to a good approximation , it does not translate but instead changes its direction randomly ., In a uniform nutrient-concentration profile , the tumble duration is also distributed exponentially but with a much shorter mean value of about 3 ., When the nutrient ( or , more generally , chemoattractant ) concentration varies in space , bacteria tend to accumulate in regions of high concentration ( or , equivalently , the bacteria can also be repelled by chemorepellants and tend to accumulate in low chemical concentration ) 4 ., This is achieved through a modulation of the run durations ., The biochemical pathway that controls flagella dynamics is well understood 1 , 5\u20137 and the stochastic \u2018algorithm\u2019 which governs the behavior of a single motor is experimentally measured ., The latter is routinely used as a model for the motion of a bacteria with many motors 1 , 8\u201311 ., This algorithm represents the motion of the bacterium as a non-Markovian random walker whose stochastic run durations are modulated via a memory kernel , shown in Fig . 
1 ., Loosely speaking , the kernel compares the nutrient concentration experienced in the recent past with that experienced in the more distant past ., If the difference is positive , the run duration is extended; if it is negative , the run duration is shortened ., In a complex medium bacterial navigation involves further complications; for example , interactions among the bacteria , and degradations or other dynamical variations in the chemical environment ., These often give rise to interesting collective behavior such as pattern formation 12 , 13 ., However , in an attempt to understand collective behavior , it is imperative to first have at hand a clear picture of the behavior of a single bacterium in an inhomogeneous chemical environment ., We are concerned with this narrower question in the present work ., Recent theoretical studies of single-bacterium behavior have shown that a simple connection between the stochastic algorithm of motion and the average chemotactic response is far from obvious 8\u201311 ., In particular , it appeared that favorable chemotactic drift could not be reconciled with favorable accumulation at long times , and chemotaxis was viewed as resulting from a compromise between the two 11 ., The optimal nature of this compromise in bacterial chemotaxis was examined in Ref ., 10 ., In various approximations , while the negative part of the response kernel was key to favorable accumulation in the steady state , it suppressed the drift velocity ., Conversely , the positive part of the response kernel enhanced the drift velocity but reduced the magnitude of the chemotactic response in the steady state ., Here , we carry out a detailed study of the chemotactic behavior of a single bacterium in one dimension ., We find that , for an \u2018adaptive\u2019 response kernel ( i . e . 
, when the positive and negative parts of the response kernel have equal weight such that the total area under the curve vanishes ) , there is no incompatibility between a strong steady-state chemotaxis and a large drift velocity ., A strong steady-state chemotaxis occurs when the positive peak of the response kernel occurs at a time much smaller than and the negative peak at a time much larger than , in line with experimental observation ., Moreover , we obtain that the drift velocity is also large in this case ., For a general \u2018non-adaptive\u2019 response kernel ( i . e . , when the area under the response kernel curve is non-vanishing ) , however , we find that a large drift velocity indeed opposes chemotaxis ., Our calculations show that , in this case , a position-dependent diffusivity is responsible for chemotactic accumulation ., In order to explain our numerical results , we propose a simple coarse-grained model which describes the bacterium as a biased random walker with a drift velocity and diffusivity , both of which are , in general , position-dependent ., This simple model yields good agreement with results of detailed simulations ., We emphasize that our model is distinct from existing coarse-grained descriptions of E . 
coli chemotaxis 13\u201316 ., In these , coarse-graining was performed over left- and right-moving bacteria separately , after which the two resulting coarse-grained quantities were then added to obtain an equation for the total coarse-grained density ., We point out why such approaches can fail and discuss the differences between earlier models and the present coarse-grained model ., Following earlier studies of chemotaxis 9 , 17 , we model the navigational behavior of a bacterium by a stochastic law of motion with Poissonian run durations ., A switch from run to tumble occurs during the small time interval between and with a probability ( 1 ) Here , and is a functional of the chemical concentration , , experienced by the bacterium at times ., In shallow nutrient gradients , the functional can be written as ( 2 ) The response kernel , , encodes the action of the biochemical machinery that processes input signals from the environment ., Measurements of the change in the rotational bias of a flagellar motor in wild-type bacteria , in response to instantaneous chemoattractant pulses were reported in Refs ., 17 , 18; experiments were carried out with a tethering assay ., The response kernel obtained from these measurements has a bimodal shape , with a positive peak around and a negative peak around ( see Fig . 1 ) ., The negative lobe is shallower than the positive one and extends up to , beyond which it vanishes ., The total area under the response curve is close to zero ., As in other studies of E . 
coli chemotaxis , we take this response kernel to describe the modulation of run duration of swimming bacteria 8\u201311 ., Recent experiments suggest that tumble durations are not modulated by the chemical environment and that as long as tumbles last long enough to allow for the reorientation of the cell , bacteria can perform chemotaxis successfully 19 , 20 ., The model defined by Eqs ., 1 and 2 is linear ., Early experiments pointed to a non-linear , in effect a threshold-linear , behavior of a bacterium in response to chemotactic inputs 17 , 18 ., In these studies , a bacterium modulated its motion in response to a positive chemoattractant gradient , but not to a negative one ., In the language of present model , such a threshold-linear response entails replacing the functional defined in Eq ., 2 by zero whenever the integral is negative ., More recent experiments suggest a different picture , in which a non-linear response is expected only for a strong input signal whereas the response to weak chemoattractant gradient is well described by a linear relation 21 ., Here , we present an analysis of the linear model ., For the sake of completeness , in Text S1 , we present a discussion of models which include tumble modulations and a non-linear response kernel ., Although recent experiments have ruled out the existence of both these effects in E . 
coli chemotaxis , in general such effects can be relevant to other systems with similar forms of the response function ., The shape of the response function hints to a simple mechanism for the bacterium to reach regions with high nutrient concentration ., The bilobe kernel measures a temporal gradient of the nutrient concentration ., According to Eq ., 1 , if the gradient is positive , runs are extended; if it is negative , runs are unmodulated ., However , recent literature 8 , 9 , 11 has pointed out that the connection between this simple picture and a detailed quantitative analysis is tenuous ., For example , de Gennes used Eqs ., 1 to calculate the chemotactic drift velocity of bacteria 8 ., He found that a singular kernel , , where is a Dirac function and a positive constant , lead to a mean velocity in the direction of increasing nutrient concentration even when bacteria are memoryless ( ) ., Moreover , any addition of a negative contribution to the response kernel , as seen in experiments ( see Fig . 
1 ) , lowered the drift velocity ., Other studies considered the steady-state density profile of bacteria in a container with closed walls , both in an approximation in which correlations between run durations and probability density were ignored 11 and in an approximation in which the memory of the bacterium was reset at run-to-tumble switches 9 ., Both these studies found that , in the steady state , a negative contribution to the response function was mandatory for bacteria to accumulate in regions of high nutrient concentration ., These results seem to imply that the joint requirement of favorable transient drift and steady-state accumulation is problematic ., The paradox was further complicated by the observation 9 that the steady-state single-bacterium probability density was sensitive to the precise shape of the kernel: when the negative part of the kernel was located far beyond it had little influence on the steady-state distribution 11 ., In fact , for kernels similar to the experimental one , model bacteria accumulated in regions with low nutrient concentration in the steady state 9 ., In order to resolve these paradoxes and to better understand the mechanism that leads to favorable accumulation of bacteria , we perform careful numerical studies of bacterial motion in one dimension ., In conformity with experimental observations 17 , 18 , we do not make any assumption of memory reset at run-to-tumble switches ., We model a bacterium as a one-dimensional non-Markovian random walker ., The walker can move either to the left or to the right with a fixed speed , , or it can tumble at a given position before initiating a new run ., In the main paper , we present results only for the case of instantaneous tumbling with , while results for non-vanishing are discussed in Text S1 ., There , we verify that for an adaptive response kernel does not have any effect on the steady-state density profile ., For a non-adaptive response kernel , the correction in the 
steady-state slope due to finite is small and proportional to ., The run durations are Poissonian and the tumble probability is given by Eq ., 1 ., The probability to change the run direction after a tumble is assumed to have a fixed value , , which we treat as a parameter ., The specific choice of the value of does not affect our broad conclusions ., We find that , as long as , only certain detailed quantitative aspects of our numerical results depend on ., ( See Text S1 for details on this point . ), We assume that bacteria are in a box of size with reflecting walls and that they do not interact among each other ., We focus on the steady-state behavior of a population ., Reflecting boundary conditions are a simplification of the actual behavior 22 , 23; as long as the total \u2018probability current\u2019 ( see discussion below ) in the steady state vanishes , our results remain valid even if the walls are not reflecting ., As a way to probe chemotactic accumulation , we consider a linear concentration profile of nutrient: ., We work in a weak gradient limit , i . e . 
, the value of is chosen to be sufficiently small to allow for a linear response ., Throughout , we use in our numerics ., From the linearity of the problem , results for a different attractant gradient , , can be obtained from our results through a scaling factor ., In the linear regime , we obtain a spatially linear steady-state distribution of individual bacterium positions , or , equivalently , a linear density profile of a bacterial population ., Its slope , which we denote by , is a measure of the strength of chemotaxis ., A large slope indicates strong bacterial preference for regions with higher nutrient concentration ., Conversely , a vanishing slope implies that bacteria are insensitive to the gradient of nutrient concentration and are equally likely to be anywhere along the line ., We would like to understand the way in which the slope depends on the different time scales present in the system ., In order to gain insight into our numerical results , we developed a simple coarse-grained model of chemotaxis ., For the sake of simplicity , we first present the model for a non-adaptive , singular response kernel , , and , subsequently , we generalize the model to adaptive response kernels by making use of linear superposition ., The memory trace embodied by the response kernel induces temporal correlations in the trajectory of the bacterium ., However , if we consider the coarse-grained motion of the bacterium over a spatial scale that exceeds the typical run stretch and a temporal scale that exceeds the typical run duration , then we can assume that it behaves as a Markovian random walker with drift velocity and diffusivity ., Since the steady-state probability distribution , , is flat for , for small we can write ( 4 ) ( 5 ) ( 6 ) Here , and ., Since we are neglecting all higher order corrections in , our analysis is valid only when is sufficiently small ., In particular , even when , we assume that the inequality is still satisfied ., The chemotactic 
drift velocity , , vanishes if ; it is defined as the mean displacement per unit time of a bacterium starting a new run at a given location ., Clearly , even in the steady state when the current , defined through , vanishes , may be non-vanishing ( see Eq . 8 below ) ., In general , the non-Markovian dynamics make dependent on the initial conditions ., However , in the steady state this dependence is lost and can be calculated , for example , by performing a weighted average over the probability of histories of a bacterium ., This is the quantity that is of interest to us ., An earlier calculation by de Gennes showed that , if the memory preceding the last tumble is ignored , then for a linear profile of nutrient concentration the drift velocity is independent of position and takes the form 8 ., While the calculation applies strictly in a regime with ( because of memory erasure ) , in fact its result captures the behavior well over a wide range of parameters ( see Fig . 4 ) ., To measure in our simulations , we compute the average displacement of the bacterium between two successive tumbles in the steady state , and we extract therefrom the drift velocity ., ( For details of the derivation , see Text S1 . ), We find that is negative for and that its magnitude falls off with increasing values of ( Fig . 4 ) ., We also verify that indeed does not show any spatial dependence ( data shown in Fig . 
of Text S1 ) ., We recall that , in our numerical analysis , we have used a small value of ; this results in a low value of ., We show below that for an experimentally measured bilobe response kernel , obtained by superposition of singular response kernels , the magnitude of becomes larger and comparable with experimental values ., To obtain the diffusivity , , we first calculate the effective mean free path in the coarse-grained model ., The tumbling frequency of a bacterium is and depends on the details of its past trajectory ., In the coarse-grained model , we replace the quantity by an average over all the trajectories within the spatial resolution of the coarse-graining ., Equivalently , in a population of non-interacting bacteria , the average is taken over all the bacteria contained inside a blob , and , hence , denotes the position of the center of mass of the blob at a time in the past ., As mentioned above , the drift velocity is proportional to , so that ., The average tumbling frequency then becomes and , consequently , the mean free path becomes ., As a result , the diffusivity is expressed as ., We checked this form against our numerical results ( Fig . 5 ) ., Having evaluated the drift velocity , , and the diffusivity , , we now proceed to write down the continuity equation ( for a more rigorous but less intuitive approach , see 10 ) ., For a biased random walker on a lattice , with position-dependent hopping rates and towards the right and the left , respectively , one has and , where is the lattice constant ., In the continuum limit , the temporal evolution of the probability density is given by a probability current , as ( 7 ) where the current takes the form ( 8 ) For reflecting boundary condition , in the steady state ., This constraint yields a steady-state slope ( 9 ) for small ., We use our measured values for and ( Figs . 4 and 5 ) , and compute the slope using Eq ., 9 ., ( For details of the measurement of , see Text S1 . 
), We compare our analytical and numerical results in Fig . 2 , which exhibits close agreement ., According to Eq ., 9 , steady-state chemotaxis results from a competition between drift motion and diffusion ., For , the drift motion is directed toward regions with a lower nutrient concentration and hence opposes chemotaxis ., Diffusion is spatially dependent and becomes small for large nutrient concentrations ( again for ) , thus increasing the effective residence time of the bacteria in favorable regions ., For large values of , the drift velocity vanishes and one has a strong chemotaxis as increases ( Fig . 2 ) ., Finally , for , the calculation by de Gennes yields which exactly cancels the spatial gradient of ( to linear order in ) , and there is no accumulation 8 , 11 ., These conclusions are easily generalized to adaptive response functions ., For , within the linear response regime , the effective drift velocity and diffusivity can be constructed by simple linear superposition: The drift velocity reads ., Interestingly , the spatial dependence of cancels out and ., The resulting slope then depends on the drift only and is calculated as ( 10 ) In this case , the coarse-grained model is a simple biased random walker with constant diffusivity ., For and , the net velocity , proportional to , is positive and gives rise to a favorable chemotactic response , according to which bacteria accumulate in regions with high food concentration ., Moreover , the slope increases as the separation between and grows ., We emphasize that there is no incompatibility between strong steady-state chemotaxis and large drift velocity ., In fact , in the case of an adaptive response function , strong chemotaxis occurs only when the drift velocity is large ., For a bilobe response kernel , approximated by a superposition of many delta functions ( Fig . 1 ) , the slope , , can be calculated similarly and in Fig . 
3 we compare our calculation to the simulation results ., We find close agreement in the case of a linear model with a bilobe response kernel and , in fact , also in the case of a non-linear model ( see Text S1 ) ., The experimental bilobe response kernel is a smooth function , rather than a finite sum of singular kernels over a set of discrete values ( as in Fig . 1 ) ., Formally , we integrate singular kernels over a continuous range of to obtain a smooth response kernel ., If we then integrate the expression for the drift velocity obtained by de Gennes , according to this procedure , we find an overall drift velocity , for the concentration gradient considered ( ) ., By scaling up the concentration gradient by a factor of , the value of can also be scaled up by and can easily account for the experimentally measured velocity range ., We carried out a detailed analysis of steady-state bacterial chemotaxis in one dimension ., The chemotactic performance in the case of a linear concentration profile of the chemoattractant , , was measured as the slope of the bacterium probability density profile in the steady state ., For a singular impulse response kernel , , the slope was a scaling function of , which vanished at the origin , increased monotonically , and saturated at large argument ., To understand these results we proposed a simple coarse-grained model in which bacterial motion was described as a biased random walk with drift velocity , , and diffusivity , ., We found that for small enough values of , was independent of and varied linearly with nutrient concentration ., By contrast , was spatially uniform and its value decreased monotonically with and vanished for ., We presented a simple formula for the steady-state slope in terms of and ., The prediction of our coarse-grained model agreed closely with our numerical results ., Our description is valid when is small enough , and all our results are derived to linear order in ., We assume is always satisfied ., 
Our results for an impulse response kernel can be easily generalized to the case of response kernels with arbitrary shapes in the linear model ., For an adaptive response kernel , the spatial dependence of the diffusivity , , cancels out but a positive drift velocity , , ensures bacterial accumulation in regions with high nutrient concentration , in the steady state ., In this case , the slope is directly proportional to the drift velocity ., As the delay between the positive and negative peaks of the response kernel grows , the velocity increases , with consequent stronger chemotaxis ., Earlier studies of chemotaxis 13\u201316 put forth a coarse-grained model different from ours ., In the model first proposed by Schnitzer for a single chemotactic bacterium 14 , he argued that , in order to obtain favorable bacterial accumulation , tumbling rate and ballistic speed of a bacterium must both depend on the direction of its motion ., In his case , the continuity equation reads ( 11 ) where is the ballistic speed and is the tumbling frequency of a bacterium moving toward the left ( right ) ., For E . coli , as discussed above , , a constant independent of the location ., In that case , Eq ., 11 predicts that in order to have a chemotactic response in the steady state , one must have a non-vanishing drift velocity , i . e . 
, ., This contradicts our findings for non-adaptive response kernels , according to which a drift velocity only hinders the chemotactic response ., The spatial variation of the diffusivity , instead , causes the chemotactic accumulation ., This is not captured by Eq ., 11 ., In the case of adaptive response kernels , the diffusivity becomes uniform while the drift velocity is positive , favoring chemotaxis ., Comparing the expression of the flux , , obtained from Eqs ., 7 and 8 with that from Eq ., 11 , and matching the respective coefficients of and , we find and ., As we argued above in discussing the coarse-grained model for adaptive response kernels , both and are spatially independent ., This puts strict restrictions on the spatial dependence of and ., For example , as in E . coli chemotaxis , our coarse-grained description is recovered only if and are also independent of ., We comment on a possible origin of the discrepancy between our work and earlier treatments ., In Ref ., 14 , a continuity equation was derived for the coarse-grained probability density of a bacterium , starting from a pair of approximate master equations for the probability density of a right-mover and a left-mover , respectively ., As the original process is non-Markovian , one can expect a master equation approach to be valid only at scales that exceed the scale over which spatiotemporal correlations in the behavior of the bacterium are significant ., In particular , a biased diffusion model can be viewed as legitimate only if the ( coarse-grained ) temporal resolution allows for multiple runs and tumbles ., If so , at the resolution of the coarse-grained model , left- and right-movers become entangled , and it is not possible to perform a coarse-graining procedure on the two species separately ., Thus one cannot define probability densities for a left- and a right-mover that evolve in a Markovian fashion ., In our case , left- and right-movers are coarse-grained simultaneously , and 
the total probability density is Markovian ., Thus , our diffusion model differs from that of Ref ., 14 because it results from a different coarse-graining procedure ., The model proposed in Ref ., 14 has been used extensively to investigate collective behaviors of E . coli bacteria such as pattern formation 13 , 15 , 16 ., It would be worth asking whether the new coarse-grained description can shed new light on bacterial collective behavior .","headings":"Introduction, Models, Results, Discussion","abstract":"Escherichia coli ( E . coli ) bacteria govern their trajectories by switching between running and tumbling modes as a function of the nutrient concentration they experienced in the past ., At short time one observes a drift of the bacterial population , while at long time one observes accumulation in high-nutrient regions ., Recent work has viewed chemotaxis as a compromise between drift toward favorable regions and accumulation in favorable regions ., A number of earlier studies assume that a bacterium resets its memory at tumbles \u2013 a fact not borne out by experiment \u2013 and make use of approximate coarse-grained descriptions ., Here , we revisit the problem of chemotaxis without resorting to any memory resets ., We find that when bacteria respond to the environment in a non-adaptive manner , chemotaxis is generally dominated by diffusion , whereas when bacteria respond in an adaptive manner , chemotaxis is dominated by a bias in the motion ., In the adaptive case , favorable drift occurs together with favorable accumulation ., We derive our results from detailed simulations and a variety of analytical arguments ., In particular , we introduce a new coarse-grained description of chemotaxis as biased diffusion , and we discuss the way it departs from older coarse-grained descriptions .","summary":"The chemotaxis of Escherichia coli is a prototypical model of navigational strategy ., The bacterium maneuvers by switching between near-straight motion , 
termed runs , and tumbles which reorient its direction ., To reach regions of high nutrient concentration , the run-durations are modulated according to the nutrient concentration experienced in recent past ., This navigational strategy is quite general , in that the mathematical description of these modulations also accounts for the active motility of C . elegans and for thermotaxis in Escherichia coli ., Recent studies have pointed to a possible incompatibility between reaching regions of high nutrient concentration quickly and staying there at long times ., We use numerical investigations and analytical arguments to reexamine navigational strategy in bacteria ., We show that , by accounting properly for the full memory of the bacterium , this paradox is resolved ., Our work clarifies the mechanism that underlies chemotaxis and indicates that chemotactic navigation in wild-type bacteria is controlled by drift while in some mutant bacteria it is controlled by a modulation of the diffusion ., We also propose a new set of effective , large-scale equations which describe bacterial chemotactic navigation ., Our description is significantly different from previous ones , as it results from a conceptually different coarse-graining procedure .","keywords":"physics, statistical mechanics, theoretical biology, biophysics theory, biology, computational biology, biophysics simulations, biophysics","toc":null} +{"Unnamed: 0":4,"id":"journal.pcbi.1005644","year":2017,"title":"A phase transition induces chaos in a predator-prey ecosystem with a dynamic fitness landscape","sections":"In many natural ecosystems , at least one constituent species evolves quickly enough relative to its population growth that the two effects become interdependent ., This phenomenon can occur when selection forces are tied to such sudden environmental effects as algal blooms or flooding 1 , or it can arise from more subtle , population-level effects such as overcrowding or resource depletion 2 ., 
Analysis of such interactions within a unified theory of \u201ceco-evolutionary dynamics\u201d has been applied to a wide range of systems\u2014from bacteria-phage interactions to bighorn sheep 3\u2014by describing population fluctuations in terms of the feedback between demographic change and natural selection 4 ., The resulting theoretical models relate the fitness landscape ( or fitness function ) to population-level observables such as the population growth rate and the mean value of an adapting phenotypic trait ( such as horn length , cell wall thickness , etc ) ., The fitness landscape may have an arbitrarily complex topology , as it can depend on myriad factors ranging from environmental variability 5 , 6 , to inter- and intraspecific competition 7 , 8 , to resource depletion 9 ., However , these complex landscapes can be broadly classified according to whether they result in stabilizing or disruptive selection ., In the former , the landscape may possess a single , global maximum that causes the population of individuals to evolve towards a state in which most individuals have trait values at or near this maximum 10 ., Conversely , in disruptive selection , the fitness landscape may contain multiple local maxima , in which case the population could have a wide distribution of trait values and occupy multiple distinct niches 11 ., In eco-evolutionary models , the shape of the fitness landscape may itself depend on the population densities of the interacting species it describes ., Specifically , the concept that the presence of competition can lead a single-peaked fitness landscape to spontaneously develop additional peaks originates in the context of \u201ccompetitive speciation\u201d first proposed by Rosenzweig 12 ., This is formalized in genetic models in which sympatric speciation is driven by competitive pressures rather than geographic isolation 13 ., Competition-induced disruptive selection has been observed in natural populations of stickleback fish 
14 , microbial communities 15 , and fruit flies 16 , 17 ., Here , we model eco-evolutionary dynamics of a predator-prey system based on first-order \u201cgradient dynamics\u201d 10 , 18 , a class of models that explicitly define the fitness in terms of the population growth rate r , which is taken to depend only on the mean value of the trait across the entire population , c \u00af 19 ., Despite this simplification , gradient dynamics models display rich behavior that can account for a wide range of effects observed in experimental systems\u2014in particular , recent work by Cortez and colleagues has shown that these models can result in irregular cycles and dynamical bifurcations that depend on the standing genetic variation present in a population 20 , 21 ., In our model , gradient dynamics cause the prey fitness landscape to change as a result of predation , and we find that the resulting dynamical system exhibits chaotic dynamics ., Chaos is only possible in systems in which three or more dependent dynamical variables vary in time 22 , and previously it has been observed in predator-prey systems comprising three or more mutually interdependent species , or in which an external environmental variable ( such as seasonal variation or generic noise ) is included in the dynamics 23 , 24 ., Here we show that evolution of just one species in a two-species ecosystem is sufficient to drive the ecosystem into chaos ., Moreover , we find that chaos is driven by a density-dependent change of the fitness landscape from a stabilizing to disruptive state , and that this transition has hysteretic behavior with mathematical properties that are strongly reminiscent of a first-order phase transition in a thermodynamical system ., The resulting dynamics display intermittent properties typically associated with ecosystems poised at the \u201cedge of chaos , \u201d which we suggest has implications for the study of ecological stability and speciation ., Adapting the notation and 
formulation used by Cortez ( 2016 ) 21 , we use a two-species competition model with an additional dynamical variable introduced to account for a prey trait on which natural selection may act ., The most general fitness function for the prey , r , accounts for density-dependent selection on a prey trait c ,, r ( x , y , c \u00af , c ) \u2261 G ( x , c , c \u00af ) - D ( c , c \u00af ) - f ( x , y ) , ( 1 ), where x = x ( t ) is the time-dependent prey density , y = y ( t ) is the time-dependent predator density , c is a trait value for an individual in the prey population , and c \u00af = c \u00af ( t ) is the mean value of the trait across the entire prey population at time t ., r comprises a density-dependent birth rate G , a density-independent death rate D , and a predator-prey interaction term f , which for simplicity is assumed to depend on neither c nor c \u00af ., Thus the trait under selection in our model is not an explicit predator avoidance trait such as camouflage , but rather an endogenous advancement ( i . e . 
, improved fecundity , faster development , or reduced mortality ) that affects the prey\u2019s ability to exploit resources in its environment , even in the absence of predation ., The continuous-time \u201cgradient dynamics\u201d model that we study interprets the fitness r as the growth rate of the prey: 19 , 25, x \u02d9 = x r ( x , y , c \u00af , c ) | c \u2192 c \u00af ( 2 ), y \u02d9 = y ( f ( x , y ) - D \u02dc ( y ) ) ( 3 ), c \u00af \u02d9 = V \u2202 r ( x , y , c \u00af , c ) \/ \u2202 c | c \u2192 c \u00af ., ( 4 ) Eq ( 2 ) is evaluated with all individual trait values c set to the mean value c \u00af because the total prey population density is assumed to change based on the fitness function , which in turn depends on the population-averaged value of the prey trait c \u00af 21 ., The timescale of the dynamics in c \u00af is set by V , which is interpreted as the additive genetic variance of the trait 10 ., While Eq ( 2 ) depends only on the mean trait value c \u00af , the full distribution of individual trait values c present in a real-world population may change over time as the relative frequencies of various phenotypes change ., In principle , additional differential equations of the form of Eq ( 4 ) could be added to account for higher moments of the distribution of c across an ensemble of individuals , allowing the gradient dynamics model to be straightforwardly extended to model a trait\u2019s full distribution rather than just the population mean ., However , here we focus on the case where the prey density dynamics x \u02d9 depend only on the mean trait value to first order , and we do not include differential equations for higher-order moments of the prey trait value distribution ., The use of a single Eq ( 4 ) to describe the full dynamics of the trait distribution represents an approximation that is exact only when the phenotypic trait distribution stays nearly symmetric and the prey population maintains a constant standing genetic variation V 
10 ., However , V may remain fixed even if the phenotypic variance changes , a property that is observed phenomenologically in experimental systems , and which may be explained by time-dependent heritability , breeding effects , mutation , or other transmission effects not explicitly modeled here 26\u201329 ., More broadly , this assumption may imply that gene selection is weak compared to phenotype selection 30 , 31 ., S1D Appendix further describes the circumstances under which V remains fixed , and also provides a first-order estimate of the magnitude of error introduced by ignoring higher-order effects ( such as skewness ) in the trait distribution ., The results suggest that these effects are small for the parameter values ( and resulting range of x and y values ) used here , due in part to limitations on the maximum skewness that a hypothetical trait distribution can achieve on the fitness landscapes studied here ., In S1D Appendix , we also compare the results presented below to an equivalent model in which a full trait distribution is present , in which case Eq ( 2 ) becomes a full integro-differential equation involving averages of the trait value over the entire prey population ., Detailed numerical study of this integro-differential equation is computationally prohibitive for the long timescales studied here , but direct comparison of the contributions of various terms in the velocity field suggests general accuracy of the gradient dynamics model for the fitness landscapes and conditions we study here ., However , in general the appropriateness of the gradient dynamics model should be checked whenever using Eq ( 4 ) with an arbitrary fitness function ., Fig 1A shows a schematic summarizing the gradient dynamics model and noting the primary assumptions underlying this formulation ., Next , we choose functional forms for f , G , D , and D \u02dc in Eqs ( 2 ) and ( 3 ) ., We start with the assumption that , for fixed values of the trait c and its mean c 
\u00af , the population dynamics should have the form of a typical predator-prey system in the absence of evolutionary effects ., Because the predator dynamics are not directly affected by evolutionary dynamics , we choose a simple form for predator growth consisting of a fixed death rate and a standard Holling Type II birth rate , 32, f ( x , y ) = a 2 x y \/ ( 1 + b 2 x ) ( 5 ), D \u02dc ( y ) = d 2 ( 6 ), The predator birth rate f saturates at large values of the prey density , which is more realistic than the standard Lotka-Volterra competition term xy in cases where the prey density is large or fluctuating 22 ., A saturating interaction term ensures that solutions of the system remain bounded for a wider range of parameter values , a necessity for realistic models of long-term interactions 33 ., For the prey net growth rate ( Eq ( 1 ) , the fitness ) in the absence of the predator , we use the following functional forms ,, G ( x , c \u00af , c ) = ( a 1 c \u00af \/ ( 1 + b 1 c \u00af ) ) ( 1 - k 1 x ( c - c \u00af ) ) ( 7 ), D ( c , c \u00af ) = d 1 ( 1 - k 2 ( c ^ 2 - c \u00af ^ 2 ) + k 4 ( c ^ 4 - c \u00af ^ 4 ) ) ., ( 8 ), The first term in Eq ( 7 ) specifies that the prey population density growth rate r | c \u2192 c \u00af depends only on a primary saturating contribution of the mean trait to the birth rate G . 
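The functional forms above can be sketched numerically . The following is a minimal illustration , assuming illustrative placeholder parameter values ( a1 , b1 , a2 , b2 , d1 , d2 , k1 , k2 , k4 , ya , V below are not the paper\u2019s fitted values ) , a forward-Euler time step , and a finite-difference selection gradient standing in for the analytic derivative in Eq ( 4 ):

```python
# Minimal sketch of the gradient-dynamics model, Eqs (2)-(8).
# All parameter values are illustrative assumptions.

def f(x, y, a2=1.0, b2=1.0):
    # Holling Type II predator-prey interaction, Eq (5)
    return a2 * x * y / (1.0 + b2 * x)

def G(x, cbar, c, a1=1.0, b1=1.0, k1=0.5):
    # Saturating, density-dependent birth rate, Eq (7)
    return a1 * cbar / (1.0 + b1 * cbar) * (1.0 - k1 * x * (c - cbar))

def D(c, cbar, d1=0.25, k2=1.0, k4=1.0):
    # Quartic trait-dependent mortality, Eq (8)
    return d1 * (1.0 - k2 * (c**2 - cbar**2) + k4 * (c**4 - cbar**4))

def r(x, y, cbar, c):
    # Prey fitness landscape, Eq (1): birth minus death minus predation
    return G(x, cbar, c) - D(c, cbar) - f(x, y)

def dr_dc(x, y, cbar, h=1e-6):
    # Central finite-difference selection gradient dr/dc at c -> cbar, Eq (4)
    return (r(x, y, cbar, cbar + h) - r(x, y, cbar, cbar - h)) / (2.0 * h)

def step(x, y, cbar, dt=0.01, ya=1.0, a2=1.0, b2=1.0, d2=0.1, V=0.5):
    # One forward-Euler step of Eqs (2)-(4)
    xn = x + dt * x * r(x, y, cbar, cbar)
    yn = y + dt * y * (ya * a2 * x / (1.0 + b2 * x) - d2)
    cn = cbar + dt * V * dr_dc(x, y, cbar)
    return xn, yn, cn
```

Iterating step from positive initial densities traces out the trajectories analyzed below; the finite-difference gradient is used only to keep the sketch self-contained .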
In other models a similar effect is achieved by modifying the mean trait evolution Eq ( 4 ) , such that extremal values of the trait are disadvantaged 21; alternative coupling methods based on exponential saturation would be expected to yield similar qualitative results 19 ., However , the additional series terms in Eqs ( 7 ) and ( 8 ) ensure that any individual\u2019s fitness r may differ from the rest of the population depending on the difference between its trait value c and the population mean c \u00af ., Because the functional form of this difference is unknown , its contribution is expressed as a second-order truncation of the series difference of the form r ( c , c \u00af ) = r \u02dc | c \u2192 0 + ( r \u02dc ( c ) - r \u02dc ( c ) | c \u2192 c \u00af ) ( where r \u02dc represents an unscaled fitness function ) ., This ensures that when c \u02d9 = 0 or c = c \u00af , the system reduces to a standard prey model with a Holling Type II increase in birth rate in response to increasing mean trait value 25 ., In the results reported below , we observe that all dynamical variables remain bounded as long as parameter values are chosen such that the predator density does not equilibrate to zero ., This is a direct consequence of our use of saturating Holling Type II functional forms in Eqs ( 7 ) and ( 8 ) , which prevent the fitness landscape from increasing without bound at large c , c \u00af and also ensure that the predator and prey densities do not jointly diverge ., That the dynamics should stay bounded due to saturating terms is justified by empirical studies of predator-prey systems 34 , 35; moreover , other saturating functional forms are expected to yield similar results if equivalent parameter values are chosen 33 , 36 ., The nonlinear dependence of the mortality rate Eq ( 8 ) on the trait is based on mechanistic models of mortality with individual variation 19 , 37 , 38 ., The specific choice of a quartic in Eq ( 8 ) allows the fitness function r to have a 
varying number of real roots and local maxima in the domain c , c \u00af > 0 , affording the system dynamical freedom not typically possible in predator-prey models with constant or linear prey fitness\u2014in particular , for different values of k2 , k4 the fitness landscape can have a single optimal phenotype , multiple optimal phenotypes , or no optimal intermediate values ., Because any even , continuous form for the fitness landscape can be approximated using a finite number of terms in its Taylor series around c = 0 , our choice of a quartic form simply constitutes truncation of this expansion at the quartic order in order to include the simplest case in which the fitness function admits multiple local maxima\u2014for this reason , a quartic will always represent the leading-order series expansion of a fitness landscape with multiple local maxima ., Below , we observe numerically that \u2223 c - c \u00af \u2223 < 1 , ex post facto justifying truncation of the higher order terms in this series expansion ., However , if the trait value c were strictly bounded to only take non-zero values on a finite interval ( as opposed to the entire real line ) , then a second-order , quadratic fitness landscape would be sufficient to admit multiple local maxima ( at the edges of the interval ) 14 ., The choice here of an unbounded trait value c avoids creating boundary effects , and it has little consequence due to the steep decay of the quartic function at large values of |c| , which effectively confines the possible values of c \u00af accessible by the system ., In physics , similar reasons\u2014unbounded domains , multiple local optima , and continuity\u2014typically justify the use of quartic free energy functions in minimal models of systems exhibiting multiple energetic optima , such as the Ginzburg-Landau free energy used in models of superconducting phase transitions 39 ., We note that the birth rate Eq ( 7 ) contributes a density-dependent term to the 
fitness function even in the absence of predation ( y = 0 ) 21 ., Unlike the death rate function , the effect of the individual trait value on this term is directional: the sign of c - c \u00af determines whether birth rates increase or decrease ., As the population density x increases , these directional effects are amplified , consistent with the observed effect of intraspecific competition and crowding in experimental studies of evolution 40 , 41 ., The chaotic dynamics reported below arise from this density-dependent term because the term prevents the Jacobian of the system ( 2 ) , ( 3 ) and ( 4 ) from having a row and column with all zeros away from the diagonal; otherwise , the prey trait ( and thus evolutionary dynamics ) would be uncoupled from the rest of the system , and would thus relax to a stable equilibrium ( as is necessary for a first-order single-variable equation ) ., In that case , c \u00af would essentially remain fixed and the predator-prey dynamics would become two-dimensional in x and y , precluding chaos ., For similar reasons , density-dependent selection has been found to be necessary for chaos in some discrete-time evolutionary models , for which chaotic dynamics require a certain minimum degree of association between the fitness and the trait frequencies 42 ., Inserting Eqs ( 5 ) , ( 7 ) and ( 8 ) into Eq ( 1 ) results in a final fitness function of the form, r ( x , y , c \u00af , c ) = ( a 1 c \u00af \/ ( 1 + b 1 c \u00af ) ) ( 1 - k 1 x ( c - c \u00af ) ) - d 1 ( 1 - k 2 ( c ^ 2 - c \u00af ^ 2 ) + k 4 ( c ^ 4 - c \u00af ^ 4 ) ) - a 2 x y \/ ( 1 + b 2 x ) ., ( 9 ), This fitness landscape is shown in Fig 1B , for typical parameter values and predator and prey densities used in the numerical results below ., Depending on the current predator and prey densities , the local maximum of the system can appear in two different locations , which directly affects the dynamics described in the next section ., Inserting Eq ( 9 ) into Eqs ( 2 ) , ( 3 ) and 
( 4 ) results in a final form for the dynamical equations ,, x \u02d9 = x ( a 1 c \u00af \/ ( 1 + b 1 c \u00af ) - a 2 y \/ ( 1 + b 2 x ) - d 1 ) ( 10 ), y \u02d9 = y ( y a a 2 x \/ ( 1 + b 2 x ) - d 2 ) ( 11 ), c \u00af \u02d9 = c \u00af V ( ( 2 k 2 d 1 ) - ( 4 k 4 d 1 ) c \u00af ^ 2 - ( a 1 k 1 ) x \/ ( 1 + b 1 c \u00af ) ) ., ( 12 ), Due to the Holling coupling terms , the form of these equations qualitatively resembles models of vertical , tritrophic food webs\u2014the mean trait value c \u00af affects the growth rate of the prey , which in turn affects the growth rate of the predator 24 , 32 , 43 ., The coupling parameter ya introduces asymmetry into the competition when ya \u2260 1; however , it essentially acts as a scale factor that only affects the amplitude of the y cycles and equilibria rather than the dynamics ., Additionally , because the predator-prey interaction term Eq ( 5 ) is unaffected by the trait , our model contains no triple-product x y c \u00af interaction terms , which typically stabilize the dynamics ., For our analysis of the system ( 10 ) , ( 11 ) and ( 12 ) , we first consider the case where evolution proceeds very slowly relative to population dynamics ., In the case of both no evolution ( V = 0 ) and no predation ( y = 0 ) , the prey growth Eq ( 10 ) advances along the one-dimensional nullcline y \u02d9 , c \u00af \u02d9 = 0 , y = 0 ., Depending on whether the fixed mean trait value c \u00af exceeds a critical value ( c \u00af \u2020 \u2261 d 1 \/ ( a 1 - b 1 d 1 ) ) , the prey density will either grow exponentially ( c \u00af > c \u00af \u2020 ) or collapse exponentially ( c \u00af < c \u00af \u2020 ) because the constant c \u00af remains too low to sustain the prey population in the absence of evolutionary adaptation ., The requirement that c \u00af > c \u00af \u2020 carries over to the case where a predator is added to the system but evolutionary dynamics remain fixed , corresponding to a two-dimensional system advancing along the two-dimensional nullcline c 
\u00af \u02d9 = 0 ., In this case , as long as c \u00af > c \u00af \u2020 , the prey density can exhibit continuous growth or cycling depending on the relative magnitudes of the various parameters in Eqs ( 10 ) and ( 11 ) ., The appearance and disappearance of these cycles are determined by a series of bifurcations that depends on the values of c \u00af and b1 , b2 relative to the remaining parameters a1 , a2 , d1 , d2 ( S1A Appendix ) ., In the full three-variable system ( 10 ) , ( 11 ) and ( 12 ) , c \u00af passes through a range of values as time progresses , resulting in more complex dynamics than those observed in the two-dimensional case ., For very small values of V , the evolutionary dynamics c \u00af \u02d9 are slow enough that the system approaches the equilibrium predicted by the two-variable model with c \u00af constant ., The predator and prey densities initially grow , but the prey trait value does not change fast enough for the prey population growth to be sustained\u2014eventually resulting in extinction of both the predator and prey ., However , if V takes a slightly larger value , so that the mean trait value can gradually change with a growing prey population density ( due to the density-dependent term in Eq ( 10 ) ) , then the population dynamics begin to display regular cycling with fixed frequencies and amplitudes ( Fig 2A , top ) ., This corresponds to a case where the evolutionary dynamics are slow compared to the ecological dynamics , but not so slow as to be completely negligible ., Finally , when V is the same order of magnitude as the parameters governing the ecological dynamics , the cycles become fully chaotic , with both amplitudes and frequencies that vary widely over even relatively short time intervals ( Fig 2A , bottom ) ., Typically , the large V case would correspond to circumstances in which the prey population develops a large standing genetic variation 10 , 44 ., That the dynamics are chaotic , rather than quasi-periodic 
, is suggested by the presence of multiple broad , unevenly-spaced peaks in the power spectrum 45 ( Figure A in S1E Appendix ) , as well as by numerical tabulation of the Lyapunov spectrum ( described further below ) ., Due to the hierarchical coupling of Eqs ( 10 ) , ( 11 ) and ( 12 ) , when plotted in three dimensions the chaotic dynamics settle onto a strange attractor that resembles the \u201cteacup\u201d attractor found in models of tritrophic food webs 24 , 46 ( Fig 2B ) ., Poincar\u00e9 sections through various planes of the teacup all appear linear , suggesting that the strange attractor is effectively two-dimensional\u2014consistent with pairings of timescales associated with different dynamical variables at different points in the process ( Figure B in S1E Appendix ) ., In the \u201crim\u201d of the teacup , the predator density changes slowly relative to the prey density and mean trait value ., This is visible in a projection of the attractor into the x - c \u00af plane ( Fig 2B , bottom inset ) ., However , in the \u201chandle\u201d of the teacup , the mean trait value varies slowly relative to the ecological dynamics ( c \u00af \u02d9 \u2248 0 ) , resulting in dynamics that qualitatively resemble the two-dimensional \u201creduced\u201d system described above for various fixed values of c \u00af ( Fig 2B , top inset ) ., The structure of the attractor suggests that the prey alternately enters periods of evolutionary change and periods of competition with the predator ., A closer inspection of a typical transition reveals that this \u201ctwo timescale\u201d dynamical separation is responsible for the appearance of chaos in the system ( Fig 3A ) ., As the system explores configuration space , it reaches a metastable configuration corresponding to a high mean trait value c \u00af , which causes the prey density to nearly equilibrate to a low density due to the negative density-dependent term in Eq ( 10 ) ., During this period ( the \u201crim\u201d of the teacup 
) , the predator density gradually declines due to the lack of prey ., However , once the predator density becomes sufficiently small , the prey population undergoes a sudden population increase , which triggers a period of rapid cycling in the system ( the \u201chandle\u201d of the teacup attractor ) ., During this time , the predator density continuously increases , causing an equivalent decrease in the prey density that resets the cycle to the metastable state ., The sudden increase in the prey population at low predator densities can be understood from how the fitness function r ( from Eq ( 9 ) ) changes over time ., Fig 3B shows a kymograph of the log-scaled fitness Eq ( 9 ) as a function of individual trait values c , across each timepoint and corresponding set of ( x , y , c \u00af ) values given in panel A . Overlaid on this time-dependent fitness landscape are curves indicating the instantaneous location of the local maximum ( black ) and minimum ( white ) ., By comparing panels A and B , it is apparent that the mean trait value during the \u201cmetastable\u201d period of the dynamics stays near the local maximum of the fitness function , which barely varies as the predator density y changes ., However , when y ( t ) \u2248 0 . 
25 , the fitness function changes so that the local minimum and local maximum merge and disappear from the system , leading to a new maximum spontaneously appearing at c = 0 ., Because V is large enough ( for these parameters ) that the gradient dynamics occur over timescales comparable to the competition dynamics , the system tends to move rapidly towards this new maximum in the fitness landscape , resulting in rapidly-changing dynamics in x and c \u00af ., Importantly , because of the symmetric coupling of the prey fitness landscape r to the prey density x , this rapid motion resets the fitness landscape so that the maximum once again occurs at the original value , resulting in a period of rapid cycling ., The fitness landscape at two representative timepoints in the dynamics is shown in Fig 3C ., That the maxima in the fitness function ( 9 ) suddenly change locations with continuous variation in x , y is a direct consequence of the use of a high-order ( here , quartic ) polynomial in c to describe the fitness landscape ., The quartic represents the simplest analytic function that admits more than one local maximum in its domain , and the number of local maxima is governed by the relative signs of the coefficients of the ( c ^ 2 - c \u00af ^ 2 ) and ( c ^ 4 - c \u00af ^ 4 ) terms in Eq ( 9 ) , which change when the system enters the rapid cycling portion of the chaotic dynamics at t = 500 in Fig 3A ., This transition marks the mean prey trait switching from being drawn ( via the gradient dynamics ) to a single fitness peak at an intermediate value of the trait ceq \u2248 0 . 
707 to being drawn instead to one of two peaks: the existing peak , or a new peak at the origin ., Thus the metastable period of the dynamics corresponds to a period of stabilizing selection: if the fitness landscape were frozen in time during this period , then an ensemble of prey would all evolve to a single intermediate trait value corresponding to the location of the global maximum ., Conversely , if the fitness landscape were held fixed in the multipeaked form it develops during a period of rapid cycling , given sufficient time an ensemble of prey would evolve towards subpopulations with trait values at the location of each local fitness maximum\u2014representing disruptive selection ., That the fitness landscape does not remain fixed for extended durations in either a stabilizing or disruptive state\u2014but rather switches between the two states due to the prey density-dependent term in Eq ( 9 ) \u2014 underlies the onset of chaotic cycling in the model ., Density-dependent feedback similarly served to induce chaos in many early discrete-time ecosystem models 23 ., However , the \u201ctwo timescale\u201d form of the chaotic dynamics and strange attractor here is a direct result of reversible transitions between stabilizing and disruptive selection ., If the assumptions underlying the gradient dynamics model do not strictly hold\u2014if the additive genetic variance V slowly varies via an additional dynamical equation , or if the initial conditions are such that significant skewness would be expected to persist in the phenotypic distribution\u2014then the chaotic dynamics studied here would be transient rather than indefinite ., While the general stability analysis shown above ( and in the S1 Appendix ) would still hold , additional dynamical equations for V or for high-order moments of the trait distribution would introduce additional constraints on the values of the parameters , which would ( in general ) increase the opportunities for the dynamics to become 
unstable and lead to diverging predator or prey densities ., However , in some cases these additional effects may actually serve to stabilize the system against both chaos and divergence ., For example , if additional series terms were included in Eq ( 8 ) such that the dependence of mortality rate on c \u00af and c had an upper asymptote 25 , then c \u00af \u02d9 = 0 would be true for a larger range of parameter values\u2014resulting in the dynamical system remaining planar for a larger range of initial conditions and parameter values , precluding chaos ., The transition between stabilizing and disruptive selection that occurs when the system enters a period of chaotic cycling is strongly reminiscent of a first-order phase transition ., Many physical systems can be described in terms of a free energy landscape , the negative gradient of which determines the forces acting on the system ., Minima of the free energy landscape correspond to equilibrium points of the system , which the dynamical variables will approach with first-order dynamics in an overdamped limit ., When a physical system undergoes a phase transition\u2014a qualitative change in its properties as a single \u201ccontrol\u201d parameter , an externally-manipulable variable such as temperature , is smoothly varied\u2014the transition can be understood in terms of how the control parameter changes the shape of the free energy landscape ., The Landau free energy model represents the simplest mathematical framework for studying such phase transitions: a one-dimensional free energy landscape is defined as a function of the control parameter and an additional independent variable , the \u201corder parameter , \u201d a derived quantity ( such as particle density or net magnetization ) with respect to which the free energy can have local minima or maxima ., In a first-order phase transition in the Landau model , as the control parameter monotonically changes the relative depth of a local minimum at the 
origin decreases , until a new local minimum spontaneously appears at a fixed nonzero value of the order parameter\u2014resulting in dynamics that suddenly move towards the new minimum , creating discontinuities in thermodynamic properties of the system such as the entropy 47 ., First-order phase transitions are universal physical models , which have been used to describe a broad range of processes spanning from superconductor breakdown 48 to primordial black hole formation in the early universe 49 ., In the predator-prey model with prey evolution , the fitness function is analogous to the free energy , with the individual trait value c serving as the \u201corder parameter\u201d for the system ., The control parameter for the transition is the prey density , x , which directly couples into the dynamics via the density-dependent term in Eq ( 7 ) ., Because the fitness consists of a linear combination of this term in Eq ( 7 ) and a quartic landscape Eq ( 8 ) , the changing prey density \u201ctilts\u201d the landscape and provokes the appearance of the additional , disruptive peak visible in Fig 3C ., The appearance and disappearance of local maxima as the system switches between stabilizing and disruptive selection is thus analogous to a first-order phase transition , with chaotic dynamics being a consequence of repeated increases and decreases of the control parameter x above and below the critical prey densities x* , x** at which the phase transition occurs ., Similar chaotic dynamics emerge from repeated first-order phase transitions in networks of coupled oscillators , which may alternate between synchronized and incoherent states that resemble the \u201cmetastable\u201d and \u201crapid cycling\u201d portions of the predator-prey dynamics 50 ., The analogy between a first-order phase transition and the onset of disruptive selection can be used to study the chaotic dynamics in terms of dynamical hysteresis , a defining feature of such phase transitions 47 ., For 
different values of x , the three equilibria corresponding to the locations of the local minima and maxima of the fitness landscape , ceq , can be calculated from the roots of the cubic in Eq ( 12 ) ., The resulting plots of ceq vs x in Fig 4 are generated by solving for the roots in the limit of fast prey equilibration , c \u00af \u2192 c e q , which holds in the vicinity of the equilibria ( S1B Appendix ) ., The entry into the transient chaotic cycling occurs when x increases gradually and shifts ceq with it; x eventually attains a critical value x* ( x* \u2248 0 . 45 for the parameters used in the figures ) , causing ceq to jump from its first critical value c* to the origin ( the red \u201cforward\u201d branch in Fig 4 ) ., This jump causes rapid re-equilibration of c \u00af ( t ) , resulting in the rapid entry into cycling observable in Fig 3A ., However , x cannot increase indefinitely due to predation; rather , it decreases until it reaches a second critical value x** , at which point ceq jumps back from the origin to a positive value ( the blue \u201creturn\u201d branch in Fig 4; x** = 0 . 
192 for these parameter values ) ., This second critical point marks the return to the metastable dynamics in Fig 3A ., This asymmetry in the forward and backward dynamics of x leads to dynamical time-irreversibility ( hysteresis ) and the jagged , sawtooth-like cycles visible in the dynamics of the full system ., Because the second jump in ceq is steeper , the parts of the trajectories associated with the \u201creturn\u201d transition in Fig 3A appear steeper ., Additionally , the maximum value obtained by c \u00af ( t ) anywhere on the attractor , c e q m a x , is determined by the limiting value o","headings":"Introduction, Model, Results, Discussion","abstract":"In many ecosystems , natural selection can occur quickly enough to influence the population dynamics and thus future selection ., This suggests the importance of extending classical population dynamics models to include such eco-evolutionary processes ., Here , we describe a predator-prey model in which the prey population growth depends on a prey density-dependent fitness landscape ., We show that this two-species ecosystem is capable of exhibiting chaos even in the absence of external environmental variation or noise , and that the onset of chaotic dynamics is the result of the fitness landscape reversibly alternating between epochs of stabilizing and disruptive selection ., We draw an analogy between the fitness function and the free energy in statistical mechanics , allowing us to use the physical theory of first-order phase transitions to understand the onset of rapid cycling in the chaotic predator-prey dynamics ., We use quantitative techniques to study the relevance of our model to observational studies of complex ecosystems , finding that the evolution-driven chaotic dynamics confer community stability at the \u201cedge of chaos\u201d while creating a wide distribution of opportunities for speciation during epochs of disruptive selection\u2014a potential observable signature of chaotic 
eco-evolutionary dynamics in experimental studies .","summary":"Evolution is usually thought to occur very gradually , taking millennia or longer in order to appreciably affect a species survival mechanisms ., Conversely , demographic shifts due to predator invasion or environmental change can occur relatively quickly , creating abrupt and lasting effects on a species survival ., However , recent studies of ecosystems ranging from the microbiome to oceanic predators have suggested that evolutionary and ecological processes can often occur over comparable timescales\u2014necessitating that the two be addressed within a single , unified theoretical framework ., Here , we show that when evolutionary effects are added to a minimal model of two competing species , the resulting ecosystem displays erratic and chaotic dynamics not typically observed in such systems ., We then show that these chaotic dynamics arise from a subtle analogy between the evolutionary concept of fitness , and the concept of the free energy in thermodynamical systems ., This analogy proves useful for understanding quantitatively how the concept of a changing fitness landscape can confer robustness to an ecosystem , as well as how unusual effects such as history-dependence can be important in complex real-world ecosystems ., Our results predict a potential signature of a chaotic past in the distribution of timescales over which new species can emerge during the competitive dynamics , a potential waypoint for future experimental work in closed ecosystems with controlled fitness landscapes .","keywords":"ecology and environmental sciences, predator-prey dynamics, population dynamics, systems science, mathematics, population biology, thermodynamics, computer and information sciences, ecosystems, dynamical systems, free energy, community ecology, physics, population metrics, ecology, predation, natural selection, trophic interactions, biology and life sciences, physical sciences, population density, 
evolutionary biology, evolutionary processes","toc":null} +{"Unnamed: 0":1522,"id":"journal.pgen.1000098","year":2008,"title":"Evaluating Statistical Methods Using Plasmode Data Sets in the Age of Massive Public Databases: An Illustration Using False Discovery Rates","sections":"\u201cOmic\u201d technologies ( genomic , proteomic , etc . ) have led to high dimensional experiments ( HDEs ) that simultaneously test thousands of hypotheses ., Often these omic experiments are exploratory , and promising discoveries demand follow-up laboratory research ., Data from such experiments require new ways of thinking about statistical inference and present new challenges ., For example , in microarray experiments an investigator may test thousands of genes aiming to produce a list of promising candidates for differential genetic expression across two or more treatment conditions ., The larger the list , the more likely some genes will prove to be false discoveries , i . e . genes not actually affected by the treatment ., Statistical methods often estimate both the proportion of tested genes that are differentially expressed due to a treatment condition and the proportion of false discoveries in a list of genes selected for follow-up research ., Because keeping the proportion of false discoveries small ensures that costly follow-on research will yield more fruitful results , investigators should use some statistical method to estimate or control this proportion ., However , there is no consensus on which of the many available methods to use 1 ., How should an investigator choose ?, Although the performance of some statistical methods for analyzing HDE data has been evaluated analytically , many methods are commonly evaluated using computer simulations ., An analytical evaluation ( i . e . 
, one using mathematical derivations to assess the accuracy of estimates ) may require either difficult-to-verify assumptions about a statistical model that generated the data or a resort to asymptotic properties of a method ., Moreover , for some methods an analytical evaluation may be mathematically intractable ., Although evaluations using computer simulations may overcome the challenge of intractability , most simulation methods still rely on the assumptions inherent in the statistical models that generated the data ., Whether these models accurately reflect reality is an open question , as is how to determine appropriate parameters for the model , what realistic \u201ceffect sizes\u201d to incorporate in selected tests , as well as if and how to incorporate correlation structure among the many thousands of observations per unit 2 ., Plasmode data sets may help overcome the methodological challenges inherent in generating realistic simulated data sets ., Catell and Jaspers 3 made early use of the term when they defined a plasmode as \u201ca set of numerical values fitting a mathematico-theoretical model . That it fits the model may be known either because simulated data is produced mathematically to fit the functions , or because we have a real\u2014usually mechanical\u2014situation which we know with certainty must produce data of that kind . \u201d, Mehta et al . ( p . 946 ) 2 more concisely refer to a plasmode as \u201ca real data set whose true structure is known . 
\u201d, The plasmodes can accommodate unknown correlation structures among genes , unknown distributions of effects among differentially expressed genes , an unknown null distribution of gene expression data , and other aspects that are difficult to model using theoretical distributions ., Not surprisingly , the use of plasmode data sets is gaining traction as a technique of simulating reality-based data from HDEs 4 ., A plasmode data set can be constructed by spiking specific mRNAs into a real microarray data set 5 ., Evaluating whether a particular method correctly detects the spiked mRNAs provides information about the methods ability to detect gene expression ., A plasmode data set can also be constructed by using a current data set as a template for simulating new data sets for which some truth is known ., Although in early microarray experiments , sample sizes were too small ( often only 2 or 3 arrays per treatment condition ) to use as a basis for a population model for simulating data sets , larger HDE data sets have recently become publicly available , making their use feasible for simulation experiments ., In this paper , we propose a technique to simulate plasmode data sets from previously produced data ., The source-data experiment was conducted at the Center for Nutrient\u2013Gene Interaction ( CNGI , www . uab . 
edu\/cngi ) , at the University of Alabama at Birmingham ., We use a data set from this experiment as a template for producing a plasmode null data set , and we use the distribution of effect sizes from the experiment to select expression levels for differentially expressed genes ., The technique is intuitively appealing , relatively straightforward to implement , and can be adapted to HDEs in contexts other than microarray experiments ., We illustrate the value of plasmodes by comparing 15 different statistical methods for estimating quantities of interest in a microarray experiment , namely the proportion of true nulls ( hereafter denoted \u03c00 ) , the false discovery rate ( FDR ) 6 and a local version of FDR ( LFDR ) 7 ., This type of analysis enables us , for the first time , to compare key omics research tools according to their performance in data that , by definition , are realistic exemplars of the types of data biologists will encounter ., The illustrations given here provide some insight into the relative performance characteristics of the 15 methods in some circumstances , but definitive claims regarding uniform superiority of one method over another would require more extensive evaluations over multiple types of data sets ., Steps for plasmode creation that are described herein are relatively straightforward ., First , an HDE data set is obtained that reflects the type of experiment for which statistical methods will be used to estimate quantities of interest ., Data from a rat microarray experiment at CNGI were used here ., Other organisms might produce data with different structural characteristics and methods may perform differently on such data ., The CNGI data were obtained from an experiment that used rats to test the pathways and mechanisms of action of certain phytoestrogens 8 , 9 ., In brief , rats were divided into two large groups , the first sacrificed at day 21 ( typically the day of weaning for rats ) , the second sacrificed at day 50 ( 
the day , corresponding to late human puberty , when rats are most susceptible to chemically induced breast cancer ) ., Each of these groups was subdivided into smaller groups according to diet ., At 21 and 50 days , respectively , the relevant tissues from these rat groups were appropriately processed , and gene expression levels were extracted using GCOS ( GeneChip Operating Software ) ., We exported the microarray image ( * . CEL ) files from GCOS and analyzed them with the Affymetrix Package of Bioconductor\/R to extract the MAS 5 . 0 processed expression intensities ., The arrays and data were investigated for outliers using Pearsons correlation , spatial artifacts 10 and a deleted residuals approach 11 ., It is important to note that only one normalization method was considered , but the methods could be compared on RMA normalized data as well ., In fact , comparisons of methods performances on data from different normalization techniques could be done using the plasmode technique ., Second , an HDE data set that compares effect of a treatment ( s ) is analyzed and the vector of effect sizes is saved ., The effect size used here was a simple standardized mean difference ( i . e . , a two sample t-statistics ) but any meaningful metric could be used ., Plasmodes , in fact , could be used to compare the performance of statistical methods when different statistical tests were used to produce the P-values ., We chose two sets of HDE data as templates to represent two distributions of effect sizes and two different null distributions ., We refer to the 21-day experiment using the control group ( 8 arrays ) and the treatment group ( EGCG supplementation , 10 arrays ) as data set 1 , and the 50-day experiment using the control group ( 10 arrays ) and the treatment group ( Resveratrol supplementation , 10 arrays ) as data set, 2 . 
There were 31 , 042 genes on each array , and two sample pooled variance t-tests for differential expression were used to create a distribution of P-values ., Histograms of the distributions for both data sets are shown in Figure, 1 . The distribution of P-values for data set 1 shows a stronger signal ( i . e . , a larger collection of very small P-values ) than that for data set 2 , suggesting either that more genes are differentially expressed or that those that are expressed have a larger magnitude treatment effect ., This second step provided a distribution of effects sizes from each data set ., Next , create the plasmode null data set ., For each of the HDE data sets , we created a random division of the control group of microarrays into two sets of equal size ., One consideration in doing so is that if some arrays in the control group are \u2018different\u2019 from others due to some artifact in the experiment , then the null data set can be sensitive to how the arrays are divided into two sets ., Such artifacts can be present in data from actual HDEs , so this issue is not a limitation of plasmode use but rather an attribute of it , that is , plasmodes are designed to reflect actual structure ( including artifacts ) in a real data set ., We obtained the plasmode null data set from data set 1 by dividing the day 21 control group of 8 arrays into two sets of 4 , and for data set 2 by dividing the control group of 10 arrays into two sets of 5 arrays ., Figure 2 shows the two null distributions of P-values obtained using the two sample t-test on the plasmode null data sets ., Both null distributions are , as expected , approximately uniform , but sampling variability allows for some deviation from uniformity ., A proportion 1\u2212\u03c00 of effect sizes were then sampled from their respective distributions using a weighted probability sampling technique described in the Methods section ., What sampling probabilities are chosen can be a tuning parameter in the 
plasmode creation procedure ., The selected effects were incorporated into the associated null distribution for a randomly selected proportion 1\u2212\u03c00 of genes in a manner also described in the Methods section ., What proportion of genes is selected may depend upon how many genes in an HDE are expected to be differentially expressed ., This may determine whether a proportion equal to 0 . 01 or 0 . 5 is chosen to construct a plasmode ., Proportions between 0 . 05 and 0 . 2 were used here as they are in the range of estimated proportions of differentially expressed genes that we have seen from the many data sets we have analyzed ., Finally , the plasmode data set was analyzed using a selected statistical method ., We used two sample t-tests to obtain a plasmode distribution of P-values for each plasmode data set because the methods compared herein all analyze a distribution of P-values from an HDE ., P-values were declared statistically significant if smaller than a threshold \u03c4 ., Box 1 summarizes symbol definitions ., When comparing the 15 statistical methods , we used three values of \u03c00 ( 0 . 8 , 0 . 9 , and 0 . 95 ) and two thresholds ( \u03c4\\u200a=\\u200a0 . 01 and 0 . 001 ) ., For each choice of \u03c00 and threshold \u03c4 , we ran B\\u200a=\\u200a100 simulations ., All 15 methods provided estimates of \u03c00 , 14 provided estimates of FDR , and 7 provided estimates of LFDR ., Because the true values of \u03c00 and FDR are known for each plasmode data set , we can compare the accuracy of estimates from the different methods ., There are two basic strategies for estimating FDR , both predicated on an estimated value for \u03c00 , the first using equation ( 1 ) below , the second using a mixture model approach ., Let PK\\u200a=\\u200aM\/K be the proportion of tests that were declared significant at a given threshold , where M and K were defined with respect to quantities in Table, 1 . 
Then one estimate for FDR at this threshold is , ( 1 ) The mixture model ( usually a two-component mixture ) approach uses a model of the form , ( 2 ) where f is a density , p represents a P-value , f0 a density of a P-value under the null hypothesis , f1 a density of a P-value under the alternative hypothesis , \u03c00 is interpreted as before , and \u03b8 a ( possibly vector ) parameter of the distribution ., Since valid P-values are assumed , f0 is a uniform density ., LFDR is defined with respect to this mixture model as , ( 3 ) FDR is defined similarly except that the densities in ( 3 ) are replaced by the corresponding cumulative distribution functions ( CDF ) , that is , ( 4 ) where F1 ( \u03c4 ) is the CDF under the alternative hypothesis , evaluated at a chosen threshold \u03c4 ., ( There are different definitions of FDR and the definition in ( 4 ) is , under some conditions , the definition of a positive false discovery rate 12 ., However , in cases with a large number of genes many of the variants of FDR are very close 13 ) ., The methods are listed for quick reference in Table, 2 . Methods 1\u20138 use different estimates for \u03c00 and , as implemented herein , proceed to estimate FDR using equation ( 1 ) ., Method 9 uses a unique algorithm to estimate LFDR and does not supply an estimate of FDR ., Methods 10\u201315 are based on a mixture model framework and estimate FDR and LFDR using equations ( 3 ) and ( 4 ) where the model components are estimated using different techniques ., All methods were implemented using tuning parameter settings from the respective paper or ones supplied as default values with the code in cases where the code was published online ., First , to compare their differences , we used the 15 methods to analyze the original two data sets , with data set 1 having a \u201cstronger signal\u201d ( i . e . , lower estimates of \u03c00 and FDR ) ., Estimates of \u03c00 from methods 3 through 15 ranged from 0 . 742 to 0 . 
837 for data set 1 and 0 . 852 to 0 . 933 for data set, 2 . ( Methods 1 and 2 are designed to control for rather than estimate FDR and are designed to be conservative; hence , their estimates were much closer to, 1 . ) Results of these analyses can be seen in the Supplementary Tables S1 and S2 ., Next , using the two template data sets we constructed plasmode data sets in order to compare the performance of the 15 methods for estimating \u03c00 ( all methods ) , FDR ( all methods except method 9 ) , and LFDR ( methods 9\u201315 ) ., Figures 3 and 4 show some results based on data set, 2 . More results are available in the Figures S1 , S2 , S3 , S4 , S5 , and S6 ., Figure 3 shows the distribution of 100 estimates for \u03c00 using data set 2 when the true value of \u03c00 is equal to 0 . 8 and 0 . 9 ., Methods 1 and 2 are designed to be conservative ( i . e . , true values are overestimated ) ., With a few exceptions , the other methods tend to be conservative when \u03c00\\u200a=\\u200a0 . 8 and liberal ( the true value is underestimated ) when \u03c00\\u200a=\\u200a0 . 9 ., The variability of estimates for \u03c00 is similar across methods , but some plots show a slightly larger variability for methods 12 and 15 when \u03c00\\u200a=\\u200a0 . 9 ., Figure 4 shows the distribution of estimates for FDR and LFDR at the two thresholds ., The horizontal lines in the plots show the mean ( solid line ) and the minimum and maximum ( dashed lines ) of the true FDR value for the 100 simulations ., A true value for LFDR is not known in the simulation procedure ., The methods tend to be conservative ( overestimate FDR ) when the threshold \u03c4\\u200a=\\u200a0 . 
01 and are more accurate at the lower threshold ., Estimates of FDR are more variable for methods 11 , 13 , and 14 and estimates for LFDR more variable for methods 13 and 14 , with the exception of a few unusual estimates obtained from method 9 ., The high variability of FDR estimates from method 11 may be due to a \u201cless than optimal\u201d choice of the spanning parameter in a numerical smoother ( see also Pounds and Cheng 27 ) ., We did not attempt to tune any of the methods for enhanced performance ., Researchers have been evaluating the performance of the burgeoning number of statistical methods for the analysis of high dimensional omic data , relying on a mixture of mathematical derivations , computer simulations , and sadly , often single dataset illustrations or mere ipse dixit assertions ., Recognizing that the latter two approaches are simply unacceptable approaches to method validation 2 and that the first two suffer from limitations described earlier , an increasing number of investigators are turning to plasmode datasets for method evaluation 28 ., An excellent example is the Affycomp website ( http:\/\/affycomp . biostat . jhsph . 
edu\/ ) that allows investigators to compare different microarray normalization methods on datasets of known structure ., Other investigators have also recently used plasmode-like approaches which they refer to as \u2018data perturbation\u2019 29 , 30 , yet it is not clear that these \u2018perturbed datasets\u2019 can distinguish true from false positives , suggesting greater need for articulation of principles or standards of plasmode generation ., As more high dimensional experiments with larger sample sizes become available , researchers can use a new kind of simulation experiment to evaluate the performance of statistical analysis methods , relying on actual data from previous experiments as a template for generating new data sets , referred to herein as plasmodes ., In theory , the plasmode method outlined here will enable investigators to choose on an empirical basis the most appropriate statistical method for their HDEs ., Our results also suggest that large , searchable databases of plasmode data sets would help investigators find existing data sets relevant to their planned experiments ., ( We have already implemented a similar idea for planning sample size requirements in HDEs 31 , 32 . 
), Investigators could then use those data sets to compare and evaluate several analytical methods to determine which best identifies genes affected by the treatment condition ., Or , investigators could use the plasmode approach on their own data sets to glean some understanding of how well a statistical method works on their type of data ., Our results compare the performance of 15 statistical methods as they process the specific plasmode data sets constructed from the CNGI data ., Although identifying one uniformly superior method ( if there is one ) is difficult within the limitations of this one comparison , our results suggest that certain methods could be sensitive to tuning parameters or different types of data sets ., A comparison over multiple types of source data sets with different distributions of effects sizes could add the detail necessary to clearly recommend certain methods over others 1 ., Other papers have used simulation studies to compare the performance of methods for estimating \u03c00 and FDR ( e . g . , Hsueh et al . 33; Nguyen 34; Nettleton et al . 
35 ) ., We compared methods that use the distribution of P-values as was done in Broberg 36 and Yang and Yang 37 ., Unlike our plasmode approach , most earlier comparison studies used normal distributions to simulate gene expression data and incorporated dependence using a block diagonal correlation structure as in Allison et al 26 ., A key implication and recommendation of our paper is that , as data from the growing number of HDEs is made publicly available , researchers may identify a previous HDE similar to one they are planning or have recently conducted and use data from these experiments to construct plasmode data sets with which to evaluate candidate statistical methods ., This will enable investigators to choose the most appropriate method ( s ) for analyzing their own data and thus to increase the reliability of their research results ., In this manner , statistical science ( as a discipline that studies the methods of statistics ) becomes as much an empirical science as a theoretical one ., The quantities in Table 1 are those for a typical microarray experiment ., Let N\\u200a=\\u200aA+B and M\\u200a=\\u200aC+D and note that both N and M will be known and K\\u200a=\\u200aN+M ., However , the number of false discoveries is equal to an unknown number C . 
The proportion of false discoveries for this experiment is C\/M ., Benjamini and Hochberg 6 defined FDR as , P ( M>0 ) where I{M>0} is an indicator function equal to 1 if M>0 and zero otherwise ., Storey 12 defined the positive FDR as ., Since P ( M>0 ) \u22651\u2212 ( 1\u2212\u03c4 ) K , and since K is usually very large , FDR\u2248pFDR , so we do not distinguish between FDR and pFDR as the parameter being estimated and simply refer to it as FDR with estimates denoted ( and ) ., Suppose we identify a template data set corresponding to a two treatment comparison for differential gene expression for K genes ., Obtain a vector , \u03b4 , of effect sizes ., One suggestion is the usual t-statistic , where the ith component of \u03b4 , is given by ( 5 ) where ntrt , nctrl are number of biological replicates in the treatment and control group , respectively , X\u0305i , trt , X\u0305i , ctrl are the mean gene expression levels for gene i in treatment and control groups , and , is the usual pooled sample variance for the ith gene , where the two sample variances are given by , ., In what follows , we will use this choice for \u03b4i since it allows for effects to be described by a unitless quantity , i . e . 
, it is scaled by the standard error of the observed mean difference X\u0305i , trt\u2212X\u0305i , ctrl for each gene ., For convenience , assume that nctrl is an even number and divide the control group into two sets of equal size ., Requiring nctrl\u22654 allows for at least two arrays in each set , thus allowing estimates of variance within each of the two sets ., This will be the basis for the plasmode \u201cnull\u201d data set ., There are ways of making this division ., Without loss of generality , assume that the first nctrl\/2 arrays after the division are the plasmode control group and the second nctrl\/2 are the plasmode treatment group ., Specify a value of \u03c00 and specify a threshold , \u03c4 , such that a P-value \u2264\u03c4 is declared evidence of differential expression ., Execute the following steps ., One can then obtain another data set and repeat the entire process to evaluate a method on a different type of data , perhaps from a different organism having a different null distribution , or a different treatment type giving a different distribution of effect sizes , \u03b4 ., Alternatively , one might choose to randomly divide the control group again and repeat the entire process ., This would help assess how differences in arrays within a group or possible correlation structure might affect results from a method ., If some of the arrays in the control group have systematic differences among them ( e . g . , differences arising from variations in experimental conditions\u2014day , operator , technology , etc . 
) , then the null distribution can be sensitive to the random division of the original control group into the two plasmode groups , particularly if nctrl is small .","headings":"Introduction, Results, Discussion, Methods","abstract":"Plasmode is a term coined several years ago to describe data sets that are derived from real data but for which some truth is known ., Omic techniques , most especially microarray and genomewide association studies , have catalyzed a new zeitgeist of data sharing that is making data and data sets publicly available on an unprecedented scale ., Coupling such data resources with a science of plasmode use would allow statistical methodologists to vet proposed techniques empirically ( as opposed to only theoretically ) and with data that are by definition realistic and representative ., We illustrate the technique of empirical statistics by consideration of a common task when analyzing high dimensional data: the simultaneous testing of hundreds or thousands of hypotheses to determine which , if any , show statistical significance warranting follow-on research ., The now-common practice of multiple testing in high dimensional experiment ( HDE ) settings has generated new methods for detecting statistically significant results ., Although such methods have heretofore been subject to comparative performance analysis using simulated data , simulating data that realistically reflect data from an actual HDE remains a challenge ., We describe a simulation procedure using actual data from an HDE where some truth regarding parameters of interest is known ., We use the procedure to compare estimates for the proportion of true null hypotheses , the false discovery rate ( FDR ) , and a local version of FDR obtained from 15 different statistical methods .","summary":"Plasmode is a term used to describe a data set that has been derived from real data but for which some truth is known ., Statistical methods that analyze data from high dimensional 
experiments ( HDEs ) seek to estimate quantities that are of interest to scientists , such as mean differences in gene expression levels and false discovery rates ., The ability of statistical methods to accurately estimate these quantities depends on theoretical derivations or computer simulations ., In computer simulations , data for which the true value of a quantity is known are often simulated from statistical models , and the ability of a statistical method to estimate this quantity is evaluated on the simulated data ., However , in HDEs there are many possible statistical models to use , and which models appropriately produce data that reflect properties of real data is an open question ., We propose the use of plasmodes as one answer to this question ., If done carefully , plasmodes can produce data that reflect reality while maintaining the benefits of simulated data ., We show one method of generating plasmodes and illustrate their use by comparing the performance of 15 statistical methods for estimating the false discovery rate in data from an HDE .","keywords":"biotechnology, mathematics, science policy, computational biology, molecular biology, genetics and genomics","toc":null} +{"Unnamed: 0":931,"id":"journal.pcbi.1006166","year":2018,"title":"Variability in pulmonary vein electrophysiology and fibrosis determines arrhythmia susceptibility and dynamics","sections":"Success rates for catheter ablation of persistent atrial fibrillation ( AF ) patients are currently low; however , there is a subset of patients for whom pulmonary vein isolation ( PVI ) alone is a successful treatment strategy 1 ., PVI ablation may work by preventing triggered beats from entering the left atrial body , or by converting rotors or functional reentry around the left atrial\/pulmonary vein ( LA\/PV ) junction to anatomical reentry around a larger circuit , potentially converting AF to a simpler tachycardia 2 ., It is difficult to predict whether PVI represents a sufficient 
treatment strategy for a given patient with persistent AF 1 , and it is unclear what to do for the majority of patients for whom it is not effective ., Patients with AF exhibit distinct properties in effective refractory period ( ERP ) and conduction velocity ( CV ) in the PVs ., For example , paroxysmal AF patients have shorter ERP and longer conduction delays compared to control patients 3 ., AF patients show a number of other differences to control patients: PVs are larger 4; PV fibrosis is increased; and fiber direction may be more disorganised , particularly at the PV ostium 5 ., There are also differences within patient groups; for example , patients for whom persistent AF is likely to terminate after PVI have a larger ERP gradient compared to those who require further ablation 1 , 3 ., Electrical driver location changes as AF progresses; drivers ( rotors or focal sources ) are typically located close to the PVs in early AF , but are also located elsewhere in the atria with longer AF duration 6 ., Atrial fibrosis is a major factor associated with AF and modifies conduction ., However , there is conflicting evidence on the relationship between fibrosis distribution and driver location 7 , 8 ., It is difficult to clinically separate the individual effects of these factors on arrhythmia susceptibility and maintenance ., We hypothesise that the combination of PV properties and atrial body fibrosis determines driver location and , thus , the likely effectiveness of PVI ., In this study , we tested this hypothesis by using computational modelling to gain mechanistic insight into the individual contribution of PV ERP , CV , fiber direction , fibrosis and anatomy on arrhythmia susceptibility and dynamics ., We incorporated data on APD ( action potential duration , as a surrogate for ERP ) and CV for the PVs to determine mechanisms underlying arrhythmia susceptibility , by testing inducibility from PV ectopic beats ., We also predicted driver location , and PVI 
outcome ., All simulations were performed using the CARPentry simulator ( available at https:\/\/carp . medunigraz . at\/carputils\/ ) ., We used a previously published bi-atrial bilayer model 9 , which consists of resistively coupled endocardial and epicardial surfaces ., This model incorporates detailed atrial structure and includes transmural heterogeneity at a similar computational cost to surface models ., We chose to use a bilayer model rather than a volumetric model incorporating thickness for this study because of the large numbers of parameters investigated , which was feasible with the reduced computational cost of the bilayer model ., As previously described , the bilayer model was constructed from computed tomography scans of a patient with paroxysmal AF , which were segmented and meshed to create a finite element mesh suitable for electrophysiology simulations ., Fiber information was included in the model using a semi-automatic rule based method that matches histological descriptions of atrial fiber orientation 10 ., The left atrium of the bilayer model consists of linearly coupled endocardial and epicardial layers , while the right atrium is an epicardial layer , with endocardial atrial structures including the pectinate muscles and crista terminalis ., The left and right atrium of the model are electrically connected through three pathways: Bachmann\u2019s bundle , the coronary sinus and the fossa ovalis ., Tissue conductivities were tuned to human activation mapping data from Lemery et al . 
9 , 11 ., The Courtemanche-Ramirez-Nattel human atrial ionic model was used with changes representing electrical remodelling during persistent AF 12 , together with a doubling of sodium conductance to produce realistic action potential upstroke velocities 9 , and a decrease in IK1 by 20% to match clinical restitution data 13 ., Regional heterogeneity in repolarisation was included by modifying ionic conductances of the cellular model , as described in Bayer et al . 14 , which follows Aslanidi et al . and Seemann et al . 15 , 16 ., Parameters for the baseline PV model were taken from Krueger et al . 17 ., The following PV properties were varied as shown in schematic Fig 1: APD , CV , fiber direction , the inclusion of fibrosis in the PVs and the atrial geometry ., These are described in the following sections ., To investigate the effects of PV length and diameter on arrhythmia inducibility and arrhythmia dynamics , bi-atrial bilayer meshes were constructed from MRI data for twelve patients ., All patients gave written informed consent; this study is in accordance with the Declaration of Helsinki , and approved by the Institutional Ethics Committee at the University of Bordeaux ., Patient-specific models with electrophysiological heterogeneity and fiber direction were constructed using our modelling pipeline , which uses a universal atrial coordinate system to map scalar and vector data from the original bilayer model to a new patient specific mesh ., Late gadolinium enhancement MRI ( average resolution 0 . 625mm x 0 . 625mm x 2 . 5mm ) was performed using a 1 . 5T system ( Avanto , Siemens Medical Solutions , Erlangen , Germany ) ., These LGE-MRI data were manually segmented using the software MUSIC ( Electrophysiology and Heart Modeling Institute , University of Bordeaux , Bordeaux France , and Inria , Sophia Antipolis , France , http:\/\/med . inria . 
fr ) ., The resulting endocardial surfaces were meshed ( using the Medical Imaging Registration Toolkit mcubes algorithm 18 ) and cut to create open surfaces at the mitral valve , the four pulmonary veins , the tricuspid valve , and each of the superior vena cava , the inferior vena cava and the coronary sinus using ParaView software ( Kitware , Clifton Park , NY , USA ) ., The meshes were then remeshed using mmgtools meshing software ( http:\/\/www . mmgtools . org\/ ) , with parameters chosen to produce meshes with an average edge length of 0 . 34mm to match the resolution of the previously published bilayer model 9 ., Two atrial coordinates were defined for each of the LA and RA , which allow automatic transfer of atrial structures to the model , such as the pectinate muscles and Bachmann\u2019s bundle ., These coordinates were also used to map fiber directions to the bilayer model ., To investigate the effects of PV electrophysiology on arrhythmia inducibility and dynamics , we varied PV APD and CV by modifying the value of the inward rectifier current ( IK1 ) conductance and tissue level conductivity respectively ., IK1 conductance was chosen in this case to investigate macroscopic differences in APD 19 , although several ionic conductances are known to change with AF 20 ., Modifications were either applied homogeneously or following an ostial-distal gradient ., This gradient was implemented by calculating geodesic distances from the rim of mesh nodes at the distal PV boundary to all nodes in the PV and from the rim of nodes at the LA\/PV junction to all nodes in the PV ., The ratio of these two distances was then used as a distance parameter from the LA\/PV junction to the distal end of the PV ( see Fig 1 ) ., IK1 conductance was multiplied by a value in the range 0 . 5\u20132 . 
5 , resulting in PV APDs in the clinical range of 100\u2013190ms 3 , 21 , 22 ., This rescaling was either a homogeneous change or followed a gradient along the PV length ., Gradients of IK1 conductance varied from the baseline value at the LA\/PV junction , to a maximum scaling factor at the distal boundary ., PV APDs are reported at 90% repolarisation for a pacing cycle length of 1000ms ., LA APD is 185ms , measured at a LA pacing cycle length of 200ms ., To cover the clinically observed range of PV CVs , longitudinal and transverse tissue conductivities were divided by 1 , 2 , 3 or 5 , resulting in CVs , measured along the PV axis , in the range: 0 . 28\u20130 . 67m\/s 3 , 21\u201324 ., To model heterogeneous conduction slowing , conductivities were varied as a function of distance from the LA\/PV junction , ranging from baseline at the junction to a maximum rescaling ( minimum conductivity ) at the distal boundary ., The direction of this gradient was also reversed to model conduction slowing at the LA\/PV junction 5 ., Motivated by the findings of Hocini et al . 5 , interstitial fibrosis was modelled for the PVs with a density varying along the vein , increasing from the LA\/PV junction to the distal boundary ., This was implemented by randomly selecting edges of elements of the mesh with probability scaled by the distance parameter and the angle of the edge compared to the element fiber direction , where edges in the longitudinal fiber direction were four times more likely to be selected than those in the transverse direction , following our previous methodology 25 ., To model microstructural discontinuities , no flux boundary conditions were applied along the connected edge networks , following Costa et al . 26 ., An example of modelled PV interstitial fibrosis is shown in S1A Fig . 
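The interstitial-fibrosis edge selection described above (selection probability scaled by the ostial-distal distance parameter, with edges along the fiber direction four times more likely to be chosen than transverse ones) can be sketched as follows. This is a minimal illustration, not the published implementation: the function name, `base_prob`, and the linear ramp of the anisotropy weight are all assumptions.

```python
import numpy as np

def select_fibrotic_edges(distance_param, cos_fiber_angle, base_prob=0.1,
                          longitudinal_factor=4.0, rng=None):
    """Flag mesh edges as fibrotic (no-flux) interfaces.

    distance_param  : (n_edges,) values in [0, 1]; 0 at the LA/PV junction,
                      1 at the distal PV boundary.
    cos_fiber_angle : (n_edges,) |cos| of the angle between each edge and
                      the local fiber direction (1 = longitudinal edge).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Weight between 1 (transverse edge) and longitudinal_factor (aligned edge),
    # so longitudinal edges are ~4x more likely to be selected (assumed linear ramp).
    aniso = 1.0 + (longitudinal_factor - 1.0) * np.abs(cos_fiber_angle)
    # Fibrosis density increases toward the distal vein via the distance parameter.
    prob = np.clip(base_prob * np.asarray(distance_param) * aniso, 0.0, 1.0)
    return rng.random(prob.shape) < prob
```

No-flux boundary conditions would then be applied along the connected networks of selected edges, as described in the text.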
For a subset of simulations , interstitial fibrosis was incorporated in the biatrial model based on late gadolinium enhancement ( LGE ) -MRI data , using our previously published methodology 25 ., In brief , likelihood of interstitial fibrosis depended on both LGE intensity and the angle of the edge compared to the element fiber direction ( see S1B Fig ) ., LGE intensity distributions were either averaged over a population of patients 27 , or for an individual patient ., The averaged distributions were for patients with paroxysmal AF ( averaged over 34 patients ) , or persistent AF ( averaged over 26 patients ) ., For patient-specific simulations , the model arrhythmia dynamics were compared to AF recordings from a commercially available non-invasive ECGi mapping technology ( CardioInsight Technologies Inc . , Cleveland , OH ) for which phase mapping analysis was performed as previously described 28 ., PV fiber direction shows significant inter-patient variability ., Endocardial and epicardial fiber direction in the four PVs was modified according to fiber arrangements described in the literature 5 , 29 , 30 ., Six arrangements were considered , as follows:, 1 . circular arrangement on both the endocardium and epicardium;, 2 . spiralling arrangement on both the endocardium and epicardium;, 3 . circular arrangement on the endocardium , with longitudinal epicardial fibers;, 4 . fibers progress from longitudinal at the distal vein to circumferential at the ostium , with identical endocardial and epicardial fibers;, 5 . epicardial layer fibers as per case 4 , with circumferential endocardial fibers;, 6 . as per case 4 , but with a chaotic fiber arrangement at the LA\/PV junction ., These fiber distributions are shown in S2 Fig . 
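For cases 4-6 above, the fiber angle varies with position along the vein. A minimal sketch of one way to generate such a field is below (the 90-degree sweep from circumferential at the ostium to longitudinal at the distal end, with case 6 adding Gaussian disorder that is largest at the ostium); the function signature and the `noise_scale` magnitude are assumptions for illustration.

```python
import numpy as np

def pv_fiber_angle(distance_param, chaotic=False, noise_scale=30.0, rng=None):
    """Fiber angle in degrees from circumferential, along the vein.

    distance_param: 0 at the LA/PV junction (ostium), 1 at the distal rim.
    Cases 4-5: linear sweep from circumferential (0 deg) at the junction to
    longitudinal (90 deg) at the distal end.
    Case 6 (chaotic=True): adds independent Gaussian perturbations scaled by
    the distance from the distal boundary, i.e. largest at the ostium.
    """
    d = np.asarray(distance_param, dtype=float)
    angle = 90.0 * d
    if chaotic:
        rng = np.random.default_rng() if rng is None else rng
        angle = angle + noise_scale * (1.0 - d) * rng.standard_normal(d.shape)
    return angle
```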
Cases 4\u20136 were implemented by setting the fiber angle to be a function of the distance along the vein , measured from the LA\/PV junction to the distal boundary , varying from circumferential at the junction to longitudinal at the distal end ( representing a change of 90 degrees ) ., The disorder in fiber direction at the LA\/PV junction for case 6 was implemented by taking the fibers of case 4 and adding independent standard Gaussian distributions scaled by the distance from the distal boundary , resulting in the largest perturbations at the ostium ., Arrhythmia inducibility was tested by extrastimulus pacing from each of the four PVs individually using a clinically motivated protocol 31 , to simulate the occurrence of PV ectopics ., Simulations were performed for each of the PVs , to determine the effects of ectopic beat location on inducibility ., Sinus rhythm was simulated by stimulating the sinoatrial node region of the model at a cycle length of 700ms throughout the simulation ., Each PV was paced individually with five beats at a cycle length of 160ms , and coupling intervals between the first PV beat and a sinus rhythm beat in the range 200\u2013480 ms . Thirty-two pacing protocols were applied for each model set-up: eight coupling intervals ( coupling interval = 200 , 240 , 280 , 320 , 360 , 400 , 440 , 480ms ) , for each of the four PVs ., Inducibility is reported as the proportion of cases resulting in reentry , termed the inducibility ratio ., The effects of PVI were determined for model set-ups that used the original bilayer geometry and in which the arrhythmia lasted for greater than two seconds ., PVI was applied two seconds post AF initiation in each case by setting the tissue conductivity close to zero ( 0 . 001 S\/m ) in the regions shown in S3 Fig . 
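The pacing protocol above enumerates 32 combinations (eight coupling intervals for each of the four PVs), with the inducibility ratio defined as the fraction of protocols producing reentry. A small sketch, using the PV labels from the Results and a hypothetical `induced` mapping of protocol outcomes:

```python
from itertools import product

PVS = ["LSPV", "LIPV", "RSPV", "RIPV"]
COUPLING_INTERVALS_MS = [200, 240, 280, 320, 360, 400, 440, 480]

def inducibility_ratio(induced):
    """Fraction of the 32 pacing protocols (PV x coupling interval) that
    resulted in reentry. `induced` maps (pv, coupling_interval_ms) -> bool;
    missing protocols are treated as non-inducible."""
    protocols = list(product(PVS, COUPLING_INTERVALS_MS))  # 4 * 8 = 32
    hits = sum(bool(induced.get(p, False)) for p in protocols)
    return hits / len(protocols)
```

For example, a model set-up inducible from the RSPV at every coupling interval and from the LSPV at one interval would score 9/32, comparable to the inducibility ratios reported in the Results.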
For each case , ten seconds of arrhythmia data were analysed , starting from two seconds post AF initiation , to identify re-entrant waves and wavefront break-up using phase ., The phase of the transmembrane voltage was calculated for each node of the mesh using the Hilbert transform , following subtraction of the mean 32 ., Phase singularities ( PSs ) for the transmembrane potential data were identified by calculating the topological charge of each element in the mesh 33 , and PS spatial density maps were calculated using previously published methods 14 ., PS density maps were then partitioned into the LA body , PV regions , and the RA to assess where drivers were located in relation to the PVs ( see S3 Fig ) ., The PV region was defined as the areas enclosed by , and including , the PVI lines; the LA region was then the rest of the LA and left atrial appendage ., The PV PS density ratio was then defined as the total PV PS count divided by the total model PS count over both atria ., A difference in APD between the model LA and PVs was required for AF induction ., Modelling the PVs using LA cellular properties resulted in non-inducibility , whereas modelling the LA using PV cellular properties resulted in either non-inducibility or macroreentry ., The effects of modifying PV APD homogeneously or following a gradient are shown in Table 1 ., Simulations in which PV APD was longer than LA APD were non-inducible ( PV APD 191ms ) ., As APD was decreased below the baseline value ( 181ms ) , inducibility initially increased and then fluctuated ., Comparing cases with equal distal APD , arrhythmia inducibility was significantly higher for APD following an ostial-distal gradient than for homogeneous APD ( p = 0 . 
03 from McNemar\u2019s test ) ., PS location was also affected by PV APD ., PV PS density was low in cases of short APD , an example of which is shown in Fig 2 where reentry is no longer seen around the LA\/PV junction in the case of short APD ( 120ms ) ., This change was more noticeable for cases with homogeneous PV APD than for a gradient in APD; PV reentry was observed for the baseline case and a heterogeneous APD case , but not for a homogeneous decrease in APD ., Arrhythmia inducibility decreased with homogeneous CV slowing ( from 0 . 38 i . e . 12\/32 at 0 . 67m\/s to 0 . 03 i . e . 1\/32 at 0 . 28m\/s ) ., In the baseline model , reentry occurs close to the LA\/PV junction due to conduction block when the paced PV beat encounters a change in fiber direction at the base of the PVs , together with a longer LA APD compared to the PV APD ., In this case , the wavefront encounters a region of refractory tissue due to the longer APD in the LA ., However , when PV CV is slowed homogeneously , the wavefront takes longer to reach the LA tissue , giving the tissue enough time to recover , such that conduction block and reentry no longer occurs ., Modifying conductivity following a gradient means that , unlike the homogeneous case , the time taken for the extrastimulus wavefront to reach the LA tissue is similar to the baseline case , so the LA tissue might still be refractory and conduction block might occur ., In the case that conduction was slowest at the distal vein , the inducibility was similar to the baseline case ( see Table 2 , GA , inducibility is 0 . 38 at baseline and 0 . 34 for the cases with CV slowing ) ., Cases with greatest conduction slowing at the LA\/PV junction ( see Table 2 , GB ) exhibit an increase in inducibility ( from 0 . 38 to 0 . 
53 ) when CV is decreased because of the discontinuity in conductivity at the junction ., Fig 2 shows that reentry is seen around the LA\/PV junction in cases with both baseline and slow CV , indicating that the presence of reentry at the LA\/PV junction is independent of PV CV ., PV conduction properties are also affected by PV fiber direction ., Modifications in fiber direction increased inducibility compared to the baseline fiber direction ( baseline case: 0 . 38; modified fiber direction cases 1-6: 0 . 53-0 . 63 ) ., The highest inducibility occurred with circular fibers at the ostium ( cases 1 and 4 , 0 . 63 ) , independent of fiber direction at the distal PV end ., This inducibility was reduced if the epicardial fibers were not circular at the ostium ( case 3 , 0 . 56 ) , or if fibers were spiralling ( case 2 , 0 . 56 ) instead of circular ., Next we investigated the interplay between PV properties and atrial fibrosis ., LA fibrosis properties were varied to represent interstitial fibrosis in paroxysmal and persistent AF patients , incorporating average LGE-MRI distributions 27 into the model ., These control , paroxysmal and persistent AF levels of fibrosis were then combined with PV properties varied as follows: baseline CV and APD ( 0 . 67m\/s , 181ms ) , slow CV ( 0 . 51m\/s ) , short APD ( 120ms ) , slow CV and short APD ., PS distributions in Fig 2 show that reentry occurred around the LA\/PV junction in the case of baseline PV APD for control or paroxysmal levels of fibrosis , but not for shorter PV APD ., Modifying PV CV did not affect whether LA\/PV reentry is observed ., Rotors were found to stabilise to regions of high fibrosis density in the persistent AF case ., Models with PV fibrosis had a higher inducibility compared to the baseline case ( 0 . 47 vs . 0 . 
38 ) and a higher PV PS density since reentry localised there ., Fig 3 shows an example with moderate PV fibrosis ( A ) in which reentry changed from around the RIPV to the LIPV later in the simulation; adding a higher level of PV fibrosis resulted in a more stable reentry around the right PVs ( B ) ., The relationship between LA fibrosis and PV properties on driver location was investigated on an individual patient basis for four patients ., For patients for whom rotors were located away from the PVs ( Fig 4 LA1 ) , increasing model fibrosis from low to high increased the model agreement with clinical PS density 2 . 3 \u00b1 1 . 0 fold ( comparing the sensitivity of identifying clinical regions of high PS density using model PS density between the two simulations ) ., For other patients , lower levels of fibrosis were more appropriate ( 2 . 1 fold increase in agreement for lower fibrosis , Fig 4 LA2 ) , and PV isolation converted fibrillation to macroreentry in the model ., Arrhythmia inducibility showed a large variation between patient geometries ( 0 . 16\u20130 . 47 ) ., Increasing PV area increased inducibility to a different degree for each vein: right superior PV ( RSPV ) inducibility was generally high ( > 0 . 75 for all but one geometry ) independent of PV area; left superior PV ( LSPV ) inducibility increased with PV area ( Spearman\u2019s rank correlation coefficient of 0 . 36 indicating positive correlation; line of best fit gradient 0 . 27 , R2 = 0 . 3 ) ; left inferior PV ( LIPV ) and right inferior PV ( RIPV ) inducibility exhibited a threshold effect , in which veins were only inducible above a threshold area ( Fig 5A ) ., There is no clear relationship between PV length and inducibility ., PV PS density ratio increased as PV area increased ( Fig 5B , Spearman\u2019s rank correlation coefficient of 0 . 
41 indicating positive correlation ) ., Fig 5C shows that rotor and wavefront trajectories depend on patient geometry , exhibiting varied importance of the PVs compared to other atrial regions ., PVI outcome was assessed for cases with varied PV APD ( both with a homogeneous change or following a gradient ) , with the inclusion of PV fibrosis and with varied PV fiber direction because these factors were found to affect the PV PS density ratio ., PVI outcome was classified into three classes depending on the activity 1 second after PVI was applied in the model: termination , meaning there was no activity; macroreentry , meaning that there was a macroreentry around the LA\/PV junctions; AF sustained by LA rotors , meaning there were drivers in the LA body ., These classes accounted for different proportions of the outcomes: termination ( 27 . 3% of cases ) , macroreentry ( 39 . 4% ) , or AF sustained by LA rotors ( 33 . 3% ) ., Calculating the PV PS density ratio before PVI for each of these classes shows that cases in which the arrhythmia either terminated or changed to a macroreentry are characterised by a statistically higher PV PS density ratio pre-PVI than cases sustained by LA rotors post-PVI ( see Fig 6 , t-test comparing termination and LA rotors shows they are significantly different , p<0 . 001; comparing macroreentry and LA rotors p = 0 . 
01 ) ., High PV PS density ratio may indicate likelihood of PVI success ., In this computational modelling study , we demonstrated that the PVs can play a large role in arrhythmia maintenance and initiation , beyond being simply sources of ectopic beats ., We separated the effects of PV properties and atrial fibrosis on arrhythmia inducibility , maintenance mechanisms and the outcome of PVI , based on population or individual patient data ., PV properties affect arrhythmia susceptibility from ectopic beats; short PV APD increased arrhythmia susceptibility , while longer PV APD was found to be protective ., Arrhythmia inducibility increased with slower CV at the LA\/PV junction , but not for cases with homogeneous CV changes or slower CV at the distal PV ., The effectiveness of PVI is usually attributed to PV ectopy , but our study demonstrates that the PVs affect reentry in other ways and this may , in part , also account for success or failure of PVI ., Both PV properties and fibrosis distribution affect arrhythmia dynamics , which varies from meandering rotors to PV reentry ( in cases with baseline or long APD ) , and then to stable rotors at regions of high fibrosis density ., PS density in the PV region was high for cases with PV fibrosis ., The measurement of fibrosis and PV properties may indicate patient specific susceptibility to AF initiation and maintenance ., PV PS density before PVI was higher in cases in which AF terminated or converted to a macroreentry; thus , high PV PS density may indicate likelihood of AF termination by PVI alone ., Repolarisation is heterogeneous in the PVs 23 , and exhibits distinct properties in AF patients , with Rostock et al . reporting a greater decrease in PV ERP than LA ERP in patients with AF , termed AF begets AF in the PVs 21 ., Jais et al . 
found that PV ERP is greater than LA ERP in control patients , but this gradient is reversed in AF patients 3 ., ERP measured at the distal PV is shorter than at the LA\/PV junction during AF 5 , 22 ., Motivated by these clinical and experimental studies , we modelled a decrease in PV APD , which was applied either homogeneously , or as a gradient of decreasing APD along the length of the PV , with the shortest APD at the distal PV rim ., An initial decrease in APD increased inducibility ( Table 1 ) , which agrees with clinical findings of increased inducibility for AF patients ., Applying this change following a gradient , as observed in previous studies , led to an increased inducibility compared to a homogeneous change in APD ., Similar to Calvo et al . 34 we found that rotor location depends on PV APD ( Fig 2 ) ., Thus PV APD affects PVI outcome in two ways; on the one hand , decreasing APD increases inducibility , emphasising the importance of PVI in the case of ectopic beats; on the other hand , PV PS density decreases for cases with short PV APD , and PVI was less likely to terminate AF ., Multiple studies have measured conduction slowing in the PVs 3 , 5 , 21\u201324 ., We modelled changes in tissue conductivity either homogeneously , or as a function of distance along the PV ., Simply decreasing conductivity and thus decreasing CV , decreased inducibility ( Table 2 ) ., Kumagai et al . reported that conduction delay was longer for the distal to ostial direction 22 ., We found that modifying conductivity following a gradient , with CV decreasing towards the LA\/PV junction , resulted in an increase in inducibility in the model ., This agrees with the clinical observations of Pascale et al . 
1 ., This suggests that PVI should be performed in cases in which CV decreases towards the LA\/PV junction as these cases have high inducibility ., Changes in CV may also be due to other factors , including gap junction remodelling , modified sodium conductance or changes in fiber direction 5 , 29 ., A variety of PV fiber patterns have been described in the literature and there is variability between patients ., Interestingly , all of the PV fiber directions considered in our study showed an increased inducibility compared to the baseline model ., Verheule et al . 29 documented circumferential strands that spiral around the lumen of the veins , motivating the arrangements for cases 1 and 4 in our study; Aslanidi et al . 15 reported that fibers run in a spiralling arrangement ( case 2 ) ; Ho et al . 30 measured mainly circular or spiral bundles , with longitudinal bundles ( cases 3 and 5 ) ; Hocini et al . 5 reported longitudinal fibers at the distal PV , with circumferential and a mixed chaotic fiber direction at the PV ostium ( case 6 ) ., Using current imaging technologies , PV fiber direction cannot be reliably measured in vivo ., In our study , fiber direction at the PV ostium was found to be more important than at the distal PV; the greatest inducibility was for cases with circular fibers at the ostium on both endocardial and epicardial surfaces , independent of fiber direction at the distal PV end ., Similar to modelling studies by both Coleman 35 and Aslanidi 15 , inducibility increased due to conduction block near the PVs ., PVs may be larger in AF patients compared to controls 4 , 36 , and this difference may vary between veins; Lin et al . 
found dilatation of the superior PVs in patients with focal AF originating from the PVs , but no difference in the dimensions of inferior PVs compared to control or to patients with focal AF from the superior vena cava or crista terminalis 37 ., We found that inducibility increased with PV area for the LSPV , LIPV and RIPV , but not for the RSPV ( see Fig 5 ) ., In addition , PV PS density ratio increased with total PV area , suggesting that PVI alone is more likely to be a successful treatment strategy in the case of larger veins ., However , Den Uijl et al . found no relation between PV dimensions and the outcome of PVI 38 ., Rotors were commonly found in areas of high surface curvature , including the LA\/PV junction and left atrial appendage ostia , which agrees with findings of Tzortzis et al . 39 ., However , there were differences in PS density between geometries , with varying importance of the LA\/PV junction ( Fig 5 ) , demonstrating the importance of modelling the geometry of an individual patient ., Myocardial tissue within the PVs is significantly fibrotic , which may lead to slow conduction and reentry 5 , 30 , 40 ., More fibrosis is found in the distal PV , with increased connective tissue deposition between myocardial cells 41 ., We modelled interstitial PV fibrosis with increasing density distally , and found that the inclusion of PV fibrosis increased PS density in the PV region of the model due to increased reentry around the LA\/PV junction and wave break in the areas of fibrosis ., This , together with the results in Fig 6 , suggests that PVI alone is more likely to be successful in cases of high PV fibrosis ., There are multiple methodologies for modelling atrial fibrosis 25 , 42 , 43 , and the choice of method may affect this localisation ., Population based distributions of atrial fibrosis were modelled for paroxysmal and persistent patients , together with varied PV properties ., The presence of LA\/PV reentry depends on both PV 
properties and the presence of fibrosis; reentry is seen at the LA\/PV junction for cases with baseline PV APD , but not for short PV APD , and stabilised to areas of high fibrosis in persistent AF , for which LA\/PV reentry no longer occurred ., This suggests that rotor location depends on both fibrosis and PV properties ., This finding may explain the clinical findings of Lim et al . in which drivers are primarily located in the PV region in early AF , but AF complexity increased with increased AF duration , and drivers are also located at sites away from the PVs 6 ., During early AF , PV properties may be more important , while with increasing AF duration , there is increased atrial fibrosis in the atrial body that affects driver location ., This suggests that in cases with increased atrial fibrosis in the atrial body , ablation in addition to PVI is likely to be required ., Simulations of models with patient-specific atrial fibrosis together with varied PV properties performed in this study offer a proof of concept for using this approach in future studies ., The level of atrial fibrosis and PV properties that gave the best fit of the model PS density to the clinical PS density varied between patients ., Measurement of PV ERP and conduction properties using a lasso catheter before PVI could be used to tune the model properties , together with LGE-MRI or an electro-anatomic voltage map ., It is difficult to predict whether PVI alone is likely to be a successful treatment strategy for a patient with persistent AF 44 ., This will depend on both the susceptibility to AF from ectopic beats , together with electrical driver location , and electrical size ., Our study describes multiple factors that affect the susceptibility to AF from ectopic beats ., Measurement of PV APD , PV CV and PV size will allow prediction of the susceptibility to AF from ectopic beats ., Arrhythmia susceptibility increased in cases with short PV APD , slower CV at the LA\/PV junction and 
larger veins , suggesting the importance of PVI in these cases ., The likelihood that PVI terminates AF was also found to depend on driver location , assessed using PS density ., Our simulation studies suggest that high PV PS density indicates likelihood of PVI success ., Thus either measuring this clinically using non-invasive ECGi recordings , or running patient-specific simulations to estimate this value may suggest whether ablation in addition to PVI should be performed ., In a recent clinical study , Navara et al . observed AF termination during ablation near the PVs , before complete isolation , in cases where rotational and focal activity were identified close to these ablation sites 45 ., These data may support the PV PS density metric suggested in our study ., Our simulations show that PV PS density depends on PV APD , the degree of PV fibrosis and to a lesser extent on PV fiber direction ., To the best of the authors\u2019 knowledge , there are no previous studies on the relationship between fibrosis in the PVs , or PV fiber direction , and the success rate of PVI ., Measuring atrial electrogram properties , including AF cycle length , before and after ablation may indicate changes in local tissue refractoriness 46 ., PV APD can be estimated clinically by pacing to find the PV ERP; and PV fibrosis may be estimated using LGE-MRI , although this is challenging , as the tissue is thin ., PV","headings":"Introduction, Materials and methods, Results, Discussion","abstract":"Success rates for catheter ablation of persistent atrial fibrillation patients are currently low; however , there is a subset of patients for whom electrical isolation of the pulmonary veins alone is a successful treatment strategy ., It is difficult to identify these patients because there are a multitude of factors affecting arrhythmia susceptibility and maintenance , and the individual contributions of these factors are difficult to determine clinically ., We hypothesised that the 
combination of pulmonary vein ( PV ) electrophysiology and atrial body fibrosis determine driver location and effectiveness of pulmonary vein isolation ( PVI ) ., We used bilayer biatrial computer models based on patient geometries to investigate the effects of PV properties and atrial fibrosis on arrhythmia inducibility , maintenance mechanisms , and the outcome of PVI ., Short PV action potential duration ( APD ) increased arrhythmia susceptibility , while longer PV APD was found to be protective ., Arrhythmia inducibility increased with slower conduction velocity ( CV ) at the LA\/PV junction , but not for cases with homogeneous CV changes or slower CV at the distal PV ., Phase singularity ( PS ) density in the PV region for cases with PV fibrosis was increased ., Arrhythmia dynamics depend on both PV properties and fibrosis distribution , varying from meandering rotors to PV reentry ( in cases with baseline or long APD ) , to stable rotors at regions of high fibrosis density ., Measurement of fibrosis and PV properties may indicate patient specific susceptibility to AF initiation and maintenance ., PV PS density before PVI was higher for cases in which AF terminated or converted to a macroreentry; thus , high PV PS density may indicate likelihood of PVI success .","summary":"Atrial fibrillation is the most commonly encountered cardiac arrhythmia , affecting a significant portion of the population ., Currently , ablation is the most effective treatment but success rates are less than optimal , being 70% one-year post-treatment ., There is a large effort to find better ablation strategies to permanently cure the condition ., Pulmonary vein isolation by ablation is more or less the standard of care , but many questions remain since pulmonary vein ectopy by itself does not explain all of the clinical successes or failures ., We used computer simulations to investigate how electrophysiological properties of the pulmonary veins can affect rotor formation and 
maintenance in patients suffering from atrial fibrillation ., We used complex , biophysical representations of cellular electrophysiology in highly detailed geometries constructed from patient scans ., We heterogeneously varied electrophysiological and structural properties to see their effects on rotor initiation and maintenance ., Our study suggests a metric for indicating the likelihood of success of pulmonary vein isolation ., Thus either measuring this clinically , or running patient-specific simulations to estimate this metric may suggest whether ablation in addition to pulmonary vein isolation should be performed ., Our study provides motivation for a retrospective clinical study or experimental study into this metric .","keywords":"medicine and health sciences, engineering and technology, cardiovascular anatomy, fibrosis, electrophysiology, endocardium, simulation and modeling, developmental biology, epicardium, research and analysis methods, cardiology, arrhythmia, atrial fibrillation, rotors, mechanical engineering, anatomy, physiology, biology and life sciences, heart","toc":null} +{"Unnamed: 0":900,"id":"journal.pntd.0006075","year":2017,"title":"Development and preliminary evaluation of a multiplexed amplification and next generation sequencing method for viral hemorrhagic fever diagnostics","sections":"Outbreaks of viral hemorrhagic fever ( VHF ) occur in many parts of the world 1 , 2 ., VHFs are caused by various single-stranded RNA viruses , the majority of which are classified in Arenaviridae , Filoviridae , and Flaviviridae families and Bunyavirales order 3 ., Human infections show high morbidity and mortality rates , can spread easily , and require rapid responses based on comprehensive pathogen identification 1 , 3 , 4 ., However , routine diagnostic approaches are challenged when fast and simultaneous screening for different viral pathogens in higher numbers of individuals is necessary 5 ., Even PCR as a widely used diagnostic method , usually 
providing specific virus identification , requires intense hands-on time for parallel screening of larger quantities of specimens and provides limited genetic information about the target virus ., Multiplexing of different specific PCR assays aims at dealing with these drawbacks; however , until recently , it was limited to a few primer pairs in one reaction due to a lack of amplicon identification approaches for more than five targets 6 , 7 ., Next Generation Sequencing ( NGS ) has provided novel options for the identification of viruses , including simultaneous and unbiased screening for different pathogens and multiplexing of various samples in a single sequencing run 8 ., Furthermore , the development of real-time sequencing platforms has enabled processing and analysis of individual specimens within reasonable timeframes 9 ., However , virus identification with NGS is also accompanied by major drawbacks , such as diminished sensitivity when viral genome numbers in the sample are insufficient and masked by unbiased sequencing of all nucleic acids present in the specimen , including the host genome 10 , 11 ., Attempts to increase the sensitivity of NGS-based diagnostics have focused on enrichment of virus material and libraries before sequencing , including amplicon sequencing , PCR-generated baits , and solution-based capture techniques 12\u201314 ., The strategy of ultrahigh-multiplex PCR with subsequent NGS has previously been employed for human single nucleotide polymorphism typing , genetic variations in human cardiomyopathies , and bacterial biothreat agents 15\u201317 ., In this study , we describe the development and initial evaluation of a novel method for targeted amplification and NGS-identification of viral febrile disease and hemorrhagic fever agents and assess the feasibility of this approach in diagnostics ., The human specimens used for the evaluation of the developed panel were obtained from adults after written informed consent and in full 
compliance with the local ethics board approval ( Ankara Research and Training Hospital , 13 . 07 . 11\/0426 ) ., Viruses reported to cause VHF as well as related strains , associated with febrile disease accompanied by arthritis , respiratory symptoms , or meningoencephalitis , were included in the design to enable differential diagnosis ( Table 1 ) ., For each virus strain , all genetic variants with complete or near-complete genomes deposited in GenBank ( https:\/\/www . ncbi . nlm . nih . gov\/genbank\/ ) were assembled into groups of >90% nucleotide sequence identity via the Geneious software ( version 9 . 1 . 3 ) 18 ., The consensus sequence of each group was included in the design ., The primer sequences were deduced using the Ion AmpliSeq Designer online tool ( https:\/\/ampliseq . com\/browse . action ) which provides a custom multiplex primer pool design for NGS ( Thermo Fisher Scientific , Waltham , MA ) ., For initial evaluation of the approach and as internal controls , human-pathogenic viruses belonging to identical and\/or distinct families\/genera but not associated with hemorrhagic fever or febrile disease were included in the design ( Table 1 ) ., The designed primers were tested in silico for specific binding to the target virus strains , including all known genotypes and genetic variants ., The primer sets were aligned to their specific target reference sequences and relative primer orientation , amplicon size and overlap , and total mismatches for each primer were evaluated using the Geneious software 18 ., Pairs targeting a specific virus with fewer than two mismatches in sense and antisense primers were defined as a hit and employed for sensitivity calculations ., Unspecific binding of each primer to non-viral targets was investigated via the BLASTn algorithm , implemented within the National Center for Biotechnology Information website ( https:\/\/blast . ncbi . nlm . nih . gov\/Blast . 
cgi ) 19 ., The sensitivity and specificity of the primer panel for each virus were determined via standard methods as described previously 20 ., The performance of the novel panel for the detection of major VHF agents was evaluated via selected virus strains ., For this purpose , nucleic acids from Yellow fever virus ( YFV ) strain 17D , Rift Valley fever virus ( RVFV ) strain MP-12 , Crimean-Congo hemorrhagic fever virus ( CCHFV ) strain UCCR4401 , Zaire Ebola virus ( EBOV ) strain Makona-G367 , Chikungunya virus ( CHIKV ) strain LR2006-OPY1 and Junin mammarenavirus ( JUNV ) strain P3766 were extracted with the QIAamp Viral RNA Mini Kit ( Qiagen , Hilden , Germany ) with subsequent cDNA synthesis according to the SuperScript IV Reverse Transcriptase protocol ( Thermo Fisher Scientific ) ., Genome concentration of all strains was determined by specific quantitative real-time PCRs using plasmid-derived virus standards , as described previously ( protocols are available upon request ) ., Genome equivalents ( ge ) of 10^0\u201310^3 for each virus were prepared and mixed with 10 ng of human genetic material recovered from HeLa cells ., In order to compare the efficiency of amplification with the novel panel versus direct NGS , all virus cDNAs were further subjected to second strand cDNA synthesis using the NEBNext RNA Second Strand Synthesis Module ( New England BioLabs GmbH , Frankfurt , Germany ) according to the manufacturer\u2019s instructions ., Reagent-only mixes and HeLa cell extracts were employed as negative controls in the experiments ., The performance of the panel was further tested on clinical specimens from individuals with a clinical and laboratory diagnosis of VHF 21 ., For this purpose , previously stored sera with quantifiable CCHFV RNA and lacking IgM or IgG antibodies were employed and processed via High Pure Viral Nucleic Acid Kit ( Roche , Mannheim , Germany ) and the SuperScript IV Reverse Transcriptase ( Thermo Fisher Scientific ) protocols , as 
suggested by the manufacturer ., Two human sera , without detectable nucleic acids of the targeted viral strains were tested in parallel as negative controls ., The specimens were amplified using the custom primer panels designed for HFVs with the following PCR conditions for each pool: 2 \u03bcl of viral cDNA mixed with human genetic material , 5 \u03bcl of primer pool , 0 . 5 mM dNTP ( Invitrogen , Karlsruhe , Germany ) , 5 \u03bcl of 10 x Platinum Taq buffer , 4 mM MgCl2 , and 10 U Platinum Taq polymerase ( Invitrogen ) with added water to a final volume of 25 \u03bcl ., Cycling conditions were 94\u00b0C for 7 minutes , 45 amplification cycles at 94\u00b0C for 20 seconds , 60\u00b0C for 1 minute , and 72\u00b0C for 20 seconds , and a final extension step for 6 minutes ( at 72\u00b0C ) ., Thermal cycling was performed in an Eppendorf Mastercycler Pro ( Eppendorf Vertrieb Deutschland , Wesseling-Berzdorf , Germany ) with a total runtime of 90 minutes ., The amplicons obtained from the virus strains were subjected to the Ion Torrent Personal Genome Machine ( PGM ) System for NGS analysis ( Thermo Fisher Scientific Inc . 
) ., Initially , the specimens were purified with an equal volume of Agencourt AMPure XP Reagent ( Beckman Coulter , Krefeld , Germany ) ., PGM libraries were prepared according to the Ion Xpress Plus gDNA Fragment Library Kit , using the \u201cAmplicon Libraries without Fragmentation\u201d protocol ( Thermo Fisher Scientific ) ., For direct NGS , specimens were fragmented with the Ion Shear Plus Reagents Kit ( Thermo Fisher Scientific ) with a reaction time of 8 minutes ., Subsequently , libraries were prepared using the Ion Xpress Plus gDNA Fragment Library Preparation kit and associated protocol ( Thermo Fisher Scientific ) ., All libraries were quality checked using the Agilent Bioanalyzer ( Agilent Technologies , Frankfurt , Germany ) , quantitated with the Ion Library Quantitation Kit ( Thermo Fisher Scientific ) , and pooled equimolarly ., Enriched , template-positive Ion PGM Hi-Q Ion Sphere Particles were prepared using the Ion PGM Hi-Q Template protocol with the Ion PGM Hi-Q OT2 400 Kit ( Thermo Fisher Scientific ) ., Sequencing was performed with the Ion PGM Hi-Q Sequencing protocol , using a 318 chip ., Amplicons obtained from CCHFV-infected individuals and controls were processed for nanopore sequencing via MinION ( Oxford Nanopore Technologies , Oxford , United Kingdom ) ., The libraries were prepared using the ligation sequencing kit 1D , SQK-LSK108 , R9 . 4 ( Oxford Nanopore Technologies ) ., Subsequently , the libraries were loaded on Oxford Nanopore MinION SpotON Flow Cells Mk I , R9 . 4 ( Oxford Nanopore Technologies ) using the library loading beads and run until initial viral reads were detected ., The sequences generated by PGM sequencing were trimmed to remove adaptors from each end using Trimmomatic 22 , and reads shorter than 50 base pairs were discarded ., All remaining reads were mapped against the viral reference database prepared during the design process via Geneious 9 . 1 . 
3 software 18 ., During and after MinION sequencing , all basecalled reads in fast5 format were extracted in fasta format using Poretools software 23 ., The BLASTn algorithm was employed for sequence similarity searches in the public databases when required ., The AmpliSeq design for the custom multiplex primer panel resulted in two pools of 285 and 256 primer pairs for the identification of 46 virus species causing hemorrhagic fevers , encompassing 6 , 130 genetic variants of the strains involved ., All amplicons were designed to be within a range of 125\u2013375 base pairs ., Melting temperature values of the primers ranged from 55 . 3\u00b0C to 65 . 0\u00b0C ., No unspecific amplicons <1 , 000 base pairs , formed by primer pairs in relative orientation and distance to each other , could be identified , leading to an overall specificity of 100% for all virus species ., The primer sequences in the panels are provided in S1 Table ., The overall sensitivity of the panel reached 97 . 9% , with the primer pairs targeting 6 , 007 out of 6 , 130 genetic variants ( 1 mismatch in one or both primers of a pair accepted , as described above ) ( Fig 1 ) ., Impaired sensitivity was noted for Hantaan virus ( 0 . 05 ) ., Evaluation of all Hantaan virus variants in GenBank revealed that newly added virus sequences were divergent by up to 17% from sequences included in the panel design , leading to diminished primer binding ., These sequences could be fully covered by two sets of additional primers ., Amplification of viral targets with the multiplex PCR panel prior to NGS resulted in a significant increase of viral read numbers compared to direct NGS ( Figs 2 and 3 , S2 Table ) ., In specimens with 10^3 ge of the target strain , the ratio of viral reads to unspecific background increased from 1\u00d710^\u22123 to 0 . 25 ( CCHFV ) , 3\u00d710^\u22125 to 0 . 34 ( RVFV ) , 1\u00d710^\u22124 to 0 . 27 ( EBOV ) , and 2\u00d710^\u22125 to 0 . 
64 ( CHIKV ) with fold-changes of 247 , 10 , 297 , 1 , 633 , and 25 , 398 , respectively ., In direct NGS , no viral reads could be detected for CCHFV and CHIKV genomic concentrations lower than 10^3 , and this approach failed to identify YFV and JUNV regardless of the initial virus count ., In targeted NGS , the limit of detection was noted as 10^0 ge for YFV , CCHFV , RVFV , EBOV , and CHIKV and 10^1 ge for JUNV ., For the viruses detectable via direct NGS , amplification provided significant increases in the ratio of specific viral reads to total reads , from 10^\u22124 to 0 . 19 ( CCHFV , 1 , 900-fold change ) , 2\u00d710^\u22125 to 0 . 19 ( RVFV , 9 , 500-fold change ) , and 3\u00d710^\u22124 to 0 . 56 ( EBOV , 1 , 866-fold change ) ., The average duration of the workflow of direct and targeted NGS via PGM was 19 and 20 . 5 hours , respectively ., In all patient sera evaluated via nanopore sequencing following amplification , the causative agent could be detected after 1 to 9 minutes of the NGS run ( Table 2 ) ., The characterized sequences were 89\u201399% identical to the CCHFV strain Kelkit L segment ( GenBank accession: GQ337055 ) known to be in circulation in Turkey 24 , 25 ., No targeted viral sequence could be observed in human sera used as negative controls during 1 hour of sequencing ., The preparation , amplification , and sequencing steps of the clinical specimens could be completed with a total sample-to-result time of less than 3 . 
5 hours ., In this study , we report the development and evaluation of an ultrahigh-multiplex PCR for the enrichment of viral targets before NGS , which aims to provide a robust molecular diagnosis in VHFs ., The panel was observed to be highly specific and sensitive and to have the capacity to detect over 97% of all known genetic variants of the targeted 46 viral species in silico ., The sensitivity of the primer panel was impaired by virus sequences not included in the original design , as noted for Hantaan virus in this study ., As 36 out of a total of 59 isolates have been published after panel design was completed , these genetic variants of Hantaan virus could be detected only with reduced sensitivity , or not at all , with the current panel ., This indicates that the panel has to be adapted to newly-available sequences in public databases ., We have evaluated how the panel could be updated to accommodate these recently-added sequences and observed that two additional primer pairs could sufficiently cover all divergent entries ., Although the approach for the panel design as well as the actual design with the AmpliSeq pipeline was successful for all genetic variants included , the amplification of viral sequences significantly diverging from the panel could not be guaranteed , which may also apply to novel viruses ., Unlike other pathogenic microorganisms , viruses can be highly variable in their genome ., Only rarely do viruses or virus species share genes that could be targeted as a virus-generic marker by amplification ., Our strategy for primer design and the AmpliSeq pipeline do not permit the generation of degenerate primers or the targeting of very specific consensus sequences ., However , the design of the primer panel is relatively flexible , and additional primer pairs can be appended in response to recently published virus genomes ., Moreover , an updated panel will also encompass non-viral pathogens relevant for differential 
diagnosis , and syndrome-specific panels targeting only VHF agents or virally induced febrile diseases such as West Nile fever and Chikungunya can be developed ., We have further tested the panel using quantitated nucleic acids of six well-characterized viruses responsible for VHF or severe febrile disease , with a background of human genetic material to simulate specimens likely to be submitted for diagnosis , using the semiconductor PGM sequencing platform ., The impact of amplification was evaluated with a comparison of direct and amplicon-based NGS runs ., Overall , targeted amplification prior to NGS ensured viral read detection in specimens with the lowest virus concentration ( 1 ge ) in five of the six viruses evaluated and 10 ge in the remaining strain , which is within the range of the established real-time PCR assays ., Furthermore , this approach enabled significant increases in specific viral reads over background in all of the viruses , with varying fold changes in different strains and concentrations ( Figs 2 and 3 ) ., The increased sensitivity and specificity provided with the targeted amplification suggest that it can be directly employed for the investigation of suspected VHF cases where viremia is usually short and the time point of maximum virus load is often missed 1 , 5 ., Finally , we evaluated the VHF panel by using serum specimens obtained during the acute phase of CCHFV-induced disease and employed an alternate NGS platform based on nanopore sequencing ., This approach enabled virus detection and characterization within 10 minutes of the NGS run and can be completed in less than 3 . 
5 hours in total ( Table 2 ) ., The impact of the nanopore sequencing has been revealed previously , during the EBOV outbreak in West Africa where the system provided an efficient method for real-time genomic surveillance of the causative agent in a resource-limited setting 26 ., Field-forward protocols based on nanopore sequencing have also been developed recently for pathogen screening in arthropods 27 ., Specimen processing time is likely to be further reduced via the recently developed rapid library preparation options ., While the duration of the workflow is longer , the PGM and similar platforms are well-suited for the parallel investigation of higher specimen numbers ., Although we have demonstrated in this study that targeted amplification and NGS-based characterization of VHF and febrile disease agents is an applicable strategy for diagnosis and surveillance , there are also limitations of this approach ., In addition to the requirement of primer sequence updates , the majority of the workflow requires non-standard equipment and well-trained personnel , usually out of reach for the majority of laboratories in underprivileged geographical regions mainly affected by these diseases ., However , NGS technologies are becoming widely available with reduced total costs and can be swiftly transported and set up in temporary facilities in field conditions 26 , 27 ., During outbreak investigations , where it is impractical and expensive to test for several individual agents via specific PCRs , this approach can easily provide information on the causative agent , facilitating timely implementation of containment and control measures ., Additional validation of the approach will be provided with the evaluation of well-characterized clinical specimen panels and direct comparisons with established diagnostic assays ., In conclusion , virus enrichment via targeted amplification followed by NGS is an applicable method for the diagnosis of VHFs which can be adapted for 
high-throughput or nanopore sequencing platforms and employed for surveillance or outbreak monitoring .","headings":"Introduction, Methods, Results, Discussion","abstract":"We describe the development and evaluation of a novel method for targeted amplification and Next Generation Sequencing ( NGS ) -based identification of viral hemorrhagic fever ( VHF ) agents and assess the feasibility of this approach in diagnostics ., An ultrahigh-multiplex panel was designed with primers to amplify all known variants of VHF-associated viruses and relevant controls ., The performance of the panel was evaluated via serially quantified nucleic acids from Yellow fever virus , Rift Valley fever virus , Crimean-Congo hemorrhagic fever ( CCHF ) virus , Ebola virus , Junin virus and Chikungunya virus in a semiconductor-based sequencing platform ., A comparison of direct NGS and targeted amplification-NGS was performed ., The panel was further tested via a real-time nanopore sequencing-based platform , using clinical specimens from CCHF patients ., The multiplex primer panel comprises two pools of 285 and 256 primer pairs for the identification of 46 virus species causing hemorrhagic fevers , encompassing 6 , 130 genetic variants of the strains involved ., In silico validation revealed that the panel detected over 97% of all known genetic variants of the targeted virus species ., High levels of specificity and sensitivity were observed for the tested virus strains ., Targeted amplification ensured viral read detection in specimens with the lowest virus concentration ( 1\u201310 genome equivalents ) and enabled significant increases in specific reads over background for all viruses investigated ., In clinical specimens , the panel enabled detection of the causative agent and its characterization within 10 minutes of sequencing , with sample-to-result time of less than 3 . 
5 hours ., Virus enrichment via targeted amplification followed by NGS is an applicable strategy for the diagnosis of VHFs which can be adapted for high-throughput or nanopore sequencing platforms and employed for surveillance or outbreak monitoring .","summary":"Viral hemorrhagic fever is a severe and potentially lethal disease , characterized by fever , malaise , vomiting , mucosal and gastrointestinal bleeding , and hypotension , in which multiple organ systems are affected ., Due to modern transportation and global trade , outbreaks of viral hemorrhagic fevers have the potential to spread rapidly and affect a significant number of susceptible individuals ., Thus , urgent and robust diagnostics with an identification of the causative virus is crucial ., However , this is challenged by the number and diversity of the viruses associated with hemorrhagic fever ., Several viruses classified in Arenaviridae , Filoviridae , and Flaviviridae families and Bunyavirales order may cause symptoms of febrile disease with hemorrhagic symptoms ., We have developed and evaluated a novel method that can potentially identify all viruses and their genomic variants known to cause hemorrhagic fever in humans ., The method relies on selected amplification of the target viral nucleic acids and subsequent high throughput sequencing technology for strain identification ., Computer-based evaluations have revealed very high sensitivity and specificity , provided that the primer design is kept updated ., Laboratory tests using several standard hemorrhagic virus strains and patient specimens have demonstrated excellent suitability of the assay in various sequencing platforms , which can achieve a definitive diagnosis in less than 3 . 
5 hours .","keywords":"sequencing techniques, medicine and health sciences, rift valley fever virus, pathology and laboratory medicine, togaviruses, pathogens, tropical diseases, microbiology, alphaviruses, viruses, next-generation sequencing, chikungunya virus, rna viruses, genome analysis, neglected tropical diseases, molecular biology techniques, microbial genetics, bunyaviruses, microbial genomics, research and analysis methods, viral hemorrhagic fevers, infectious diseases, viral genomics, genomics, crimean-congo hemorrhagic fever virus, medical microbiology, microbial pathogens, molecular biology, virology, viral pathogens, transcriptome analysis, genetics, biology and life sciences, viral diseases, computational biology, dna sequencing, hemorrhagic fever viruses, organisms","toc":null} +{"Unnamed: 0":1279,"id":"journal.pcbi.1007284","year":2019,"title":"Fast and near-optimal monitoring for healthcare acquired infection outbreaks","sections":"Since the time of Hippocrates , the \u201cfather of western medicine\u201d , a central tenet of medical care has been to \u201cdo no harm . 
\u201d, Unfortunately , the scourge of healthcare acquired infections ( HAI ) challenges the medical system to honor this tenet ., When patients are hospitalized they are seeking care and healing , however , they are simultaneously being exposed to risky infections from others in the hospital , and in their weakened state are much more susceptible to these infections than they would be normally ., Acquiring these infections increases the chances of either dying or becoming even sicker , which also lengthens the time the patient needs to stay in the hospital ( increasing costs ) ., These infections can range from pneumonia and gastro-intestinal infections like Clostridium difficile to surgical site infections and catheter associated infections , which puts nearly any patient in the hospital at risk ., Antibiotic treatments intended to aid in recovery from one infection , may open the door for increased risk of infection from another ., Healthcare acquired infections are a significant problem in the United States and around the world ., Some estimates put the annual cost between 28 and 45 billion US dollars per year in the US 1 ., More importantly , they inflict a significant burden on human health ., A recent study estimated more than 2 . 
5 million new cases per year in Europe alone , inflicting a loss of just over 500 disability-adjusted life years ( DALYs ) per 100 , 000 population 2 ., Given their burden and cost , their prevention is a high priority for infection control specialists ., A simple approach to monitor HAI outbreaks would be to test every patient and staff member in the hospital and swab every possible location for HAI infection ., However , such a naive process is too expensive to implement ., A better strategy is required to efficiently monitor HAI outbreaks ., A recent review article 3 included 29 hospital outbreak detection algorithms described in the literature ., They found these fall into five main categories: simple thresholds , statistical process control , scan statistics , traditional statistical models , and data mining methods ., Comparing the performance of these methods is challenging given the myriad diseases , definitions of outbreaks , study environments , and ultimately the purpose of the studies themselves ., However , the authors identify that few of these studies were able to leverage important covariates in their detection algorithms ., For example , including the culture site or antibiotic resistance was shown to boost detectability ., Past simulation based approaches 4 tackle optimal surveillance system design by choosing clinics as sensors to improve sensitivity and time to detection for outbreaks in a population ., In contrast , our approach selects the people and locations most vulnerable to infection as sensors to detect outbreaks in a hospital setting ., Different kinds of mechanistic models have also been used for studying HAI spread 5 , 6 , 7 , 8 ., Most of these are differential equation based models ., We refer to 9 for a review of mechanistic models of HAI transmission ., On a broader level , the sensor selection problem for propagation ( of contents , disease , rumors and so on ) over networks has gained much attention in the data mining community ., 
Traditional sensor selection approaches 10 , 11 typically select a set of nodes which require constant monitoring ., Instead , in this paper , we select the sensor set as well as the rate at which to monitor each sensor ., Hence , our approach is novel from the data mining perspective as well ., Recently , Shao et al . 12 proposed selecting a set of users on social media to detect outbreaks in the general population ., Similarly , Reis et al . 13 proposed an epidemiological network modeling approach for respiratory and gastrointestinal disease outbreaks ., Other closely related data mining problems include selecting nodes for inhibiting epidemic outbreaks ( vaccination ) 14 , 15 , 16 and inferring missing infections in an epidemic outbreak 17 ., We employ a simulation and data optimization based approach to design our algorithm and to provide robust bounds on its performance ., Additionally , our simulation model is richly detailed in terms of the class of individuals and locations where sampling can occur ., None of the prior works explicitly models the multiple pathways of infection in HAI outbreaks or separates location contamination from infections in people ., We formalize the sensor set problem as an optimization problem over the space of rate vectors , which represent the rates at which to monitor each location and person ., We consider two objectives , namely the probability of detection and the detection time , and show that the former satisfies a mathematical property called submodularity , which enables efficient algorithms ., In addition , we leverage data generated from a carefully calibrated simulation using real data collected from a local hospital ., Our extensive experiments show that our approach outperforms the state-of-the-art general outbreak detection algorithm ., We also show that our approach achieves the minimum outbreak detection time compared to other alternatives ., To the best of our knowledge , we are the first to provide a principled 
data-driven optimization based approach for HAI outbreak detection ., Though we validate our approach for a specific HAI , namely C . difficile , our general approach is applicable to other HAIs with a similar disease model as well ., As previously mentioned , we propose a data-driven approach to selecting the sensors ., There are multiple challenges in obtaining actual HAI spread data , such as high cost , data sparsity , and the need to safeguard patient personal information ., For this reason , we rely on simulated HAI contagion data ., We use a highly-detailed agent-based simulation that employs a mobility log obtained from local hospitals 18 , 19 to produce realistic contagion data ., All the steps of this methodology are described in detail in 18 , and we summarize them below for completeness ., Fig 1 shows a visualization of simulated HAI spread ., In this simulation , people ( human agents ) move across various locations ( static agents ) as defined by the mobility log and spread HAI in a stochastic manner ., The simulation was developed in the following three steps: design of an in-silico or computer-based population and its activities , conceptualization of a disease model for a pathogen of interest , and the employment of a highly-detailed simulation ., The following sections describe the data creation process in more detail ., Recall that our goal is to select a set of agents as sensors , and the rate at which each such sensor should be monitored , such that future HAI outbreaks are detected with high probability , and as early as possible ., However , these have to be selected within given resource constraints ., We start with a formalization of these problems ., Finding a minimum cost sensor set is a challenging optimization problem , and we present efficient algorithms by using the notion of submodularity ., We first define some notations ., Let bold letters represent vectors ., Let P and L denote the sets of human agents and locations respectively; 
let n = |P \u222a L|\u2014this will be the total number of agents in our simulations ., Let B denote the budget on the number of samples permitted ( weighted by the cost of agents ) , i . e . , the sum of the expected number of swabs to detect whether a location is contaminated or a human is infected ., As mentioned earlier , the mobility logs are represented as a bipartite temporal network G ( P , L , E , T ) , with two partitions P and L representing agents , E representing the who-visits-what-location relationship and T representing the time\/duration of the visit ., We consider each agent to be a node in the temporal network G . Hence we use the terms node and agent interchangeably ., Now , let c \u2208 R^n be the vector of costs , i . e . , cv is the cost of monitoring node v . Let r \u2208 R^n be the vector of monitoring rates , where rv denotes the probability that node v is monitored ( e . g . , swabbed ) each day ., Finally , let T_max denote the maximum time in each simulation instance ., Unfortunately , Problems 1 and 2 are both computationally very challenging ., In fact , both problems can be proven to be NP-hard ., Lemma 1 . Problem 1 is NP-hard ., Lemma 2 . Problem 2 is NP-hard ., We provide the proofs for both lemmas in the supplementary , where we show that the NP-Complete SetCover problem can be viewed as a special case of both Problems 1 and 2 . Since our problems are NP-hard , they cannot be solved optimally in polynomial time even for simplistic instances , unless P = NP ., The instances we need to consider are quite large , so a naive exhaustive search for the optimal solution is also not feasible and will be too slow ., Therefore , we focus on designing efficient near-optimal approximate solutions ., We begin with Problem 1 . The function we are trying to optimize for Problem 1 is defined over a discrete lattice , i . e . 
the rate vector r ., Our approach is to show that this function is a submodular lattice function ., The notion of submodularity , which is typically defined over set functions , can be extended to discrete lattice functions ( e . g . , recently in 32 ) ., Informally , submodularity means that the objective value has a property of diminishing returns for a small increase in the rate in any dimension ., It is important to note that submodularity for lattice functions is more nuanced than for simple set functions ( we define it formally in the Supplementary Information section ) ., Fortunately , it turns out that this property implies that a natural greedy algorithm ( which maximizes the objective marginally at each step ) guarantees a ( 1 \u2212 1\/e ) -approximation to the optimal solution ., Without such a property , it is not clear how to solve Problem 1 efficiently even for a small budget ., We have the following lemma ., Lemma 3 ., The objective in Problem 1 is a submodular lattice function ., The detailed description of the submodularity property and proof of Lemma 3 are presented in the supplementary ., Our HaiDetect algorithm for Problem 1 selects the sensors to be monitored and their rates such that nodes which tend to get infected across multiple simulation instances have higher monitoring rates ., Specifically , at each step , HaiDetect selects the node v and the rate r among all possible candidate pairs of nodes and rates , such that the average marginal gain is maximized ., HaiDetect keeps adding nodes and\/or increasing the rates to monitor the selected nodes until the weighted sum of the rates is equal to the budget B . The detailed pseudocode is presented in Algorithm 1 . 
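This greedy loop can also be sketched in executable form. The sketch below is illustrative, not the authors' implementation: the trace format (each simulated outbreak as a mapping from node to its set of infected days), the independent daily-swab detection model inside detection_prob, the rate granularity step, and cost-normalized marginal gain (a common greedy variant) are all assumptions made for the example.

```python
def detection_prob(rates, traces):
    """Estimate the probability of detecting an outbreak, averaged over
    simulated outbreak traces, assuming node v is swabbed independently
    each day with probability rates[v] and that swabbing an infected
    node always detects it (an illustrative detection model)."""
    total = 0.0
    for trace in traces:  # trace: node -> set of days the node is infected
        p_miss = 1.0
        for v, days in trace.items():
            p_miss *= (1.0 - rates.get(v, 0.0)) ** len(days)
        total += 1.0 - p_miss
    return total / len(traces)

def greedy_rates(traces, costs, budget, step=0.1):
    """Greedily raise monitoring rates: at each iteration, pick the
    (node, rate increase) with the best marginal gain per unit cost,
    until no feasible increase remains within the budget."""
    rates = {v: 0.0 for v in costs}
    spent = 0.0
    while True:
        base = detection_prob(rates, traces)
        best, best_gain = None, 0.0
        for v in costs:
            if rates[v] + step > 1.0 or spent + step * costs[v] > budget:
                continue  # candidate (node, rate) pair is not feasible
            trial = dict(rates)
            trial[v] = rates[v] + step
            gain = (detection_prob(trial, traces) - base) / (step * costs[v])
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            return rates
        rates[best] += step
        spent += step * costs[best]
```

For example, with one trace in which an ICU room is contaminated on two days and another in which a nurse is infected on one day, equal unit costs and budget 1.0, the sketch splits the budget across both sensors rather than spending it all on one.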
Algorithm 1 HaiDetect
Require: I , budget B
1: for each feasible initial vector r0 do
2:  Initialize the rate vector r = r0
3:  while \u2211v rv \u22c5 cv < B do
4:   Find a node v and rate r maximizing the average marginal gain
5:   Let rv = r
6:   Remove all candidate pairs of nodes and rates which are no longer feasible
7: Return the best rate vector r
HaiDetect has desirable properties in terms of both effectiveness and speed ., The performance guarantee of HaiDetect is given by the following lemma ., Lemma 4 ., HaiDetect gives a ( 1-1\/e ) approximation to the optimal solution ., The lemma above gives an offline bound on the performance of HaiDetect , i . e . , we can state that the ( 1-1\/e ) approximation holds even before the computation starts ., We can actually obtain a tighter bound by computing an empirical online bound ( once the solution is obtained ) , which can be derived using the submodularity and monotonicity of Problem 1 . To state the empirical bound , let us define some notation ., Let the solution selected by HaiDetect for a budget B be r ^ ., Similarly , let the optimal vector for the same budget be r* ., For simplicity , let the objective function in Problem 1 be R ( \u22c5 ) ., For all nodes v and for a \u2208 [ 0 , 1 ] , let us define \u0394v as follows:
\u0394v = max_a [ R ( r ^ \u2228 a \u00b7 \u03c7{v} ) - R ( r ^ ) ] ( 8 )
Similarly , let us define \u03c3v as the argument which maximizes \u0394v:
\u03c3v = argmax_a [ R ( r ^ \u2228 a \u00b7 \u03c7{v} ) - R ( r ^ ) ] ( 9 )
Now , let \u03b4v = \u0394v \/ ( cv \u00b7 \u03c3v ) . Note that for each node v there is a single \u03b4v ., Let the sequence of nodes s1 , s2 , \u2026 , sn be ordered in decreasing order of \u03b4v ., Now let K be the index such that \u03b8 = \u2211_{i=1}^{K-1} c_si \u03c3_si \u2264 B and \u2211_{i=1}^{K} c_si \u03c3_si > B . Now the following lemma can be stated ., Lemma 5 .
The online bound on R ( r* ) in terms of the current rate vector r ^ assigned by HaiDetect is as follows:
R ( r* ) \u2264 R ( r ^ ) + \u2211_{i=1}^{K-1} \u0394_si + ( ( B - \u03b8 ) \/ ( c_sK \u03c3_sK ) ) \u0394_sK
The lemma above allows us to compute how far the solution given by HaiDetect is from the optimal ., We compute this bound and explore the results in detail in the Results section ., In addition to the performance guarantee , HaiDetect\u2019s running time complexity is as follows ., Lemma 6 . The running time complexity of HaiDetect is O ( c \u22c5 B^2 ( |P| + |L| ) ) , where c is the number of unique initial vectors r0 , B is the budget , P is the set of human agents and L is the set of locations ., Note that the constant c is much smaller than the total population , i . e . , c << |P| + |L| in our case , as infections are sparse and we do not need to consider agents and locations which never get infected ., The most expensive computational step in Algorithm 1 is finding the node v and rate r that give the maximum average marginal gain ( step 4 of Algorithm 1 ) ., This can be expedited using lazy evaluations and memoization ., Hence , the algorithm is also quite fast in practice ., Moreover , it is also embarrassingly parallelizable , as the computations for each initial vector can be performed in parallel ., We also propose a similar algorithm , HaiEarlyDetect , for Problem 2 . The main idea here is that we assign higher rates to nodes which tend to get infected earlier in many simulation instances ., The pseudocode for HaiEarlyDetect is presented in Algorithm 2 .
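As a concrete illustration of the detection-time objective that HaiEarlyDetect greedily optimizes , the sketch below estimates the expected detection day of a single toy cascade under daily Bernoulli swabbing ( the helper name , the penalty for undetected runs and the Monte Carlo estimator are our own simplifications , not the paper's estimator ):

```python
import random

def expected_detection_time(infection_day, rates, t_max, trials=2000, seed=7):
    """Monte Carlo estimate of the day an outbreak is first detected.

    infection_day : dict mapping node -> day it becomes infected (toy cascade)
    rates         : dict mapping node -> daily probability of being swabbed
    Undetected runs are charged the horizon t_max as a penalty.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t_detect = t_max
        for t in range(t_max):
            # detection on day t if some already-infected node is swabbed
            if any(t >= infection_day[v] and rng.random() < rates.get(v, 0.0)
                   for v in infection_day):
                t_detect = t
                break
        total += t_detect
    return total / trials

# A node swabbed every day is detected on its infection day;
# an unmonitored cascade is never detected within the horizon.
t_monitored = expected_detection_time({"nurse1": 0}, {"nurse1": 1.0}, t_max=10)
t_unmonitored = expected_detection_time({"nurse1": 0}, {}, t_max=10)
```

HaiEarlyDetect then proceeds like the greedy step of Algorithm 1 , except that at each iteration it picks the ( node , rate ) pair that most reduces this average detection time across the training simulations .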
Algorithm 2 HaiEarlyDetect
Require: I , budget B
1: for each feasible initial vector r0 do
2:  Initialize the rate vector r = r0
3:  while \u2211v rv \u22c5 cv < B do
4:   Find a node v and rate r minimizing the average detection time
5:   Let rv = r
6:   Remove all candidate pairs of nodes and rates which are no longer feasible
7: Return the best rate vector r
As shown in Algorithm 2 , HaiEarlyDetect optimizes the marginal gain in the objective of Problem 2 in each iteration ., It turns out that the objective in Problem 2 is not submodular ., However , as shown by our empirical results , the greedy approach we propose works very well in practice and outperforms the baselines ., Moreover , it too runs fast in practice , as the same optimization techniques discussed earlier for HaiDetect apply to HaiEarlyDetect as well ., In the previous section , we discussed two types of bounds on the performance of HaiDetect ., Here we show how far the solution given by HaiDetect is from the optimal value for various budgets ., For this experiment , we ran HaiDetect on a set of 100 simulations and computed the value of the objective in Problem 1 for the resulting rate vector ., We also computed the overall bound , based on the ( 1 \u2212 1\/e ) approximation , and the empirical bound as per Lemma 5 ., Since the objective value cannot exceed the number of simulations , we also compute the lowest bound as the minimum of the two bounds and the number of simulations ., We repeat the experiment for budget sizes from 1 to 50 ., The resulting plot is presented in Fig 5 ., Fig 5 highlights several interesting aspects ., First of all , we can see that the online bound is always tighter than the offline bound ., Moreover , we also observe that as the performance of HaiDetect approaches the optimal ( with increasing budget ) , the online bound becomes tighter and tighter until both the performance and the bound are equal , indicating the values of the budget for which HaiDetect solves Problem 1 optimally ., This
result demonstrates that HaiDetect can accurately find sensors which can detect any observed outbreak , given sufficient budget ., Given that HaiDetect is near-optimal for the observed outbreaks , we evaluate its effectiveness in detecting unseen ( \u201cfuture\u201d ) outbreaks ., Here , we compare the performance of HaiDetect and Celf with respect to the budget on \u201cunobserved\u201d simulations ., For this experiment , we performed a 5-fold cross-validation on 200 simulations ., Specifically , we divided the simulations into 5 groups , and at each turn we selected the sensors using the first four groups and computed the sum of outbreak detection probabilities , as shown in Eq 1 , on the fifth group ( the test set ) ., Then we normalize the resulting sum of outbreak detection probabilities by the total number of simulation instances in the same group ., The normalized value can be intuitively described as the average probability of detecting a future outbreak ., We repeat this process five times , ensuring each group is used for evaluation ., We then compute the overall average and its standard error ., We repeat the entire process for budgets from 1 to 50 ., The result of our experiment is shown in Fig 6 ., The first observation is that HaiDetect consistently outperforms Celf for all values of the budget ., The disparity between the methods is more apparent for larger values of the budget ., The difference in the quality of the sensors can be explained by the fact that Celf only assigns rates of 0 or 1 ., However , HaiDetect can strategically assign non-integer rates so as to maximize the likelihood of detection ., We can also observe that the standard error for HaiDetect decreases and is negligible for larger budgets ., However , this is not the case for Celf ., This shows that the quality of the sensors selected by HaiDetect is not only better but also more stable ., Finally , we see that the probability of an outbreak being detected by sensors selected by
HaiDetect is 0 . 96 when the budget is 50 , whereas it is only around 0 . 75 for Celf ., Similarly , a budget of only 25 is required to detect an outbreak with a probability of 0 . 8 for HaiDetect ., For the same budget , sensors selected by Celf detect cascades with a probability of 0 . 55 ., The result highlights that HaiDetect produces a more reliable monitoring strategy for HAI outbreak detection ., Here , we investigate the change in performance of HaiDetect and Celf as the number of simulations used to select the sensors increases ., For this experiment , we used 150 distinct simulations ., We divided the simulations into two categories , training and testing sets ., We used the cascades in the training set to select the sensors and the ones in the testing set to measure quality ., First we decided on a budget of 10 and a training size of 10 cascades ., We ran both HaiDetect and Celf for this setting and measured the quality using the cascades in the testing set ., We then increased the training size by 10 until we reached a size of 100 ., We repeated the same procedure for budgets of 30 and 50 ., We compute the average probability of detection in the same manner as described above ., Fig 7 summarizes the result ., We can observe that HaiDetect outperforms Celf consistently ., It reinforces the previous observation that HaiDetect selects good sensors for HAI outbreak detection ., An interesting observation is that the performance tails off after a training size of 20 for larger budgets , which implies that not many cascades have to be observed before we can select good-quality sensors ., This is an encouraging finding , as gathering a large number of real cascades of HAI spread is not feasible ., Next we study the change in performance of HaiDetect with the training size for various budget sizes ., Here we tracked the performance of HaiDetect for budgets of 10 , 30 , and 50 for training sets of various sizes ., The result is summarized in Fig 8 ., As shown in
the figure , the difference between the performance of HaiDetect for budgets 30 and 10 is much larger than that for budgets 50 and 30 ., The normalized objective , or the probability of detection , is close to 1 at budget 50 , indicating that monitoring sensors at the rates assigned by HaiDetect detects almost all the HAI outbreaks ., Hence , in expectation , roughly 50 swabs a day are enough to monitor an outbreak in a hospital wing ., Again , we observe that the performance of HaiDetect tails off after a training size of 20 ., This provides further validation for the observation that a limited number of observed cascades is enough to select high-quality sensors ., A desirable property of sensors is that they aid in early detection of outbreaks ., Here we study the average detection time of future outbreaks using the sensors and rates selected by HaiEarlyDetect ., In this experiment , we first divided our simulations into equally sized training and testing sets , each having 100 simulations ., We ran HaiEarlyDetect on the training set to select sensors and the rates at which to monitor them ., Then , we monitored the selected sensors at the inferred rates and measured the detection time for each simulation in the testing set ., We repeated the entire process for various budgets ., The detection time averaged over 100 simulated outbreaks in the testing set is summarized in Fig 9 , and the variance in the detection time is shown in Fig 10 ., As shown in the figure , as the budget increases the average detection time decreases ., According to our results , the average time to detect an outbreak in the testing set while monitoring sensors selected for a budget of 1000 is roughly six days ., This is impressive considering that monitoring all agents results in a detection time of 4 days , while monitoring all of the more than 1200 nurses results in a detection time of 8 days ., Hence , monitoring these sensors detects the outbreak earlier with a smaller budget ., Another advantage of our sensors is that
they are diverse ., A significant proportion of the selected sensors are patients and fomites , which are easier to monitor than the nurses ., Hence , monitoring the sensors selected by HaiEarlyDetect also has an economic advantage ., An interesting observation seen in Fig 10 is that the variability in average detection time decreases with the increase in budget ., Hence , we expect the performance of our sensors to be fairly consistent in detecting future outbreaks for larger budgets ., Moreover , the median time to detect an outbreak ( as shown by the box plots ) is always less than the average ., Hence , we expect the performance of HaiEarlyDetect to be generally better than that suggested by the average detection time ., For a budget of 1000 , the median detection time is just 5 days ., Note that monitoring all agents results in a detection time of 4 days ., This implies that in practice our approach requires only 1000 swabs per day to detect an outbreak within a single day of the detection time achieved by monitoring everyone ., An interesting question is how many potential cases can be prevented by monitoring the sensors selected by HaiEarlyDetect ., Here we study how many nodes get infected before an outbreak is detected and how many potential infections can be prevented by monitoring our sensors for various budgets ., As in the previous experiment , for a given budget , we leverage 100 simulations to select sensors and their monitoring rates ., Once the sensors are selected , we count the number of infections that occur in a test simulation before a sensor is infected and how many further infections occur following the infection of the sensors ., We then average these numbers over 100 test simulations ., The results are summarized in Table 2 ., As shown in Table 2 , for the budget of 10 samples\/swabs , 4 . 31 potential future infections could be prevented ., Note that there are only 23 infections on average per simulation ., For the budget of only 200 , 15 .
02 infections could be prevented , which is about 66% of potential infections ., The number goes up to 17 , or 74% , for the budget of 1000 ., The result shows that even for a low budget ( less than 200 swabs per day ) , our approach could help prevent a significant number of future infections ., Next we study the types of agents that are selected by HaiDetect as sensors ., For this experiment , we use 100 randomly selected simulations to select sensors for a wide range of budgets ., After the sensors are selected , we sum up the rates of each category of agents , such as nurses , doctors , patients , and so on ., Fig 11 ( a ) shows the distribution of sensor allocation for each category of agents at low budgets ., We observe that for a budget of 10 , nearly 60% of the total budget is spent on selecting nurses ., Since nurses are the most mobile agents , the result highlights the fact that HaiDetect selects the most important agents as sensors early on ., Similarly , Fig 11 ( b ) shows the distribution of sensors for higher budgets ., Here we observe that nearly 35% of the budget is allocated to nurses ., Fomites and patients have roughly equal allocations of about 20% ., 17% of the budget is allocated to doctors ., The rest of the categories have minimal allocation ., The distribution shows that HaiDetect selects heterogeneous sensors , including both people and objects\/locations , as intended ., Finally , we are also interested in the scheduling implications of the sensors selected by HaiDetect ., To this end , we measure the aggregated proportion of the budget assigned to each rate for the sensors we select ., The results are summarized in Fig 12 ., As shown in Fig 12 ( a ) , most of the sensors have a rate of 0 . 1 ., Very few sensors have rates between 0 . 2 and 0 . 5 ., Finally , there is a sudden spike at rate = 1 . 0 ., When we look at the rate distribution for each category separately , interestingly we observe that only nurses have rates of 1 .
0 ., This implies that certain nurses have to be monitored each day to detect an HAI outbreak ., This unexpected behaviour can be attributed to the fact that the hospital where the mobility logs were collected required all nurses to attend a daily meeting ., Hence , all the nurses were in contact with each other every day , and it is likely that nurses infect each other in the case of an outbreak ., Hence , there is an advantage in monitoring some of the nurses every day to quickly detect an HAI outbreak ., Effective and early detection of HAI outbreaks are important problems in hospital infection control , and have not been studied systematically so far ., While these are challenging problems , understanding their structure can help in designing effective algorithms and optimizing resources ., Current practices in hospitals are fairly simple , and do not attempt to optimize resources ., Our algorithms perform better than many natural heuristics , and our results show that a combination of data-driven and model-driven approaches is effective in detecting HAIs ., Since there is limited data on disease incidence , good models and simulations play an important role in designing algorithms and evaluating them .","headings":"Introduction, Materials and methods, Results, Discussion","abstract":"According to the Centers for Disease Control and Prevention ( CDC ) , one in twenty-five hospital patients is infected with at least one healthcare acquired infection ( HAI ) on any given day ., Early detection of possible HAI outbreaks helps practitioners implement countermeasures before the infection spreads extensively ., Here , we develop an efficient data- and model-driven method to detect outbreaks with high accuracy ., We leverage mechanistic modeling of C .
difficile infection , a major HAI disease , to simulate its spread in a hospital wing and design efficient near-optimal algorithms to select people and locations to monitor using an optimization formulation ., Results show that our strategy detects up to 95% of \u201cfuture\u201d C . difficile outbreaks ., We design our method by incorporating specific hospital practices ( like swabbing for infections ) as well ., As a result , our method outperforms state-of-the-art algorithms for outbreak detection ., Finally , a qualitative study of our result shows that the people and locations we select to monitor as sensors are intuitive and meaningful .","summary":"Healthcare acquired infections ( HAIs ) lead to significant losses of lives and result in heavy economic burden on healthcare providers worldwide ., Timely detection of HAI outbreaks will have a significant impact on the health infrastructure ., Here , we propose an efficient and effective approach to detect HAI outbreaks by strategically monitoring selected people and locations ( sensors ) ., Our approach leverages outbreak data generated by calibrated mechanistic simulation of C . 
difficile spread in a hospital wing and a careful computational formulation to determine the people and locations to monitor ., Results show that our approach is effective in detecting outbreaks .","keywords":"medicine and health sciences, gut bacteria, medical personnel, sociology, social sciences, health care, simulation and modeling, health care providers, systems science, mathematics, network analysis, social networks, nosocomial infections, bacteria, allied health care professionals, research and analysis methods, clostridium difficile, infectious diseases, computer and information sciences, epidemiology, agent-based modeling, people and places, professions, nurses, population groupings, biology and life sciences, physical sciences, organisms","toc":null} +{"Unnamed: 0":1891,"id":"journal.pcbi.1003705","year":2014,"title":"Rethinking Transcriptional Activation in the Arabidopsis Circadian Clock","sections":"The task of the circadian clock is to synchronize a multitude of biological processes to the daily rhythms of the environment ., In plants , the primary rhythmic input is sunlight , which acts through photoreceptive proteins to reset the phase of the clock to local time ., The expression levels of the genes at the core of the circadian clock oscillate due to mutual transcriptional and post-translational feedbacks , and the complexity of the feedbacks makes it difficult to predict and understand the response of the system to mutations and other perturbations without the use of mathematical modelling 1 ., Early modelling of the system by Locke et al . 
demonstrated the feasibility of gaining new biological insights into the clock through the use of model predictions 2 ., The earliest model described the system as a negative feedback loop between the two homologous MYB-like transcription factors CIRCADIAN CLOCK ASSOCIATED 1 ( CCA1 ) and LATE ELONGATED HYPOCOTYL ( LHY ) 3 , 4 on one hand and TIMING OF CAB EXPRESSION 1 ( TOC1\/PRR1 ) 5 on the other ., Over the past decade , models have progressed to describing the system in terms of multiple interacting loops , still centred around LHY\/CCA1 ( treated as one component ) and TOC1 ., The latest published model by Pokhilko et al . ( 2013 ) describes transcriptional and post-translational interactions between more than a dozen components ., We refer to that model as P2012 6 , in keeping with the tradition of naming the Arabidopsis clock models after author and submission year ( cf . L2005 2 , L2006 7 , P2010 8 and P2011 9 ) ., The clock depends on several genes in the PSEUDO RESPONSE REGULATOR ( PRR ) family: PRR9 , PRR7 , PRR5 , PRR3 and TOC1\/PRR1 are expressed in a clear temporal pattern , with PRR9 mRNA peaking in the morning , PRR7 and PRR5 before and after noon , respectively , and PRR3 and TOC1 near dusk 10 ., PRR9 , PRR7 and PRR5 act to repress expression of CCA1 and LHY during the day 11 , but , until recently , TOC1 was thought to be a nightly activator of CCA1 and LHY , acting through some unknown intermediate ., However , TOC1 has now firmly been shown to be a repressor of both CCA1 and LHY , and it now takes its place in the models as the final repressor of the \u201cPRR wave\u201d 9 , 12\u201314 ., PRR3 has yet to be included in the clock models and the roles of the other PRRs are being reevaluated following the realization that TOC1 acts as a repressor 15 ., The GIGANTEA ( GI ) protein has long been thought to form part of the clock 16 , whereas EARLY FLOWERING 3 ( ELF3 ) was known to affect clock function 17 but was only more recently found to be inside the
clock , rather than upstream of it 18 , 19 ., GI and ELF3 interact with each other and with other clock-related proteins such as the E3 ubiquitin-ligase COP1 20 ., GI plays an important role in regulating the level and activity of ZEITLUPE ( ZTL ) 21 , which in turn affects the degradation of TOC1 22 and PRR5 23 but not of the other PRRs 24 ., The clock models by Pokhilko et al . include GI and ZTL; GI regulates the level of ZTL by sequestering it in a GI-ZTL complex during the day and releasing it at night 8 ., Together with EARLY FLOWERING 4 ( ELF4 ) and LUX ARRHYTHMO ( LUX ) , ELF3 is necessary for maintaining rhythmicity in the clock 25\u201327 ., The three proteins are localized to the nucleus , and ELF3 is both necessary and sufficient for binding ELF4 and LUX into a complex termed the evening complex ( EC ) 19 ., In recent models , EC is a major repressor; it was introduced in P2011 to repress the transcription of PRR9 , LUX , TOC1 , ELF4 and GI 9 ., We here present a model ( F2014 ) of the circadian clock in Arabidopsis , extending and revising the earlier models by Pokhilko et al . 
( P2010\u2013P2012 ) ., To incorporate as much as possible of the available knowledge about the circadian clock into the framework of a mathematical model , we have compiled a large amount of published data to use for model fitting ., These curated data are made available for download as described in Methods ., The aim of this work is to clarify the role of transcriptional activation in the Arabidopsis circadian clock ., Specifically , we use modelling to test whether the available data are compatible with models with and without activation ., There is no direct experimental evidence for any of the activators postulated in earlier models , and as a crucial step in remodelling the system we have removed all transcriptional activation from the equations ., Instead , we have added a major clock component missing from earlier models: the transcription factor REVEILLE 8 ( RVE8 ) , which positively regulates the expression of a large fraction of the clock genes 28 , 29 ., A further addition is the nightly transcription factor NOX\/BROTHER OF LUX ARRHYTHMO ( NOX\/BOA ) , which is similar to LUX but may also act as an activator of CCA1 30 ., By examining transcriptional activation within the framework of our model , we have clarified the relative contributions of the activators to their different targets ., Overexpression of ELF3 rescues clock function in the otherwise arrhythmic elf4-1 mutant 27 ., This suggests that the function of ELF4 is to amplify the effects of ELF3 through the ELF3-ELF4 complex , which led us to consider an evening complex ( EC ) where free ELF3 protein can play the role of ELF3-ELF4 , albeit with highly reduced efficacy ., This , together with our aim to add the NOX protein in parallel with LUX , as described in the next section , prompted us to rethink how to model this part of the clock ., EC is not given its own variable in the differential equations , unlike in the earlier models ., Instead , EC activity is seen as rate-limited by LUX and NOX on
one hand and by ELF3-ELF4 and free ELF3 on the other ., In either pair , the first component is given higher importance , in accordance with previous knowledge ., For details , see the equations in Text S1 ., This simplified description requires few parameters , which was desirable because the model had to be constrained using time course data for the individual components of EC , mainly at the mRNA level ., The effects of our changes to EC are illustrated in Figure 2 , which shows EC and related model components in the transition from cycles of 12 h light , 12 h dark ( LD 12:12 ) to constant light ( LL ) ., ELF3 , which is central to EC in our model , behaved quite differently at the mRNA level compared with the P2011 and P2012 models , and more closely resembled the available experimental data , with a broad nightly peak and a trough in the morning at zeitgeber time ( ZT ) 0\u20134 ( Figure 2A ) ., The differences in the dynamics of the EC components between our eight parameter sets demonstrate an interesting and more general point: The components that are most reliably constrained are not always those that were fitted to measured data ., In our case , the model was fitted to data for the amount of ELF3 mRNA ( Figure 2A ) and total ELF3 protein ( not shown ) , but the distribution between free ELF3 and ELF3 bound in the ELF3-ELF4 complex was not directly constrained by any data ., As expected , the variation between parameter sets was indeed greater for the levels of free ELF3 protein and the ELF3-ELF4 complex , as shown in Figure 2B\u2013C ., However , the predicted level of EC ( Figure 2D ) showed less variation than even the experimentally constrained ELF3 mRNA ., This indicates that the shape and timing of EC were of such importance that the EC profile was , in effect , tightly constrained by data for the seven EC repression targets ( PRR9 , PRR7 , PRR5 , TOC1 , GI , LUX and ELF4 ) ., NOX is a close homologue of LUX , with a highly similar DNA-binding domain 
and a similar expression pattern which peaks in the evening ., Like LUX , NOX can form a complex with ELF3 and ELF4 , but it is only partially redundant with LUX , which has a stronger clock phenotype 31 ., The recruitment of ELF3 to the PRR9 promoter is reduced in the lux-4 mutant and abolished in the LUX\/NOX double amiRNA line 32 ., To explain these findings , we introduced NOX into the model as a component acting in parallel with LUX; we assumed that NOX and LUX play similar roles as transcriptional repressors in the evening complex ., There is evidence that NOX binds to the promoter of CCA1 ( and possibly LHY ) in vivo and activates its transcription ., Accordingly , the peak level of CCA1 expression is higher when NOX is overexpressed , and the period of the clock is longer 30 ., This possible role of NOX as an activator fits badly with its reported redundancy with LUX as a repressor ., In an attempt to resolve this issue , we first modelled the system with NOX only acting as a repressor in EC , and then investigated the effects of adding the activation of CCA1 expression ., Figure 3 illustrates the role of NOX in the model in comparison with LUX ., The differences in their expression profiles ( Figure 3A\u2013B ) reflect the differences in their transcriptional regulation ( cf . 
Figure 1 ) ., CCA1 expression is decreased only marginally in the nox mutant ( Figure 3C\u2013D ) but more so in lux ( Figure 3E ) ., Because of the redundancy between NOX and LUX , the model predicted that the double mutant lux;nox has a stronger impact on circadian rhythms , with CCA1 transcription cut at least in half compared with lux ( Figure S2A ) ., According to the model , the loss of LUX and NOX renders the evening complex completely ineffective , which in turn allows the PRR genes ( including TOC1 ) to be expressed at high levels and thereby repress LHY and CCA1 ., A comparison with the P2011 and P2012 models , which include LUX but not NOX , is shown in Figure 3B , C and E . Here , the most noticeable improvement in our model was the more accurate peak timing after entry into LL , where in the earlier models the clock phase was delayed during the first subjective night 33 ., Period lengthening and increased CCA1 expression were observed in NOX-ox only for some of the parameter sets ( Figure 3F ) ., The four parameter sets with increased CCA1 all had a very weakly repressing NOX whose main effect was to counter LUX by taking its place in EC ., Removing NOX from EC in the equations and reoptimizing a relevant subset of the parameters worsened the fit to the data ( Figure S3 ) ., These results support the idea of NOX acting through EC in a manner that makes it only partially redundant with LUX ., The possibility that NOX is a transcriptional activator of CCA1 and LHY was probed by adding an activating term to the equations ( see Text S1 ) and reoptimizing the parameters that control transcription of CCA1 and LHY ., The resulting activation was very weak in all parameter sets , and had negligible effect on the expression of CCA1 in NOX-ox ( Figure S2B\u2013C ) ., Accordingly , the addition of the activation term did not improve the fit to data as measured by the cost function described in Methods ( Figure S3 ) ., In earlier models that included the PRR genes ,
the PRRs were described as a series of activators; during the day , PRR9 activated the transcription of PRR7 , which similarly activated PRR5 ., These interactions improved the clock\u2019s entrainability to different LD cycles 8 ., However , this sequential activation disagrees with experimental data for prr knockout mutants , which indicate that loss of function of one PRR leaves the following PRR virtually unaffected ., For instance , experiments have shown that the expression levels of PRR5 and TOC1 ( as well as LHY and CCA1 ) are unaffected in both prr9-1 and prr7-3 knockout mutants 11 , 34 ., Instead , direct interactions between the PRRs have been found to be negative and directed from the later PRRs in the sequence to the earlier ones 15 , 35 ., A strong case has been made for TOC1 as a repressor of the PRR genes 9 , 14 ., As in P2012 , we modelled transcription of PRR9 , PRR7 and PRR5 as repressed by TOC1 , but we also included negative auto-regulation of TOC1 , as suggested by the ChIP-seq data that identified the TOC1 target genes 14 ., Likewise , PRR5 directly represses expression of PRR9 and PRR7 35 , and we have added these interactions to the model ., As illustrated in Figure 4A\u2013C , this reformulation of the PRR wave is compatible with correct timing of the expression of the PRRs in the wild type , and the timing and shape of the expression curves were improved compared with the P2012 model ., An earlier version of our model gave similar profiles despite missing the repression by PRR5 , which suggests that such repression is not of great importance to the clock ., A nightly repressor appears to be acting on the PRR7 promoter , as seen in the rhythmic expression of PRR7 in LD in the cca1-11;lhy-21;toc1-21 mutant 36 ., An observed increase in PRR7 expression at ZT 0 in the lux-1 mutant relative to wild type 29 points to EC as a possible candidate ., Although Helfer et al .
report that LUX does not bind to the LUX binding site motif found in the PRR7 promoter 31 , we included EC among the repressors of PRR7 ., This interaction was confirmed by Mizuno et al . while this manuscript was in review 37 , demonstrating the power of modelling and of timely publication of models ., We further let EC repress PRR5 ., We are not aware of any evidence for such a connection , but the parameter fitting consistently assigned a high value to the connection strength , as was also the case with PRR7 ., This result hints that nightly repression of PRR5 is of importance , whether it is caused by EC or some related clock component ., The real test of the model came with knocking out members of the PRR wave ., Here , the model generally outperformed the P2012 model , as judged by eye , but we are missing data for some important experiments such as PRR7 in prr9 ., As an example , Figure 4D shows the level of PRR5 protein in the prr9;prr7 double mutant , where half of our parameter sets predict the correct profile and peak phase ., In the earlier models , the only remaining inputs to PRR5 were ( a hypothetical delayed LHY\/CCA1 ) , TOC1 ( in P2012 only ) and light ( which stabilized the protein ) , and these were unable to shape the PRR5 profile correctly ., The crucial difference in our model was the repression of PRR5 by CCA1 and LHY , as described in the next section ., CCA1 and LHY appear to work as transcriptional repressors in most contexts in the clock ( see e . g . 
38 ) , but knockdown and overexpression experiments seem to suggest that they act as activators of PRR9 and PRR7 34 ., Accordingly , previous models have used activation by LHY\/CCA1 , combined with an acute light response , to accomplish the rapid increase observed in PRR9 mRNA in the morning ., However , with the misinterpretation of TOC1 regulation of CCA1 12 in mind , we were reluctant to assume that the activation is a direct effect ., To investigate this issue , we modelled the clock with CCA1 and LHY acting as repressors of all four PRRs ., If repression was incompatible with the data for any of the PRRs , parameter fitting should reduce the strength of that repression term to near zero ., As is shown in Figure 4E , the model consistently made CCA1 and LHY strongly repress PRR5 and TOC1 ., PRR7 was also repressed , but in a narrower time window that acted to modulate the phase of its expression peak ., In contrast , PRR9 was virtually unaffected; CCA1 and LHY do not directly repress PRR9 in the model ., Even though CCA1 and LHY were not modelled as activators , the model reproduced the reduction in PRR9 expression observed in the cca1-11;lhy-21 double mutant ( Figure 4F and Figure S4 ) ., PRR7 behaved similarly to PRR9 in both experiments and model ., Conversely , in the P2011 and P2012 models , where LHY\/CCA1 was supposed to activate PRR9 , there was no reduction in the peak level of PRR9 mRNA in cca1;lhy compared to wild type ( Figure S5A ) ., To explore whether CCA1 and LHY may be activating PRR9 transcription , we temporarily added an activation term to the equations ( see Text S1 ) and reoptimized the relevant model parameters ., The activation term came to increase PRR9 expression around ZT 2 at least twofold in two of the eight parameter sets , and by a smaller amount in several ( Figure S5B ) ., This would seem to suggest that activation improved the fit between data and model ., Surprisingly , there was no improvement as measured by the cost 
function ( Figure S3 ) ., With the added activation , PRR9 was reduced only marginally more in cca1;lhy than in the original model ( Figure S5C ) ., A likely explanation is that feedbacks through EC and TOC1 , which repress PRR9 , almost completely negate the removed activation of PRR9 in the cca1;lhy mutant ., Thus the model neither requires nor rules out activation of PRR9 by CCA1 and LHY ., Like CCA1 and LHY , RVE8 is a morning expressed MYB-domain transcription factor ., However , unlike CCA1 and LHY , RVE8 functions as an activator of genes with the evening element motif , and its peak activity in the afternoon is strongly delayed in relation to its expression 28 ., Based on experimentally identified targets , we introduced RVE8 into our model as an activator of the five evening expressed clock components PRR5 , TOC1 , GI , LUX and ELF4 , as well as the morning expressed PRR9 29 ., PRR5 binds directly to the promoter of RVE8 to repress its transcription 35 , and it is likely that PRR7 and PRR9 share this function 28 , 29 ., Using only these three PRRs as repressors of RVE8 was sufficient to capture the expression profile and timing of RVE8 , both in LL and LD ( Figure 5A ) ., RVE8 is partially redundant with RVE4 and RVE6 28 , which led us to model the rve8 mutant as a 60% reduction in the production of RVE8 ., To clearly see the effects of RVE8 in the model , we instead compared with the rve4;rve6;rve8 triple mutant , which we modelled as a total knockout of RVE8 function ., The phase of the clock was delayed in LD , and the period lengthened by approximately two hours in LL in the simulated triple mutant , in agreement with data for LHY ( Figure 5B\u2013C ) , though we note that CAB::LUC showed a greater period lengthening in experiments 29 ., To investigate the significance of RVE8 as an activator in the model , we made a version of the model without RVE8 ., The model parameters were reoptimized against the time course data ( excluding data for RVE8 
and from rve mutants ) ., As with NOX , we found that removing the activation had no clear effect on the costs of the parameter sets after refitting ( Figure S3 ) ., It appears that activators such as RVE8 are not necessary for clock function ., Still , the effects of the rve mutants can only be explained when RVE8 is present in the model , motivating its inclusion ., The model used RVE8 as an activator for four of its targets in a majority of the parameter sets ( Figure 5D\u2013F ) ., The exceptions were TOC1 and ELF4 ., Although TOC1 is a binding target of RVE8 in vivo , TOC1 expression is not strongly affected by RVE8-ox or rve8-1 28 , 39 ., This was confirmed by our model , where the parameter fitting disfavoured the activation of TOC1 in most of the parameter sets ( Figure 5E ) ., The eight parameter sets may not represent an exhaustive exploration of the parameter space , but the results nevertheless support the notion that the effect of RVE8 on TOC1 is of marginal importance ., Constraining the many parameters in our model requires a cost function based on a large number of experiments ., To this end , we compiled time course data from the published literature , mainly by digitizing data points from figures using the free software package g3data 40 ., We extracted more than 11000 data points from 800 time courses in 150 different mutants or light conditions , from 59 different papers published between 1998 and 2013 ., The median time resolution was 3 hours ., The list of time courses and publications can be found in Text S2 , and the raw time course data and parameter values are available for download from http:\/\/cbbp.thep.lu.se\/activities\/clocksim ., 
Most of the compiled data refer to the mRNA level , from measurements using Northern blots or qPCR , but there are also data at the protein level ( 67 time courses ) and measurements of gene expression using luciferase assays ( 12 time courses ) ., About one third of the time courses can be considered as replicates , mainly from wild type plants in the most common light conditions ., Many of these data are controls for different mutants ., Where wild type and mutant data were plotted with the same normalization , we made note of this , as their relative levels provide crucial information that is lost if the curves are individually normalized ., To find suitable values for the model parameters , we constructed a minimalistic cost function based on the mean squared error between simulations and time course data ., This approach was chosen to allow the model to capture as many features of the gene expression profiles as possible , with a minimum of human input ., The cost function consists of two parts , corresponding to the profiles and levels of the time course data , respectively ., For each time course with experimental data points , the corresponding simulated data were obtained from the model ., The simulations were performed with the mutant background represented in the model equations , with entrainment for up to 50 days in light\/dark cycles followed by measurements , all in the experimental light conditions ., The cost for the concentration profile was computed as ( 1 ) Since the profile levels are thus normalized , eq . ( 1 ) is independent of the units of measurement ., The parameters ( see Text S2 for values ) allowed us to weight time courses to reflect their relative importance , e . g . where less data was available to constrain some part of the model ., Where several experimental time courses had the same normalization , e . g . 
in comparisons between wild type and mutants , the model should reproduce the relative changes in expression levels between the time courses ., For each group of time courses , we could minimize the sum ( 2 ) Unlike eq . ( 1 ) , the numerators in this sum are guaranteed to be non-zero , which allows us to operate in log-space where fold changes up or down from the mean will be equally penalized ., Replacing with and likewise for we write the final scaling cost for group as ( 3 ) This cost term thus penalizes non-uniform scaling between experiment and data within the group ., The total cost to minimize was ( 4 ) where sets the balance between fitting the simulation to the profile or the level of the data ., We used ., A downside to our approach is that period and phase differences between different data sets result in fitting to a mean behaviour that is more damped than any individual data set ., To reduce this problem , we removed the most obvious outliers from the fitting procedure ., We also considered distorting the time axis ( e . g . dynamic time warping ) to normalize the period of oscillations in constant conditions , in order to better capture the effects of mutants relative to the wild type ., This process would be cumbersome and arbitrary , which is why it was deemed outside the scope of our efforts ., Compared to previous models by Pokhilko et al . 
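The two-part cost described above can be illustrated with a short sketch. This is an assumed reconstruction from the surrounding prose (the typeset equations (1)–(4) are not reproduced in this text), so the exact weighting and symbols are ours, not necessarily the paper's:

```python
import math

def profile_cost(sim, exp, weight=1.0):
    # Eq. (1)-style term (assumed form): mean squared error between
    # mean-normalized simulated and experimental profiles, so the cost
    # is independent of the units of measurement.
    sim_mean = sum(sim) / len(sim)
    exp_mean = sum(exp) / len(exp)
    return weight * sum((s / sim_mean - e / exp_mean) ** 2
                        for s, e in zip(sim, exp)) / len(exp)

def scaling_cost(sim_levels, exp_levels):
    # Eq. (3)-style term (assumed form): for a group of time courses that
    # share one experimental normalization, penalize non-uniform scaling
    # between simulation and data.  Working in log-space makes fold
    # changes up and down from the group mean equally costly.
    ratios = [math.log(s / e) for s, e in zip(sim_levels, exp_levels)]
    mean_ratio = sum(ratios) / len(ratios)
    return sum((r - mean_ratio) ** 2 for r in ratios)

def total_cost(profile_terms, scaling_terms, beta=1.0):
    # Eq. (4)-style total: beta balances profile fit against level fit.
    return sum(profile_terms) + beta * sum(scaling_terms)
```

A shape-matched but uniformly rescaled simulation incurs zero profile cost, and the scaling cost vanishes only when every time course in a group is off by the same factor.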
, fewer parameters were manually constrained in our model ., In the P2010\u2013P2012 models , roughly 40% of the parameters were constrained based on the experimental data 6 , 8 , 9 , and the remaining free parameters were fitted to mRNA profiles in LD and the free running period in LL and DD ( constant dark ) in wild type and mutants 9 ., For the F2014 model , we completely constrained 16 parameters in order to obtain correct dynamics for parts of the system where we lacked sufficient time course data ., Specifically , the parameters governing COP1 were taken from P2011 where they were introduced , whereas the parameters for the ZTL and GI proteins ( except the GI production and transport rates ) were fitted by hand to the figures in 41 ., All other parameters were fitted to the collected time course data through the cost function ., The eight parameter sets presented here were selected from a group of 30 , where each was independently seeded from the best of 1000 random points in parameter space , then optimized using parallel tempering for iterations at four different temperatures , which were gradually lowered ., The resulting parameter values , which are listed in Text S1 , typically span at least an order of magnitude between the different parameter sets ( Figure S6 ) ., The sensitivity of the cost function to parameter perturbations is presented in Figure S7 and further discussed in Text S1 ., Plots of the single best parameter set against all experimental data are shown in Figure S8 ., To simulate the system and evaluate the cost function rapidly enough for parameter optimization to be feasible , we developed a C++ program that implements ODE integration and parameter optimization using the GNU Scientific Library 42 ., Evaluating the cost function for a single point in parameter space , against the full set of experiments and data , took about 10 seconds on a 3 GHz Intel Core i7 processor ., Our software is released under the GNU General Public License ( GPL ) 
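The seeding-then-tempering search described above can be sketched as a toy version of parallel tempering on a generic cost function. This is our own illustration, not the authors' C++/GSL implementation, and it uses fixed rather than gradually lowered temperatures for brevity:

```python
import math
import random

def parallel_tempering(cost, init, temps=(1.0, 0.5, 0.2, 0.1),
                       iters=2000, step=0.1, seed=0):
    # One replica per temperature performs Metropolis moves; neighbouring
    # replicas periodically attempt state swaps so that good states found
    # at high temperature can migrate to colder, more greedy replicas.
    rng = random.Random(seed)
    states = [list(init) for _ in temps]
    costs = [cost(s) for s in states]
    best, best_cost = list(init), costs[0]
    for _ in range(iters):
        for k, temp in enumerate(temps):
            cand = [x + rng.gauss(0.0, step) for x in states[k]]
            c = cost(cand)
            # accept downhill moves always, uphill with Boltzmann probability
            if c < costs[k] or rng.random() < math.exp(-(c - costs[k]) / temp):
                states[k], costs[k] = cand, c
                if c < best_cost:
                    best, best_cost = list(cand), c
        # attempt a swap between a random pair of adjacent temperatures
        k = rng.randrange(len(temps) - 1)
        delta = (1 / temps[k] - 1 / temps[k + 1]) * (costs[k] - costs[k + 1])
        if delta >= 0 or rng.random() < math.exp(delta):
            states[k], states[k + 1] = states[k + 1], states[k]
            costs[k], costs[k + 1] = costs[k + 1], costs[k]
    return best, best_cost
```

The swap criterion is the standard Metropolis rule for exchanging replicas at inverse temperatures 1/T_k and 1/T_{k+1}.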
43 and is available from http:\/\/cbbp.thep.lu.se\/activities\/clocksim\/ ., Accurately modelling the circadian clock as a network of a dozen or more genes is challenging ., Previous modelling work ( e . g . P2010\u2013P2012 ) 6 , 8 , 9 has drawn on existing data and knowledge to constrain the models , but as the amount of data increases it becomes ever more difficult to keep track of the effects of mutations and other perturbations ., For a system as large as the plant circadian clock , it is desirable to automate the parameter search as much as possible , but encoding the uncertainties surrounding experimental data in a computer-evaluated cost function is not trivial ., Our modelling demonstrates the feasibility of fitting a model of an oscillating system against a large set of data without the construction of a complicated cost function based on qualitative aspects of the model output , such as entrainability , free-running period or amplitude ., Instead , we relied on the large amount of compiled time course data to constrain the model , using a direct comparison between simulations and data ., This minimalistic cost function had the additional advantage of allowing the use of time courses that span a transition in environmental conditions , e . g . from rhythmic to constant light , where the transient behaviour of the system may contain valuable information ., Consequently , our model correctly reproduces the phase of the clock after such transitions ( see e . g . Figure 3C ) ., Our approach makes it easy to add new data , at the price of ignoring previous knowledge ( e . g . 
, clock period ) from reporters that are not represented in the model ., Accordingly , our primary modelling goal was not to reproduce the correct periods of different clock mutants , but rather to capture the profiles of mRNA and protein curves , and the changes in amplitude and profile between mutants and different light conditions ., Compiling a large amount of data from different sources has allowed us to see patterns in expression profiles that were not apparent without independent replication ., For example , the TOC1 mRNA profile shows a secondary peak during the night in many data sets ( see examples in Figure 4B ) ., All collected time course data were used in fitting the parameters ., To validate the model , we instead used independently obtained period data from clock period mutants ., The results are shown in Text S1 ., In brief , most predictions in LL are in good agreement with experiments , with the exception of elf4 where the period changes in the wrong direction ., To experimentally measure a specific parameter value , such as the nuclear translocation rate of a protein , is exceptionally challenging ., Hence , constraining a model with measured parameters can introduce large uncertainties in the model predictions , especially when the understanding of the full system is incomplete ., Fitting the model with free parameters can instead give a large spread in individual parameter values , but result in a set of models that make well constrained predictions ., For this reason , we have based our results on an ensemble of independently optimized parameter sets , as recommended by Gutenkunst et al . 
44 ., At the cost of computational time , this approach gives a more accurate picture of the uncertainties in the model and its predictions , rather than focusing on individual parameter values ., Based on our experience of curation of time course data , we offer some suggestions for how data can be compiled and treated to be more useful to modellers ., These points arose in the context of the circadian clock , but they apply to experiments that are to be used for modelling in a broader context ., Two of these suggestions concern the preservation of information about the relative expression levels between experiments ., One example of the value of such information comes from the dramatic reduction in PRR9 expression in cca1;lhy ( Figure 4F ) ., As implied in the section on PRR9 activation in Results , clock models ought to be able to explain both shape and level of expression curves in such mutant experiments , but this is only possible if that information is present in the data ., Based on the current knowledge of the clock , most clock components are exclusively or primarily repressive , and RVE8 sets itself apart by functioning mainly ( or solely ) as an activator ., According to our model , RVE8 has only a marginal effect on the expression of TOC1 , but activates PRR5 and other genes more strongly , in agreement with earlier interpretations of the experimental data 29 ., We note that all six targets of RVE8 in the model ( PRR9 , PRR5 , TOC1 , GI , LUX and ELF4 ) are also binding targets of TOC1 14 ., This may be a coincidence , because TOC1 is a repressor of a majority of the genes in the model ., It is conceivable , however , that activation by RVE8 around noon is gated by TOC1 to confer sensitivity to the timing of RVE8 relative to TOC1 in a controlled fashion ., We were surprised by the ease with which we could remove RVE8 from the model ., After reoptimization of the parameters , the cost was decreased in three of the eight parameter sets compared with the 
original model ( Figure S3 ) ., Thus , the clock is not dependent on activation for its function ( although it should be noted that the model without RVE8 lost the ability to explain any RVE8-related experiments ) ., This result indicates that the model possesses a high degree of flexibility , whereby the remaining components and parameters are able to adjust and restore the behaviour of the system ., Such flexibility challenges our ability to test hypotheses about individual interactions in the model , but we argue that predictions can also be made based on entropy ., Even if an alteration to the model , such as the addition of RVE8 , does not result in a significant change in the cost function , it may open up new parts of the high-dimensional parameter space ., If , following local optimization , most parameter sets indicate that a certain interaction is activating , we may conclude that the activation is likely to be true ., The parameter space is sampled in accordance with the prior belief that the model should roughly minimize the cost function , and the same reasoning motivates the use of an ensemble of parameter sets to explore the model ., The conclusion about activation is indeed strengthened by the use of multiple parameter sets , because we learn whether it is valid in different areas of the parameter space ., Our model agrees with a majority of the compiled data sets , but like earlier models it also fails to fit data for some mutants ., This indicates that important clock components or interactions may yet be unknown or misinterpreted ., We here give a few examples ., 
interactions , and mathematical modelling has been useful in guiding clock research in model organisms such as Arabidopsis thaliana ., We present a model of the circadian clock in Arabidopsis , based on a large corpus of published time course data ., It appears from experimental evidence in the literature that most interactions in the clock are repressive ., Hence , we remove all transcriptional activation found in previous models of this system , and instead extend the system by including two new components , the morning-expressed activator RVE8 and the nightly repressor\/activator NOX ., Our modelling results demonstrate that the clock does not need a large number of activators in order to reproduce the observed gene expression patterns ., For example , the sequential expression of the PRR genes does not require the genes to be connected as a series of activators ., In the presented model , transcriptional activation is exclusively the task of RVE8 ., Predictions of how strongly RVE8 affects its targets are found to agree with earlier interpretations of the experimental data , but generally we find that the many negative feedbacks in the system should discourage intuitive interpretations of mutant phenotypes ., The dynamics of the clock are difficult to predict without mathematical modelling , and the clock is better viewed as a tangled web than as a series of loops .","summary":"Like most living organisms , plants are dependent on sunlight , and evolution has endowed them with an internal clock by which they can predict sunrise and sunset ., The clock consists of many genes that control each other in a complex network , leading to daily oscillations in protein levels ., The interactions between genes can be positive or negative , causing target genes to be turned on or off ., By constructing mathematical models that incorporate our knowledge of this network , we can interpret experimental data by comparing with results from the models ., Any discrepancy between 
experimental data and model predictions will highlight where we are lacking in understanding ., We compiled more than 800 sets of measured data from published articles about the clock in the model organism thale cress ( Arabidopsis thaliana ) ., Using these data , we constructed a mathematical model which compares favourably with previous models for simulating the clock ., We used our model to investigate the role of positive interactions between genes , whether they are necessary for the function of the clock and if they can be identified in the model .","keywords":"systems biology, physiological processes, computer and information sciences, network analysis, physiology, chronobiology, biology and life sciences, regulatory networks, computational biology, computerized simulations","toc":null} +{"Unnamed: 0":521,"id":"journal.pcbi.1006080","year":2018,"title":"Bamgineer: Introduction of simulated allele-specific copy number variants into exome and targeted sequence data sets","sections":"The emergence and maturation of next-generation sequencing technologies , including whole genome sequencing , whole exome sequencing , and targeted sequencing approaches , has enabled researchers to perform increasingly more complex analysis of copy number variants ( CNVs ) 1 ., While genome sequencing-based methods have long been used for CNV detection , these methods can be confounded when applied to exome and targeted sequencing data due to non-contiguous and highly-variable nature of coverage and other biases introduced during enrichment of target regions1\u20135 ., In cancer , this analysis is further challenged by bulk tumor samples that often yield nucleic acids of variable quality and are composed of a mixture of cell-types , including normal stromal cells , infiltrating immune cells , and subclonal cancer cell populations ., Circulating tumor DNA presents further challenges due to a multimodal DNA fragment size distribution and low amounts of tumor-derived DNA in blood 
plasma ., Therefore , development of CNV calling methods on arbitrary sets of tumor-derived data from public repositories may not reflect the type of tumor specimens encountered at an individual centre , particularly formalin-fixed paraffin-embedded tissues routinely profiled for diagnostic testing ., Due to the lack of a ground truth for validating CNV callers , many studies have used simulation to model tumor data6 ., Most often , simulation studies are used in an ad-hoc manner using customized formats to validate specific tools and settings with limited adaptability to other tools ., More generalizable approaches aim at the de novo generation of sequencing reads according to a reference genome ( e . g . wessim3 , Art-illumina7 , and dwgsim8 ) ., However , de novo simulated reads do not necessarily capture subtle features of empirical data , such as read coverage distribution , DNA fragment insert size , quality scores , error rates , strand bias and GC content6; factors that can be more variable for exome and targeted sequencing data particularly when derived from clinical specimens ., Recently , Ewing et al . 
developed a tool , BAMSurgeon , to introduce synthetic mutations into existing reads in a Binary Alignment\/Map ( BAM ) file9 ., BAMSurgeon provides support for adjusting variant allele fractions ( VAF ) of engineered mutations based on prior knowledge of overlapping CNVs but does not currently support direct simulation of CNVs themselves ., Here we introduce Bamgineer , a tool to modify existing BAM files to precisely model allele-specific and haplotype-phased CNVs ( Fig 1 ) ., This is done by introducing new read pairs sampled from existing reads , thereby retaining biases of the original data such as local coverage , strand bias , and insert size ., As input , Bamgineer requires a BAM file and a list of non-overlapping genomic coordinates to introduce allele-specific gains and losses ., The user may explicitly provide known haplotypes or choose to use the BEAGLE10 phasing module that we have incorporated within Bamgineer ., We implemented parallelization of the Bamgineer algorithm for both standalone and high performance computing cluster environments , significantly improving the scalability of the algorithm ., Overall , Bamgineer gives investigators complete control to introduce CNVs of arbitrary size , magnitude , and haplotype into an existing reference BAM file ., We have uploaded all software code to a public repository ( http:\/\/github.com\/pughlab\/bamgineer ) ., 
For all proof-of-principle experiments , we used exome sequencing data from a single normal ( peripheral blood lymphocyte ) DNA sample ., DNA was captured using the Agilent SureSelect Exome v5+UTR kit and sequenced to 220X median coverage as part of a study of neuroendocrine tumors ., Reads were aligned to the hg19 build of the human genome reference sequence and processed using the Genome Analysis Toolkit ( GATK ) Best Practices pipeline ., Following the validation of our tool for readily-detected chromosome- and arm-level events , we next used Bamgineer to simulate CNV profiles mimicking 3 exemplar tumors from each of 10 different cancer types profiled by The Cancer Genome Atlas using the Affymetrix SNP6 microarray platform: lung adenocarcinoma ( LUAD ) ; lung squamous cell ( LUSC ) ; head and neck squamous cell carcinoma ( HNSC ) ; glioblastoma multiforme ( GBM ) ; kidney renal cell carcinoma ( KIRC ) ; bladder ( BLCA ) ; colorectal ( CRC ) ; uterine cervix ( UCEC ) ; ovarian ( OV ) , and breast ( BRCA ) cancers ( Table 1 ) ., To select 3 exemplar tumors for each cancer type , we chose profiles that best represented the copy number landscape for each cancer type ., First , we addressed over-segmentation of the CNV calls from the microarray data by merging segments of <500 kb in size with the closest adjacent segment and removing the smaller event from the overlapping gain and loss regions ., We then assigned a score to each tumor that reflects its similarity to other tumors of the same cancer type ( S7 Fig ) ., This score integrates the total number of CNV gains and losses ( Methods , Eq 6 ) , median size of each gain and loss , and the overlap of CNV regions with GISTIC peaks for each cancer type as reported by The Cancer Genome Atlas ( Table 1 ) ., We selected three high ranking tumors for each cancer type such that , together , all significant GISTIC15 peaks for that tumor type were represented ., A representative profile from a 
single tumor is shown in Fig 2C ., Subsequently , for each of the 30 selected tumor profiles ( 3 for each of 10 cancer types ) , we introduced the corresponding CNVs at 5 levels of tumor cellularity ( 20 , 40 , 60 , 80 , and 100% ) resulting in 150 BAM files in total ., For each BAM file , we used Sequenza to generate allele-specific copy number calls as done previously ., Tumor\/normal log2 ratios are shown in Fig 3 for one representative from each cancer type ., From this large set of tumors , we next set out to compare Picard metrics and CNV calls as we did for the arm- and chromosome-level pilot ., We evaluated Bamgineer using several metrics: tumor allelic ratio , SNP phasing consistency , and tumor to normal log2 ratios ( Fig 4 ) ., As expected , across all regions of a single copy gain , tumor allelic ratio was ~0 . 66 ( interquartile range: 0 . 62\u20130 . 7 ) for the targeted haplotype and 0 . 33 ( interquartile range: 0 . 3\u20130 . 36 ) for the other haplotype ., As purity was decreased , we observed a corresponding decrease in allelic ratios , from 0 . 66 down to 0 . 54 ( interquartile range: 0 . 5\u20130 . 57 ) for the targeted haplotype and an increase ( from 0 . 33 ) to 0 . 47 ( interquartile range: 0 . 43\u20130 . 5 ) for the other haplotype for 20% purity ( Fig 4A and 4B ) ., These changes correlated directly with decreasing purity ( R2 > 0 . 99 ) for both haplotypes ., Similarly , for single copy loss regions , as purity was decreased from 100% to 20% the allelic ratio linearly decreased ( R2 > 0 . 99 ) from ~0 . 99 ( interquartile range: 0 . 98\u20131 . 0 ) to ~0 . 55 ( interquartile range: 0 . 51\u20130 . 58 ) for the targeted haplotype and increased from 0 to ~0 . 43 ( interquartile range: 0 . 4\u20130 . 
46 ) for the other haplotype ( Fig 4B ) ., The results for log2 tumor to normal depth ratios of segments normalized for average ploidy were also consistent with the expected values ( Methods , Eq 2 ) ., For CNV gain regions , log2 ratio decreased from ~0 . 58 ( log2 of 3\/2 ) to ~0 . 13 as purity was decreased from 100% to 20% ., For CNV loss regions , as purity was decreased from 100% to 20% , the log2 ratio increased from -1 ( log2 of 1\/2 ) to -0 . 15 , consistent with Eq 2 ( Fig 4C; S1-S4 for individual cancers ) ., Ultimately , we wanted to assess whether Bamgineer was introducing callable CNVs consistent with segments corresponding to the exemplar tumor set ., To assess this , we calculated an accuracy metric ( Fig 4D ) as: accuracy = ( TP + TN ) \/ ( TP + TN + FP + FN ) , where TP , TN , FP and FN represent the number of calls from Sequenza corresponding to true positives ( perfect matches to desired CNVs ) , true negatives ( regions without CNVs introduced ) , false positives ( CNV calls outside of target regions ) and false negatives ( target regions without CNVs called ) ., TP , TN , FP and FN were calculated by comparing Sequenza absolute copy number ( predicted ) to the target regions for CNV introduction , in 1 Mb bins across the genome ., As tumor content decreased , accuracy for both gains and losses decreased as false negatives became increasingly prevalent due to small shifts in log2 ratios ., We note that ( as expected ) , decreasing cancer purity from 100% to 20% generally decreases the segmentation accuracy ., Additionally , we observe that segmentation accuracy is , on average , significantly higher for gain regions compared to the loss regions for tumor purity levels below 40% ( Fig 4D ) ., This is consistent with previous studies that show the sensitivity of CNV detection from sequencing data is slightly higher for CNV gains compared to CNV losses16 ., We also note that with decreasing cancer purity , segmentation accuracy follows a linear pattern of decline 
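The purity dependence quoted above follows from simple tumor\u2013normal mixture arithmetic. The sketch below is our own derivation (not Bamgineer or Sequenza code) and reproduces the reported values for heterozygous single-copy gains and losses:

```python
import math

def gain_ratios(purity):
    # Heterozygous single-copy gain: tumor cells carry 2 copies of the
    # targeted haplotype and 1 of the other; normal cells carry 1 of each.
    total = 2 * (1 - purity) + 3 * purity        # average copies per cell
    targeted = (1 - purity) + 2 * purity         # copies of gained haplotype
    allelic_ratio = targeted / total
    log2_ratio = math.log2(total / 2)            # tumor/normal depth ratio
    return allelic_ratio, log2_ratio

def loss_ratios(purity):
    # Heterozygous single-copy loss: tumor cells retain only the
    # targeted haplotype.
    total = 2 * (1 - purity) + 1 * purity
    targeted = (1 - purity) + 1 * purity
    return targeted / total, math.log2(total / 2)
```

At 100% purity this gives an allelic ratio of 2/3 and a log2 ratio of log2(3/2) ~ 0.58 for a gain, shifting to ~0.545 and ~0.14 at 20% purity; for a loss it gives 1.0 and -1 at full purity, shifting to ~0.56 and ~-0.15 at 20%, matching the figures quoted in the text.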
for gain regions and an abrupt stepwise decline for loss regions ( Fig 4D; segmentation accuracies are approximately similar for 40% and 20% tumor purities ) ., Finally , we observed a degree of variation in terms of segmentation accuracy across individual cancer types ( S1\u2013S4 Figs ) ., Segmentation accuracy was lower for LUAD , OV and UCEC compared to other simulated cancer types for this study ., The relative decline in performance is seen in cancer types where CNV gains and losses cover a sizeable portion of the genome; and hence , the original loss and gain events sampled from TCGA had significant overlaps ., As a result , after resolving overlapping gain and loss regions ( S7 Fig ) , on average , the final target regions constitute a larger number of small ( < 200 kb ) loss regions immediately followed by gain regions and vice versa; making accurate segmentation challenging for the CBS ( circular binary segmentation ) algorithm implemented by Sequenza , which relies on the presence of heterozygous SNPs ., This can cause uncertainties in assignments of segment boundaries ., In summary , application of an allele-specific caller to BAMs generated by Bamgineer recapitulated CNV segments consistent with >95% ( medians: 95 . 1 for losses and 97 . 2 for gains ) of those input to the algorithm ., However , we note some discrepancies between the expected and called events , primarily due to small CNVs as well as large segments of unprobed genome between exonic sequences ., To evaluate the use of Bamgineer for circulating tumor DNA analysis , we simulated the presence of an EGFR gene amplification in read alignments from a targeted 5-gene panel ( 18 kb ) applied to cell-free DNA from a healthy donor and sequenced to >50 , 000X coverage ., To mirror concentrations of tumor-derived fragments commonly encountered in cell-free DNA17 , 18 , we introduced gain of an EGFR haplotype at frequencies of 100 , 10 , 1 , 0 . 1 , and 0 . 
01% ., This haplotype included 3 SNPs covered by our panel , which were phased and subjected to allele-specific gain accordingly ., As with the exome data , we observed shifts in coverage of specific allelic variants , and haplotype representation consistent with the targeted allele frequencies ( Fig 5A , Supplemental S1 Table ) ., Furthermore , read pairs introduced to simulate gene amplification retain the bimodal insert size distribution characteristic of cell-free DNA fragments ( Fig 5B and 5C ) ., While this experiment showcases the ability of Bamgineer to faithfully represent features of original sequencing data while controlling allelic amplification at the level of the individual reads , these subtle shifts are currently beyond the sensitivity of conventional CNV callers when applied to small , deeply covered gene panels ., Therefore , it is our hope that Bamgineer may be of value to aid development of new methods capable of detecting copy number variants supported by a small minority of DNA fragments in a specimen ., Bamgineer is computationally intensive and the runtime of the program is dictated by the number of reads that must be processed , a function of the coverage of the genomic footprint of target regions ., To ameliorate this computational burden , we employed a parallelized computing framework to maximize use of a high-performance compute cluster environment when available ., We took advantage of two features in designing the parallelization module ., First , we required that added CNVs are independent for each chromosome ( although nested events can likely be engineered through serial application of Bamgineer ) ., Second , since we did not model interchromosomal CNV events , each chromosome can be processed independently ., As such , CNV regions for each chromosome can be processed in parallel and aggregated as a final step ., S8 Fig shows the runtimes for The Cancer Genome Atlas simulation experiments ., Using a single node
with 12 cores and 128 GB of RAM , each synthetic BAM took less than 3 . 5 hours to generate ., We also developed a version of Bamgineer that can be launched from Sun Grid Engine cluster environments ., It uses the Python pipeline-management package ruffus to parallelize tasks automatically and log runtime events ., It is highly modular and easily updatable ., If disrupted during a run , the pipeline can continue to completion without re-running previously completed intermediate steps ., Here , we introduced Bamgineer , a tool to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping ( BAM ) file , obtained from exome and targeted sequencing experiments ., As proof of principle , we generated , from a single high coverage ( mean: 220X ) BAM file derived from a human blood sample , a series of 30 new BAM files containing a total of 1 , 693 simulated copy number variants ( on average , 56 CNVs comprising 1800 Mb , i . e . ~55% of the genome per tumor ) corresponding to profiles from exemplar tumors for each of 10 cancer types ., To demonstrate quantitative introduction of CNVs , we further simulated 4 levels of tumor cellularity ( 20 , 40 , 60 , 80% purity ) resulting in an additional 120 new tumor BAM files ., We validated our approach by comparing CNV calls and inferred purity values generated by an allele-specific CNV-caller ( Sequenza 14 ) as well as a focused comparison of allelic variant ratios , haplotype-phasing consistency , and tumor\/normal log2 ratios for inferred CNV segments ( S1\u2013S4 Figs ) ., In every case , inferred purity values were within \u00b15% of the targeted purity , and the majority of engineered CNV regions were correctly called by Sequenza ( accuracy > 94%; S1\u2013S4 Figs ) ., Allele variant ratios were also consistent with the expected values for both the targeted and the other haplotypes ( Median within \u00b13% of expected value ) ., Median tumor\/normal log2 ratios were within \u00b15% of the
expected values ., To demonstrate feasibility beyond exome data , we next evaluated these same metrics in a targeted 5-gene panel applied to a cell-free DNA sequencing library generated from a healthy blood donor and sequenced to >10 , 000X coverage 17 ., To simulate concentrations of tumor-derived fragments typically encountered in cancer patients , we introduced EGFR amplifications at frequencies of 100 , 10 , 1 , 0 . 1 , and 0 . 01% ., As with the exome data , we observed highly specific shifts in allele variant ratios , log2 coverage ratios , and haplotype representation consistent with the targeted allele frequencies ., Our method also retained the bimodal DNA insert size distribution observed in the original read alignment ., However , it is worth noting that these minute shifts are currently beyond the sensitivity of existing CNV callers when applied to small , deeply covered gene panels ., Consequently , we anticipate that Bamgineer may be of value to aid development of new methods capable of detecting copy number variants supported by a small minority of DNA fragments ., In the experiments conducted in this study , we limited ourselves to autosomes and to a maximum total copy number of 4 ., Naturally , Bamgineer can readily simulate higher-level copy number states and alter sex chromosomes as well ( S10 Fig ) ., While chromosome X in diploid state ( e . g . XX in normal female ) is treated identically to autosomes , for both X and Y chromosomes beginning in a haploid state ( e . g . XY in normal male ) , the haplotype phasing step is skipped and Bamgineer samples all reads on these chromosomes independently ., For high-level amplifications , the ability of Bamgineer to faithfully retain the features of the input BAM file ( e . g .
DNA fragment insert size , quality scores and so on ) , depends on intrinsic factors such as the length of the desired CNV , mean depth of coverage and fragment length distribution of the original input BAM file ( see Materials and Methods ) ., The significance of this work in the context of CNV inference in cancer is twofold:, 1 ) users can simulate CNVs using their own locally-generated alignments so as to reflect lab- , biospecimen- , or pipeline-specific features;, 2 ) bioinformatic methods development can be better supported by ground-truth sequencing data reflecting CNVs without reliance on test data generated from suboptimal tissue or plasma specimens ., Bamgineer addresses both problems by creating standardized sequencing alignment files ( BAM format ) harbouring user-defined CNVs that can readily be used for algorithm optimization , benchmarking and other purposes ., We expect our approach to be applicable for tuning algorithms for detection of subtle CNV signals such as somatic mosaicism or circulating tumor DNA ., As these subtle shifts are beyond the sensitivity of many CNV callers , we expect our tool to be of value for the development of new methods for detecting such events trained on conventional DNA sequencing data ., By providing the ability to create customized user-generated reference data , Bamgineer will prove valuable in the development and benchmarking of CNV calling and other sequence data analysis tools and pipelines ., The work presented herein can be extended in several directions ., First , Bamgineer is not able to reliably perform interchromosomal operations such as chromosomal translocations , as our focus has been on discrete regions probed by exome and targeted panels ., Second , while Bamgineer is readily applicable to whole genome sequence data , sufficient numbers of reads are required for re-pairing when introducing high-level amplifications ., As such , shallow ( 0 .
1-1X ) or conventional ( ~30X ) whole genome sequence data may only be amenable to introduction of arm-level alterations as smaller , focal targets may not contain sufficient numbers of reads to draw from to simulate high-level amplifications ., Additionally , in our current implementation , we limited the simulated copy numbers to non-overlapping regions ., Certainly , such overlapping CNV regions occur in cancer and iterative application of Bamgineer may enable introduction of complex , nested events ., Finally , introduction of compound , serially acquired CNVs may be of interest to model subclonal phylogeny developed over time in bulk tumor tissue samples ., The user provides 2 mandatory inputs to Bamgineer as command-line arguments: 1 ) a BAM file containing aligned paired-end sequencing reads ( \u201cNormal . bam\u201d ) , 2 ) a BED file containing the genome coordinates and type of CNVs ( e . g . allele-specific gain ) to introduce ( \u201cCNV regions . bed\u201d ) ., Bamgineer can be used to add four broad categories of CNVs: Balanced Copy Number Gain ( BCNG ) , Allele-specific Copy Number Gain ( ASCNG ) , Allele-specific Copy Number Loss ( ACNL ) , and Homozygous Deletion ( HD ) ., For example , consider a genotype AB at a genomic locus where A represents the major and B represents the minor allele ., Bamgineer can be applied to convert that genomic locus to any of the following copy number states:, {A , B , AAB , ABB , AABB , AAAB , ABBB , \u2026}, An optional VCF file containing phased germline calls can be provided ( phased_het . vcf ) ., If this file is not provided , Bamgineer will call germline heterozygous single nucleotide polymorphisms ( SNPs ) using the GATK HaplotypeCaller and then categorize alleles likely to be co-located on the same haplotypes using BEAGLE and population reference data from the HapMap project ., To obtain paired-reads in CNV regions of interest , we first intersect Normal .
bam with the targeted regions overlapping user-defined CNV regions ( roi . bed ) ., This operation generates a new BAM file ( roi . bam ) ., Subsequently , depending on whether the CNV event is a gain or loss , the algorithm performs two separate steps as follows ., To introduce copy number gains , Bamgineer creates new read-pairs constructed from existing reads within each region of interest ., This approach thereby avoids introducing pairs that many tools would flag as molecular duplicates due to read 1 and read 2 having start and end positions identical to an existing pair ., If desired , these read pairs can be restricted to reads meeting a specific SAM flag ., For our exome experiments , we used read pairs with a SAM flag equal to 99 , 147 , 83 , or 163 , i . e . read paired , read mapped in proper pair , mate reverse ( forward ) strand , and first ( second ) in pair ., To enable support for the bimodal distribution of DNA fragment sizes in ctDNA , we removed the requirement for \u201cread mapped in proper pair\u201d and used read pairs with a SAM flag equal to 97 , 145 , 81 , or 161 ., Users considering engineering of reads supporting large inserts or intrachromosomal read pairs may also want to remove the requirement for \u201cread mapped in proper pair\u201d ., Additionally , we required that the selection of the newly paired read be within \u00b150% ( \u00b120% for ctDNA ) of the original read size ., The newly created read-pairs are given unique read names to avoid confusion with the original input BAM file ., To enable inspection of these reads , these newly created read pairs are stored in a new BAM file , gain_re_paired_renamed . bam , prior to merging into the final engineered BAM ., Since we only consider high quality reads ( i . e .
properly paired reads , primary alignments and mapping quality > 30 ) , the newly created BAM file contains fewer reads compared to the input file ( ~90\u201395% in our proof-of-principle experiments ) ., As such , at every transition we log the ratio of the number of reads in the input and output files ., High-level copy number amplification ( ASCN > = 4 ) ., To achieve copy number amplifications higher than 4 , during the read\/mate pairing step , we pair each read with more than one mate read ( Fig 1 ) to generate more new reads ( to accommodate the desired copy number state ) ., However , since ( as stated above ) a small portion of newly created paired reads does not meet the inclusion criteria , we create more reads than necessary in the initial phase and use sampling to adjust the count in a later phase ., For instance , to simulate a copy number of 6 , in theory we need to create two new read pairs for every input read ., Hence , in the initial \u201cre-pairing\u201d step we aim to create four paired reads per read ( instead of 3 ) , so that the newly created BAM file includes enough reads ( as a rule of thumb , we use a read-pairing window size ~20% higher than the theoretical value ) ., It should be noted that the maximum copy number amplification that can faithfully retain the features of the input BAM file ( e . g . DNA fragment insert size , quality scores and so on ) , depends on intrinsic factors such as the length of the desired CNV , mean depth of coverage and fragment length distribution of the original input BAM file ., Introduction of mutations according to haplotype state ., To ensure newly constructed read-pairs match the desired haplotype , we alter the base at heterozygous SNP locations ( phased_het . vcf ) within each read according to the haplotype provided by the user or inferred using the BEAGLE algorithm ., To achieve this , we iterate through the set of re-paired reads used to increase coverage ( gain_re_paired_renamed .
bam ) and modify bases overlapping SNPs corresponding to the target haplotype ( phased_het . vcf ) ., We then write these reads to a new BAM file ( gain_re_paired_renamed_mutated . bam ) prior to merging into the final engineered BAM ( S9 Fig ) ., As an illustrative example , consider two heterozygous SNPs , AB and CD , both with allele frequencies of ~0 . 5 in the original BAM file ( i . e . approximately half of the reads supporting reference bases and the other half supporting alternate bases ) ., To introduce a 2-copy gain of a single haplotype , reads to be introduced must match the desired haplotype rather than the two haplotypes found in the original data ., If heterozygous AB and CD are both located on a haplotype composed of alternate alleles , at the end of this step , 100% of the newly re-paired reads will support alternate base-pairs ( e . g . BB and DD ) ., Based on the haplotype structure provided , other haplotype combinations are possible including AA\/DD , BB\/CC , etc ., Sampling of reads to reflect desired allele fraction ., Depending on the absolute copy number desired for CNV gain regions , we sample the BAM files according to the desired copy number state ., We define the conversion coefficient as the ratio of total reads in the BAM created in the previous step ( gain_re_paired_renamed_mutated . bam ) to the total reads extracted from the original input file ( roi . bam ) :, \u03c1 = ( no . of reads in gain_re_paired_renamed_mutated . bam ) \/ ( no . of reads in roi . bam ) , ( 1 ), According to the maximum absolute copy number ( ACN ) for simulated CNV gain regions ( defined by the user ) , two scenarios are conceivable as follows ., Copy number gain example ., For instance , to achieve a single copy gain ( ACN = 3 , e . g . ABB copy state ) , the file from the previous step ( gain_re_paired_renamed_mutated . bam ) should be sub-sampled such that the average depth of coverage is half that of the extracted reads from the target regions of the original input normal file ( roi .
bam ) ., Thus , the final sampling rate is calculated by dividing \u00bd ( 0 . 5 ) by \u03c1 ( subsample gain_re_paired_renamed_mutated . bam such that we have half of the roi . bam depth of coverage for the region; in practice the adjusted sampling rate is in the range of 0 . 51\u20130 . 59 , i . e . 0 . 85 < \u03c1 < 1 for CN = 3 ) and the new reads are written to a new BAM file ( gain_re_paired_renamed_mutated_sampled . bam ) that we then merge with the original reads ( roi . bam ) to obtain gain_final . bam ., Similarly , to obtain a three copy number gain ( ACN = 5 ) and the desired genotype ABBBB , the gain_re_paired_renamed_mutated . bam is subsampled such that the depth of coverage is 3\/2 ( 1 . 5 ) times that of the extracted reads from the target regions from the original input normal file ( note that as explained during the new paired-read generation step , we have already created more reads than needed ) ., To introduce CNV losses , Bamgineer removes reads from the original BAM corresponding to a specific haplotype and does not create new read pairs from existing ones ., To diminish coverage in regions of simulated copy number loss , we sub-sample the BAM files according to the desired copy number state and write these to a new file ., The conversion coefficient is defined similarly as the number of reads in loss_mutated . bam divided by the number of reads in roi_loss . bam ( > ~0 . 98 ) ., Similar to CNV gains , the sampling rate is adjusted such that after the sampling , the average depth of coverage is half that of the extracted reads from the target regions ( calculated by dividing 0 . 5 by the conversion coefficient , as the absolute copy number is 1 for loss regions ) ., Finally , we subtract the reads in CNV loss BAMs from the input . bam ( or input_sampled . bam ) and merge the results with the CNV gain BAM ( gain_final .
bam ) to obtain the final output BAM file harbouring the desired copy number events ., To validate that the new paired-reads generated from the original BAM files show a similar probability distribution , we used the two-sided Kolmogorov\u2013Smirnov ( KS ) test ., The critical D-values were calculated for \u03b1 = 0 . 01 as follows:, D\u03b1 = c ( \u03b1 ) \u221a ( ( n1 + n2 ) \/ ( n1 n2 ) ) , ( 2 ), where the coefficient c ( \u03b1 ) is obtained from the table of critical values for the KS test ( https:\/\/www . webdepot . umontreal . ca\/Usagers\/angers\/MonDepotPublic\/STT3500H10\/Critical_KS . pdf; 1 . 63 for \u03b1 = 0 . 01 ) and n1 and n2 are the number of samples in each dataset ., To assess tumor allelic ratio consistency , for each SNP the theoretical allele frequency parameter was used as a reference point ( Eq 3 ) ., Median , interquartile range and mean were drawn from the observed values for each haplotype-event pair for all the SNPs ., The boxplot distribution of the allele frequencies was plotted and compared against the theoretical reference point ., To assess the segmentation accuracy , we used log2 tumor to normal depth ratios of segments normalized for mean ploidy as the metric , where the mean ploidy is calculated as in Eqs 4 and 5 ., To benchmark the performance of segmentation , we used accuracy as the metric ., Statistical analysis was performed with the functions in the R statistical computing package using RStudio ., Theoretical expected values ., The expected value for tumor allelic frequencies at heterozygous SNP loci for a tumor purity level of p ( 1-p: normal contamination ) is calculated as follows:, AF ( snp ) = ( p AFt cnt + ( 1\u2212p ) AFn cnn ) \/ ( p cnt + ( 1\u2212p ) cnn ) , ( 3 ), where AFt and AFn represent the expected allele frequencies for tumor and normal and cnt and cnn the expected copy number for tumor and normal at specific SNP loci ., For the CNV events used in this experiment , AFt is 1\/3 or 2\/3 for gain CNVs and 1 or 0 for loss CNVs according to the haplotype information ( whether
or not they are located on the haplotype that is affected by each CNV ) ., The expected value for the average ploidy ( \u2205^ ) is calculated as follows, \u2205^ = ( 1\/W ) ( \u2211i=1n cng wgi + \u2211j=1m cnl wlj + cnn ( W \u2212 G \u2212 L ) ) , ( 4 ), , where cng , cnl , cnn , wg and wl represent the expected ploidy for gain , loss and normal regions , and the length of individual gain and loss events , respectively ., G , L , and W represent the total length ( in base pairs ) of gain regions , loss regions , and the entire genome ( ~3e9 ) , respectively ., The expected log2ratio for each segment is calculated as follows, log2ratio ( seg ) = log2 ( ( p \u00d7 cnseg + ( 1\u2212p ) \u00d7 cnn ) \/ \u2205^ ) , ( 5 ), , where cnseg is the segment mean from Sequenza output , p is the tumor purity , and \u2205^ is the average ploidy calculated above ., cnn is the copy number of the copy neutral region ( i . e . 2 ) ., Similarity score to rank TCGA tumors ., The similarity score for a specific cancer type ( c ) and a sampled tumor ( t ) is calculated as follows:, S ( c , t ) =1\/ ( |2gt\u2212Gc\u2212Go|+|2lt\u2212Lc\u2212Lo|+\u03f5 ), ( 6 ), , where gt , Gc , Go represent the total number of gains for a specific tumor sampled from The Cancer Genome Atlas ( after merging adjacent regions and removing overlapping regions ) , the median number of gains for the specific tumor type , and the number of gain events overlapping with GISTIC peaks , respectively; lt , Lc , Lo represent the above quantities for CNV loss regions ( \u03f5 is an arbitrary small positive value to avoid a zero denominator ) ., The higher the score , the closer the sampled tumor is to an exemplar tumor of the specific cancer type .","headings":"Introduction, Results, Discussion, Materials and methods","abstract":"Somatic copy number variations ( CNVs ) play a crucial role in the development of many human cancers ., The broad availability of next-generation sequencing data has enabled the development of algorithms to computationally infer CNV profiles from a variety of data types including exome and
targeted sequence data , currently the most prevalent types of cancer genomics data ., However , systematic evaluation and comparison of these tools remains challenging due to a lack of ground truth reference sets ., To address this need , we have developed Bamgineer , a tool written in Python to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping ( BAM ) file , with a focus on targeted and exome sequencing experiments ., As input , this tool requires a read alignment file ( BAM format ) , lists of non-overlapping genome coordinates for introduction of gains and losses ( bed file ) , and an optional file defining known haplotypes ( vcf format ) ., To improve runtime performance , Bamgineer introduces the desired CNVs in parallel using queuing and parallel processing on a local machine or on a high-performance computing cluster ., As proof-of-principle , we applied Bamgineer to a single high-coverage ( mean: 220X ) exome sequence file from a blood sample to simulate copy number profiles of 3 exemplar tumors from each of 10 tumor types at 5 tumor cellularity levels ( 20\u2013100% , 150 BAM files in total ) ., To demonstrate feasibility beyond exome data , we introduced read alignments to a targeted 5-gene cell-free DNA sequencing library to simulate EGFR amplifications at frequencies consistent with circulating tumor DNA ( 10 , 1 , 0 . 1 and 0 . 01% ) while retaining the multimodal insert size distribution of the original data ., We expect Bamgineer to be of use for development and systematic benchmarking of CNV calling algorithms using locally-generated data for a variety of applications ., The source code is freely available at http:\/\/github .
com\/pughlab\/bamgineer .","summary":"We present Bamgineer , a software program to introduce user-defined , haplotype-specific copy number variants ( CNVs ) at any frequency into standard Binary Alignment Mapping ( BAM ) files ., Copy number gains are simulated by introducing new DNA sequencing read pairs sampled from existing reads and modified to contain SNPs of the haplotype of interest ., This approach retains biases of the original data such as local coverage , strand bias , and insert size ., Deletions are simulated by removing reads corresponding to one or both haplotypes ., In our proof-of-principle study , we simulated copy number profiles from 10 cancer types at varying cellularity levels typically encountered in clinical samples ., We also demonstrated introduction of low frequency CNVs into cell-free DNA sequencing data that retained the bimodal fragment size distribution characteristic of these data ., Bamgineer is flexible and enables users to simulate CNVs that reflect characteristics of locally-generated sequence files and can be used for many applications including development and benchmarking of CNV inference tools for a variety of data types .","keywords":"sequencing techniques, alleles, genetic mapping, genome analysis, copy number variation, molecular genetics, molecular biology techniques, research and analysis methods, sequence analysis, genome complexity, sequence alignment, bioinformatics, molecular biology, genetic loci, haplotypes, dna sequence analysis, heredity, database and informatics methods, genetics, biology and life sciences, genomics, dna sequencing, computational biology","toc":null} +{"Unnamed: 0":2225,"id":"journal.pcbi.1006772","year":2019,"title":"A component overlapping attribute clustering (COAC) algorithm for single-cell RNA sequencing data analysis and potential pathobiological implications","sections":"Single cell ribonucleic acid sequencing ( scRNA-seq ) offers advantages for characterization of cell types and 
cell-cell heterogeneities by accounting for dynamic gene expression of each cell across biomedical disciplines , such as immunology and cancer research 1 , 2 ., Recent rapid technological advances have considerably expanded the single cell analysis community , as exemplified by The Human Cell Atlas ( THCA ) 3 ., Single cell sequencing technology offers high-resolution cell-specific gene expression profiles for potentially unraveling the mechanisms of individual cells ., The THCA project aims to describe each human cell by the expression level of approximately 20 , 000 human protein-coding genes; however , the representation of each cell is high dimensional , and the human body has trillions of cells ., Furthermore , scRNA-seq technologies have suffered from several limitations , including low mean expression levels in most genes and higher frequencies of missing data than bulk sequencing technology 4 ., Development of novel computational technologies for routine analysis of scRNA-seq data is urgently needed for advancing precision medicine 5 ., Inferring gene-gene relationships ( e . g .
, regulatory networks ) from large-scale scRNA-seq profiles remains limited ., Traditional approaches to gene co-expression network analysis are not suitable for scRNA-seq data due to a high degree of cell-cell variability ., For example , LEAP ( Lag-based Expression Association for Pseudotime-series ) is an R package for constructing gene co-expression networks using different time points at the single cell level 6 ., The Partial information decomposition ( PID ) algorithm aims to predict gene-gene regulatory relationships 7 ., Although these computational approaches are designed to infer gene co-expression networks from scRNA-seq data , they suffer from low resolution at the single-cell or single-gene levels ., In this study , we introduced a network-based approach , termed Component Overlapping Attribute Clustering ( COAC ) , to infer novel gene-gene subnetworks in individual components ( a subset of the whole set of components ) representing multiple cell types and cell phases of scRNA-seq data ., Each gene co-expression subnetwork represents the co-expression relationship occurring in certain cells ., The scoring function identifies co-expression networks by quantifying uncoordinated gene expression changes across the population of single cells ., We showed that gene subnetworks identified by COAC from scRNA-seq profiles were highly correlated with the survival rate of melanoma patients and drug responses in cancer cell lines , indicating a potential pathobiological application of COAC ., If broadly applied , COAC can offer a powerful tool for identifying gene-gene networks from large-scale scRNA-seq profiles in multiple diseases in the ongoing development of precision medicine ., In this study , we present a novel algorithm for inferring gene-gene networks from scRNA-seq data ., Specifically , a gene-gene network represents the co-expression relationship of certain components ( genes ) , which indicates the localized ( cell subpopulation ) co-expression from large-scale
scRNA-seq profiles ( Fig 1 ) ., Specifically , each gene subnetwork is represented by one or multiple feature vectors , which are learned from the scRNA-seq profile of the training set ., For the test set , each gene expression profile can be transformed to a feature value by one or several feature vectors which measure the degree of coordination of gene co-expression ., Since the feature vectors are learned from the relative expression of each gene , batch effects can be eliminated by normalization of relatively co-expressed genes ( see Methods ) ., In addition to showing that COAC can be used for batch effect elimination , we further validated COAC by illustrating three potential pathobiological applications: ( 1 ) cell type identification in two large-scale human scRNA-seq datasets ( 43 , 099 and 43 , 745 cells respectively , see Methods ) ; ( 2 ) gene subnetworks identified from melanoma patient-derived scRNA-seq data showing high correlation with survival of melanoma patients from The Cancer Genome Atlas ( TCGA ) ; ( 3 ) gene subnetworks identified from scRNA-seq profiles which can be used to predict drug sensitivity\/resistance in cancer cell lines ., We collected scRNA-seq data generated with the 10x scRNA-seq protocol 7 , 8 ., In total , 14 , 032 cells extracted from peripheral blood mononuclear cells ( PBMC ) in systemic lupus erythematosus ( SLE ) patients were used as the case group and 29 , 067 cells were used as the control group ( see Methods ) ., For the case group , we used 12 , 277 cells for the training set and the remaining 1 , 755 cells for the validation set ., For the control group , we used 25 , 433 cells for the training set and 3 , 634 for the validation set ., After filtering with average correlation and average component ratio thresholds ( see Methods ) , we obtained 93 , 951 co-expression subnetworks ( gene clusters with components ) by COAC ., We transformed these co-expression gene clusters to feature vectors ., Features whose variance
distribution was significantly different in the case group versus the control group were kept ( see Methods ) ., Using a t-SNE algorithm implemented in the R package tsne 9 , we found that the single cells ( from the case group ) which were retrieved directly from the patients can be more robustly separated from the control group cells ( Fig 2B ) , compared to the original data ( Fig 2A ) without applying COAC ., Thus , the t-SNE analysis reveals that batch effects can be significantly reduced by COAC ( Fig 2 ) ., We next turned to examine whether COAC can be used for cell type identification ., We collected a scRNA-seq dataset of 14 , 448 single cells in an IFN-\u03b2 stimulated group and 14 , 621 single cells in the control group 8 ., To remove factors caused by the stimulation conditions or experimental batch effects , we selected 13 , 003 cells in the IFN-\u03b2 stimulated group and 13 , 158 cells in the control group as the training set to obtain homogeneous feature vectors for each cell ., The remaining scRNA-seq data were used as the validation set ., We generated the gene subnetworks by COAC and transformed the subnetworks into feature vectors for individual cells ( see Methods ) ., We found that cells from IFN-\u03b2 stimulated and control groups were separated significantly ( Fig 3A ) by t-SNE 9 ., However , without applying COAC , cells from the IFN-\u03b2 stimulated and control groups are uniformly distributed in the whole space ( Fig 3B ) , suggesting that components which separate IFN-\u03b2 stimulated cells from control cells were eliminated from the feature vector identified by COAC ., We further collected a scRNA-seq dataset including a total of 43 , 745 cells with well-defined cell types from a previous study 10 ., We built a training set ( 21 , 873 cells ) and a validation set ( 21 , 872 cells ) of approximately equal size ., In the training set , we generated co-expression subnetworks as the feature vector by COAC ., For the validation set
, we grouped the total cells into five main categories as described previously 10 ., Fig 3C shows that COAC-inferred subnetworks can be used to distinguish five different cell types with high accuracy ( cell types were identified correctly for 83.05% of cells ) in the t-SNE analysis , indicating that COAC can identify cell types from heterogeneous scRNA-seq profiles ., We next inspected potential pathobiological applications of COAC in identifying possible prognostic biomarkers or pharmacogenomics biomarkers in cancer ., Specifically , we first examined whether COAC-inferred gene co-expression subnetworks can be used as potential prognostic biomarkers in clinical samples ., We identified gene subnetworks from scRNA-seq data of melanoma patients 11 ., Using a feature selection pipeline , we filtered the original subnetworks according to the difference of means and variances between two different groups ( e . g . , malignant cells versus control cells ) to prioritize top gene co-expression subnetworks ( S1A Fig ) ., We collected the bulk gene expression data and clinical data for 458 melanoma patients from the TCGA website 12 ., Applying COAC , we identified two gene co-expression subnetworks with the highest co-expression correlation in malignant cells compared to control cells ( S1B Fig ) ., For each subnetwork , we then calculated the co-expression correlation in bulk RNA-seq profiles of melanoma patients ., Using the rank of co-expression values of melanoma patients , the top 32 patients were selected as group 1 and the bottom 32 patients were selected as group 2 ., A log-rank test was employed to compare the survival rates of the two groups 13 ., We found that gene subnetworks identified by COAC from melanoma patient-derived scRNA-seq data can predict patient survival rate ( Fig 4A and Fig 4B ) ., KRAS is an oncogene in multiple cancer types 14 , including melanoma 15 ., Herein we found that co-expression among KRAS , HADHB , and PSTPIP1 can significantly predict patient
survival rate ( P-value = 4.09\u00d710\u22125 , log-rank test , Fig 4B ) ., Thus , regulation of KRAS-HADHB-PSTPIP1 may offer a new pathobiological pathway and potential biomarkers for predicting patient\u2019s survival in melanoma ., We next focused on gene co-expression subnetworks in several known melanoma-related pathways , such as the MAPK , cell-cycle , DNA damage response , and cell death pathways 16 , by comparing the differences in means and variances between T cells and other cells using COAC ( see Methods ) ., For each gene co-expression subnetwork identified by COAC , we selected 32 patients who had enriched co-expression correlation and 32 patients who had lost the co-expression pattern ., We found that multiple COAC-inferred gene subnetworks significantly predicted melanoma patient survival rate ( Fig 4C\u20134F ) ., For example , we found that BRAF-PSMB3-SNRPD2 predicts survival significantly ( P-value = 0.0058 , log-rank test , Fig 4C ) , revealing new potential disease pathways for BRAF melanoma ., CDKN2A , encoding cyclin-dependent kinase inhibitor 2A , plays important roles in melanoma 17 ., Herein we found a potential regulatory subnetwork , RBM6-CDKN2A-MRPL10-MARCKSL , which is highly correlated with melanoma patients\u2019 survival rate ( P-value = 0.019 , log-rank test , Fig 4F ) ., We also identified several new potential regulatory subnetworks for TP53 that are highly correlated with patients\u2019 survival rate ( Fig 4D and 4E ) ., Multiple novel COAC-inferred gene co-expression subnetworks that are significantly associated with patient\u2019s survival rate are provided in S2 Fig . 
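The grouping-and-comparison step described here (rank patients by a subnetwork score, take the top and bottom groups, compare survival with a log-rank test) can be sketched in a few lines. This is a minimal, self-contained illustration with synthetic survival times, not the authors' pipeline; `logrank_p` is a plain two-sample log-rank test (chi-square with 1 df, p-value via the complementary error function):

```python
import math

def logrank_p(times_a, events_a, times_b, events_b):
    """Two-sample log-rank test: compare observed vs. expected events in
    group A at each distinct event time; chi-square statistic with 1 df."""
    event_times = sorted(set(
        [t for t, e in zip(times_a, events_a) if e] +
        [t for t, e in zip(times_b, events_b) if e]))
    o_minus_e = var = 0.0
    for t in event_times:
        n_a = sum(1 for x in times_a if x >= t)   # at risk in group A
        n_b = sum(1 for x in times_b if x >= t)   # at risk in group B
        d_a = sum(1 for x, e in zip(times_a, events_a) if e and x == t)
        d_b = sum(1 for x, e in zip(times_b, events_b) if e and x == t)
        n, d = n_a + n_b, d_a + d_b
        if n < 2:
            continue
        o_minus_e += d_a - d * n_a / n            # observed minus expected
        var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    if var == 0.0:                                # no informative event times
        return 1.0
    chi2 = o_minus_e ** 2 / var
    # survival function of a chi-square with 1 df: erfc(sqrt(chi2 / 2))
    return math.erfc(math.sqrt(chi2 / 2))

# Synthetic example: "group 1" (top-ranked co-expression) dies earlier
p = logrank_p([12, 14, 20, 25, 30], [1, 1, 1, 1, 1],
              [60, 65, 70, 80, 90], [1, 1, 1, 1, 1])
```

With these synthetic times the test yields a small p-value, mirroring the kind of group separation reported for the COAC-derived subnetworks.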
Altogether , gene regulatory subnetworks identified by COAC can shed light on new disease mechanisms , uncovering possible functional consequences of known melanoma genes , and offer potential prognostic biomarkers in melanoma ., COAC-inferred prognostic subnetworks should be further validated in multiple independent cohorts before clinical application ., To examine the potential pharmacogenomics application of COAC , we collected robust multi-array ( RMA ) gene expression profiles and drug response data ( IC50 , the half-maximal inhibitory concentration ) across 1,065 cell lines from the Genomics of Drug Sensitivity in Cancer ( GDSC ) database 18 ., We selected six drugs in this study based on two criteria: ( i ) the highest variances of IC50 among over 1,000 cell lines , and ( ii ) drug targets across diverse pathways: SNX-2112 ( a selective Hsp90 inhibitor ) , BX-912 ( a PDK1 inhibitor ) , Bleomycin ( induction of DNA strand breaks ) , PHA-793887 ( a pan-CDK inhibitor ) , PI-103 ( a PI3K and mTOR inhibitor ) , and WZ3105 ( also named GSK-2126458 and Omipalisib , a PI3K inhibitor ) ., We first identified gene co-expression subnetworks from melanoma patients\u2019 scRNA-seq data 11 by COAC ., The COAC-inferred subnetworks , together with the RMA gene expression profiles of bulk cancer cell lines , were then transformed into a matrix: each column of this matrix represents a feature vector and each row represents a cancer cell line from the GDSC database 18 ., We then trained an SVM regression model using the LIBSVM 19 R package with default parameters and a linear kernel ( see Methods ) ., We defined cell lines whose IC50 was higher than 10 \u03bcM as drug-resistant cell lines ( or non-antitumor effects ) , and the rest as drug-sensitive cell lines ( or potential antitumor effects ) ., As shown in Fig 5A\u20135F , the area under the receiver operating characteristic curve ( AUC ) ranges from 0.728 to 0.783 across the six drugs during 10-fold cross-validation , revealing high accuracy for prediction of drug responses by COAC-inferred gene subnetworks ., To illustrate the underlying drug resistance mechanisms , we showed two subnetworks identified by COAC for SNX-2112 ( Fig 5G ) and BX-912 ( Fig 5H ) , respectively ., SNX-2112 , a selective Hsp90 ( encoded by HSP90B1 ) inhibitor , has been reported to have potential antitumor effects in preclinical studies , including melanoma 20 , 21 ., We found that several HSP90B1 co-expressed genes ( such as CDC123 , LPXN , and GPX1 ) in scRNA-seq data may be involved in SNX-2112\u2019s resistance pathways ( Fig 5G ) ., GPX1 22 and LPXN 23 have been reported to play crucial roles in multiple cancer types , including melanoma ., BX-912 , a PDK1 inhibitor , has been shown to suppress tumor growth in vitro and in vivo 24 ., Fig 5H shows that several PDK1 co-expressed genes ( such as TEX264 , NCOA5 , ANP32B , and RWDD3 ) may mediate the underlying mechanisms of BX-912\u2019s responses in cancer cells ., NCOA5 25 and ANP32B 26 were reported previously in various cancer types ., Collectively , COAC-inferred gene co-expression subnetworks from individual patients\u2019 scRNA-seq data offer insights into potential underlying mechanisms and new biomarkers for assessment of drug responses in cancer cells ., In this study , we proposed a network-based approach to infer gene-gene relationships from large-scale scRNA-seq data ., Specifically , COAC identified novel gene-gene co-expression in certain components ( a subset of all components ) representing multiple cell types and cell phases , which can overcome the high degree of cell-cell variability in scRNA-seq data ., We found that COAC reduced batch effects ( Fig 2 ) and identified specific cell types with high accuracy ( 83% , Fig 3C ) in two large-scale human scRNA-seq datasets ., More importantly , we showed that gene co-expression subnetworks identified by COAC from scRNA-seq data
were highly correlated with patients\u2019 survival rate from TCGA data and drug responses in cancer cell lines ., In summary , COAC offers a powerful computational tool for identification of gene-gene regulatory networks from scRNA-seq data , suggesting potential applications for the development of precision medicine ., There are several improvements in COAC compared to traditional gene co-expression network analysis approaches from RNA-seq data of bulk populations ., Gene co-expression subnetwork identification by COAC is nearly unsupervised , and only a few parameters need to be determined ., Since gene overlap among co-expression subnetworks is allowed , the number of co-expression subnetworks is orders of magnitude larger than the number of genes ., Gene co-expression subnetworks identified by COAC can capture the underlying information of cell states or cell types ., In addition , gene subnetworks identified by COAC shed light on underlying disease pathways ( Fig 4 ) and offer potential pharmacogenomics biomarkers with well-defined molecular mechanisms ( Fig 5 ) ., We acknowledge several potential limitations in the current study ., First , the number of predicted gene co-expression subnetworks is very large ., It remains a daunting task to select a few biologically relevant subnetworks from the large number of COAC-predicted gene subnetworks ., Second , as COAC is a gene co-expression network analysis approach , subnetworks identified by COAC are not entirely independent ., Thus , the features used for computing similarities among cells are not strictly orthogonal ., In the future , we may improve the accuracy of COAC by integrating the human protein-protein interactome networks and additional , already known , gene-gene networks , such as pathway information 27\u201329 ., In addition , we could improve COAC further by applying deep learning approaches 30 for large-scale scRNA-seq data analysis ., In summary , we reported a novel network-based tool , COAC , for
gene-gene network identification from large-scale scRNA-seq data ., COAC accurately identifies cell types and offers potential diagnostic and pharmacogenomic biomarkers in cancer ., If broadly applied , COAC would offer a powerful tool for identifying gene-gene regulatory networks from scRNA-seq data in immunology and human diseases in the development of precision medicine ., In COAC , a subnetwork is represented by the eigenvectors of its adjacency correlation matrix ., In practice , the gene regulatory relationships represented by each subnetwork are not always unique ., The relationships captured in each subnetwork represent a superposition of two or several regulatory relationships , each with its own weight , as shown in S3A Fig . We thereby used multiple components ( i . e . , top eigenvectors with large eigenvalues ) to represent the co-expression subnetworks ., As shown in S3B Fig , a regulatory relationship between two genes can be captured in different co-expression subnetworks ., Herein , we integrated matrix factorization 31 into the workflow of closed frequent pattern mining 32 ., Specifically , the set of closed frequent patterns contains the complete itemset information regarding the corresponding frequent patterns 32 ., Here , a closed frequent pattern is defined such that if two itemsets appear in the same samples , only the superset is kept ., For a general gene expression matrix , to obtain a sparse distribution of genes in each latent variable , a matrix factorization method such as sparse principal component analysis ( PCA ) 33 can be chosen ., In this study , because the scRNA-seq data matrix is highly sparse , singular value decomposition ( SVD ) is chosen for matrix factorization ( i . e . 
, the SVD of A is given by UσV* ) ., The robust rank r is defined in the S1 Text ., Components that are greater than rank r are selected and then each attribute is treated as the linearly weighted sum of components ( D_i = w_{i1} P_1 + w_{i2} P_2 + … + w_{ir} P_r ) ., The projection of gene distribution i over principal component j can be expressed as D_i^T P_j / ( ‖D_i‖ ‖P_j‖ ) , where ‖P_j‖ = 1 ., Then , D ( i , j ) = D_i^T P_j / ( ‖D_i‖ ‖P_j‖ ) = D_i^T P_j / ‖D_i‖ = w_{ij} / ‖D_i‖ and −1|q| ) ( Fig 5A ) ., In these patterns , however , the larger absolute value of the divergence angle deviated considerably from 180° , whereas it should be very close to 180° in orixate phyllotaxis ( Fig 5B ) ., These patterns showed nonorthogonal tetrastichy , which is distinct in appearance from the orthogonal tetrastichy of orixate phyllotaxis ( Fig 5C ) ., Therefore , we concluded that the tetrastichous patterns found in simulations with DC1 are not orixate and that DC1 does not generate the orixate phyllotactic pattern at any parameter setting ., The failure of DC1 to produce normal orixate phyllotaxis , in which the divergence angles are exactly ±180° and ±90° , can be explained analytically ( S1 Text ) ., Next , we examined whether modification of DC1 could enable it to produce orixate phyllotaxis ., In an attempt to modify DC1 , we focused on the inhibitory power of each leaf primordium against new primordium formation—which is assumed to be constant in DC models but may possibly change during leaf development—and expanded DC1 by introducing age-dependent , sigmoidal changes in the inhibitory power ., In this expanded version of DC1 ( EDC1 ) , the inhibitory field strength I ( θ ) was redefined as the summation of the products of the age-dependent change in the inhibitory power and the distance-dependent decline of its effect:, I ( θ ) ≡ ∑_{m=1}^{n−1} { k ( d_m ( θ ) )^{−η} F ( n−m ) } ., ( 9 ), F is defined as:, F ( Δt ) ≡ 1 / ( 1 + e^{−a ( Δt−b )} ) ,, ( 10 ), where parameters a and b are constants that represent the rate and timing , respectively , of the age-dependent changes in the inhibitory power ., Under this equation , the inhibitory power increases with age when a>0 and decreases when a<0 ., In the present study , η was fixed at 2 for EDC1 ., Prior to computer simulation analysis with EDC1 , we searched for parameters of EDC1 that can fit the requirements of normal orixate phyllotaxis ., When the normal pattern of orixate phyllotaxis is stably maintained , a rectangular coordinate system with the origin at the center of the shoot apex can be set such that all primordia lie on the coordinate axes , and every fourth primordium is located on the same axis in the same direction , i . e . , the position of any primordium ( the mth primordium ) can be expressed as ( r_m cos θ_{m−4i} , r_m sin θ_{m−4i} ) for integers i ., Under this condition , we considered whether a new primordium ( the nth primordium ) is produced at the position ( R_0 cos θ_{n−4i} , R_0 sin θ_{n−4i} ) , to keep the normal orixate phyllotactic pattern ., In EDC1 , as in DC1 , new primordium formation at ( R_0 cos θ_{n−4i} , R_0 sin θ_{n−4i} ) implies that the inhibitory field strength I ( θ ) on the circle M has a minimum at θ_{n−4i} ., For this reason , we first attempted to solve the following equation:, dI ( θ ) / dθ |_{θ=θ_{n−4i}} = 0 ., ( 11 ), This equation was numerically solved under two geometrical situations of primordia: the divergence angle between the newly arising primordium and the last primordium is ±90° ( situation 1 ) or ±180° ( situation 2 ) ( S1A Fig ) ., The solutions obtained identified parameter sets that satisfied the above equation under both situations ( Fig 6A , S1B Fig ) ., The
calculation of I ( \u03b8 ) using the identified parameter sets showed that I ( \u03b8 ) has a local and global minimum around \u03b8n\u22124i with large values of G , such as 0 . 5 or 1 , while it has a local maximum instead of a minimum around \u03b8n\u22124i with small G values , such as 0 . 1 ( S1C Fig ) ., This result indicates the possibility that EDC1 can form orixate phyllotaxis as a stable pattern under a particular parameter setting with large G values ., We conducted computer simulations using EDC1 over broad ranges of parameters and found that EDC1 could generate tetrastichous alternate patterns in addition to distichous and spiral patterns ( Fig 6B ) ., The tetrastichous patterns included orthogonal tetrastichous ones with a four-cycle divergence angle change of approximately 180\u00b0 , 90\u00b0 , \u2212180\u00b0 , and \u221290\u00b0 , which can be regarded as orixate phyllotaxis ( Fig 6C , S2 Fig ) ., Under the conditions of assuming an age-dependent increase in the inhibitory power ( a>0 ) , these orixate patterns were formed within a rather narrow parameter range of G = 0 . 5~1 , a = 1~2 , and b = 4~9 around the parameter settings that were determined by numerical solution , to fit the requirements for the stable maintenance of normal orixate phyllotaxis ( Fig 6B and 6C ) ., When assuming an age-dependent decrease in the inhibitory power ( a<0 ) , orixate phyllotaxis appeared at a point of G = 0 . 1 , a\u2248\u221210 , and b\u22483 . 
5 ( Fig 6B and 6C ) ., These values of a and b represent a very sharp drop in the inhibitory power at the primordial age corresponding to approximately three plastochron units ., Around this parameter condition , there were no numerical solutions for normal orixate phyllotaxis; however , patterns that were substantially orixate , although they were not completely normal , could be established ., The orixate patterns that were generated under the conditions in which the inhibitory power increased and decreased were visually characterized by sparse primordia around the small meristem and dense primordia around the large meristem , respectively ( Fig 6C ) ., In the results of computer simulations with EDC1 , besides the orixate patterns , we also found peculiar patterns with an x-cycle change in the divergence angle consisting of 180\u00b0 followed by an ( x\u22121 ) -times repeat of 0\u00b0 ( S3 Fig ) ., Such patterns were generated when all the parameters a , b , and G were set to relatively large values and are displayed as periodic distribution of black regions in the upper right area of the middle and right panels of Fig 6B ., In these patterns , as b is increased , the number of repetition times of 0\u00b0 is increased , resulting in the shift from x-cycle to ( x+1 ) -cycle ., This shift is mediated by the occurrence of spiral patterns with a small divergence angle , and the transitions from x-cycle to spiral and from spiral to ( x+1 ) -cycle takes place suddenly in response to a slight change of b ( S3 Fig ) ., DC2 , as DC1 , is an inhibitory field model but is more generalized than DC1 17 ., Unlike DC1 , DC2 does not assume one-by-one formation of primordia at a constant time interval and thus does not exclude whorled phyllotactic patterning ., Indeed , DC2 was shown to produce all major patterns of either alternate or whorled phyllotaxis depending on parameter conditions 17 ., To test whether DC2 can generate orixate phyllotactic patterns , we carried out 
extensive computer simulation analyses using this model ., Our computer simulations confirmed that major phyllotactic patterns , such as distichous , Fibonacci spiral , Lucas spiral , decussate , and tricussate patterns , are formed as stable patterns in wide ranges of parameters , and also showed formation of tetrastichous alternate patterns with a four-cycle change of the divergence angle at N = 1 and Γ≈1.8 when initiated by placing a single primordium at the SAM periphery ( Fig 7A ) ., The possible inclusion of orixate phyllotaxis in these tetrastichous four-cycle patterns was carefully examined based on the ratio of plastochron times and the ratio of absolute values of divergence angles , which should be much larger than 0 and close to 0.5 , respectively , in orixate phyllotaxis ., Although all the tetrastichous four-cycle patterns detected here had a divergence angle ratio near 0.5 , their ratios of plastochron times were too small to be regarded as orixate phyllotaxis , and the overall characteristics indicated that they are rather similar to decussate phyllotaxis ( Fig 7B and 7C ) ., These results led to the conclusion that the DC2 system does not generate orixate phyllotaxis under any parameter conditions ., Similar to the approach used for DC1 , we expanded DC2 by introducing primordial age-dependent changes in the inhibitory power ., In this expanded version of DC2 ( EDC2 ) , the inhibitory field strength I ( θ ) was redefined as the summation of the products of the age-dependent change in the inhibitory power and the distance-dependent decrease of its effect:, I ( θ ) ≡ ∑_{m=1}^{n−1} { E ( d_m ( θ ) / d_0 ) F ( t_m ) } ,, ( 12 ), where F is a function expressing a temporal change in the inhibitory power , defined as:, F ( t ) ≡ 1 / ( 1 + e^{−A ( t−B )} ) ., ( 13 ), Computer simulations using EDC2 were first conducted under a wide range of combinations of A and B at three different settings of Γ ( Γ = 1 , 2 , or 3 ) 
and fixed conditions for α and N ( α = 1 , N = 1\/3 ) ( S4 Fig ) ., In this analysis , tetrastichous four-cycle patterns were formed within the parameter window where A was 3\u20137 and B was 0.4\u20131 , which represents a late and slow increase in the inhibitory power during primordium development ( Fig 8A ) ., Further analysis performed by changing Γ , α , and N showed that small values of α , which indicate that the distance-dependent decrease in the inhibitory effect is gradual , and large values of Γ , which indicate that the maximum inhibition range of a primordium is large , are also important for the formation of tetrastichous four-cycle patterns ( Fig 9 , S5 Fig ) ., All of these four-cycle patterns were found to be almost orthogonal and to have a sufficiently large ratio of successive plastochron times , thus fitting the criteria of orixate phyllotaxis ( Fig 8B , S7 Fig ) ., Furthermore , the plots of these patterns lay within the cloud of the data points of real orixate phyllotaxis , and therefore we concluded that they are orixate ., A typical example of such orixate patterns was obtained by simulation using the parameters A = 4.8 , B = 0.72 , Γ = 2 . 
8 , N = 1\/3 , and \u03b1 = 1 , and is presented as a contour map of the inhibitory field strength in Fig 10A , which clearly depicts orixate phyllotactic patterning ., Under this parameter condition , the inhibitory field strength on the SAM periphery was calculated to have a minimum close to the threshold at 0\u00b0 at the time of new primordium formation when the preceding primordia were placed at 0\u00b0 , 180\u00b0 , and \u00b190\u00b0 ( S8 Fig ) ., This landscape of the inhibitory field stabilizes the orixate arrangement of primordia ., In summary , our analysis demonstrated that orixate phyllotaxis comes into existence in the EDC2 system when the inhibitory power of each primordium increases at a late stage and slowly to a large maximum and when its effect decreases gradually with distance ., In the orixate phyllotactic patterns generated by EDC2 , the plastochron time oscillated between two values together with a cyclic change in the divergence angle: the longer plastochron was observed for the adjacent pairs of primordia with a divergence angle of \u00b190\u00b0 and the shorter plastochron was recorded for the opposite pairs with a divergence angle of \u00b1180\u00b0 ( Fig 10B , S1 Movie ) ., This relationship between the plastochron and the divergence angle agreed with the real linkage observed for the plastochron ratios and divergence angles in the winter buds of O . 
japonica ( Fig 4E ) ., Based on a comprehensive survey of the results of the computer simulations performed using EDC2 , we examined the distribution of various phyllotactic patterns and the possible relationships between them in the parameter space of EDC2 ( Figs 8A and 9 , S4 , S5 , S9 and S10 Figs ) ., Major phyllotactic patterns , such as the distichous , Fibonacci spiral , and decussate patterns , occupied large areas in the parameter space , and the Lucas spiral pattern occupied some areas ., Depending on the initial condition , the tricussate pattern also took a considerable fraction of the space ., In the parameter space , the distichous pattern adjoined the Fibonacci spiral pattern , while the Fibonacci spiral adjoined the distichous , Lucas spiral , decussate , and tricussate patterns ., The regions where the orixate pattern was generated were located next to the regions of the decussate , Fibonacci spiral , Lucas spiral , and\/or two-cycle alternate patterns ., This positional relationship suggests that orixate phyllotaxis is more closely related to the decussate and spiral patterns than it is to the distichous pattern ., The two-cycle patterns formed in a narrow parameter space next to the region of orixate phyllotaxis and had a divergence angle ratio of approximately 0.55 and a plastochron time ratio of approximately 0.2 ( Fig 8B , S6A Fig ) ; thus , they are similar to semi-decussate phyllotaxis , which is an alternate arrangement characterized by the oscillation of the divergence angle between 180° and 90° ( S6B Fig ) ., These semi-decussate-like patterns were not observed in the computer simulations performed using DC2 ( Fig 7B and 7C ) ; rather , they were produced only after its expansion into EDC2 ., The overall distributions of major phyllotactic patterns in the parameter space were compared between DC2 and EDC2 using color plots drawn from the results of simulations conducted for EDC2 with various settings of the inhibition range parameter Γ and the inhibitory power change parameter A ( Fig 9 ) ., In these simulations , large A values accelerated the age-dependent increase in the inhibitory power of each primordium; if A is sufficiently large , the inhibitory power is almost constant during primordium development and the EDC2 system is almost the same as DC2 ., Therefore , the colors along the top side of each panel of Fig 9 , where A was set to 20 , which is a high value , show the phyllotactic pattern distribution against Γ in DC2 , while the colors over the two-dimensional panel show the phyllotactic pattern distribution against Γ and A in EDC2 ., The order of distribution of the distichous , Fibonacci spiral , and decussate patterns was unaffected by decreasing A and , thus , did not differ between DC2 and EDC2 ., As reported in the previous study of DC2 17 , on the top side of Fig 9 , the stable pattern changed from distichous to Fibonacci spiral , and then turned into decussate as Γ decreased ., In the parameter space of EDC2 , this order of distribution of major phyllotactic patterns was not affected much by decreasing A to moderate values; however , when A was further decreased , the orixate pattern appeared in the region of the Fibonacci spiral ( Fig 9 , S10 Fig ) ., As A decreased , the range of Γ that produced a Fibonacci
spiral became wider and the transition zone between the distichous and Fibonacci spiral patterns , where the divergence angle gradually changed from 180° to 137.5° , became narrower ( Fig 9 ) ., This result indicated that Fibonacci spiral phyllotaxis is more dominant when assuming a delay in the primordial age-dependent increase in the inhibitory power ., Orixate phyllotaxis is a special kind of alternate phyllotaxis with orthogonal tetrastichy resulting from a four-cycle change in the divergence angle in the order of approximately 180° , 90° , −180° ( 180° ) , and −90° ( 270° ) ; this phyllotaxis occurs in a few plant species across distant taxa 29–32 ., In the present study , we investigated a possible theoretical framework behind this minor but interesting phyllotaxis on the basis of the inhibitory field models proposed by Douady and Couder 16 , 17 , which were shown to give a simple and robust explanation for the self-organization process of major phyllotactic patterns by assuming that each existing leaf primordium emits a constant level of inhibitory power against the formation of a new primordium and that its effect decreases with distance from the primordium ., Re-examination of the original versions of Douady and Couder\u2019s models ( DC1 and DC2 ) via exhaustive computer simulations revealed that they do not generate the orixate pattern at any parameter condition ., The inability of DC models to produce orixate phyllotaxis prompted us to expand them to account for a more comprehensive generation of phyllotactic patterns ., In an attempt to modify DC models , we introduced a temporal change in the inhibitory power during primordium development , instead of using a constant inhibitory power ., Such changes of the inhibitory power were partly considered in several previous studies ., Douady and Couder assessed the effects of \u201cthe growth of the element\u2019s size\u201d , which is equivalent to the
primordial age-dependent increase in the inhibitory power and found that it stabilizes whorled phyllotactic patterns 17 ., Smith et al . assumed in their mathematical model that the inhibitory power of each primordium decays exponentially with age and stated that this decay promoted phyllotactic pattern formation de novo , as well as pattern transition , and allowed the maintenance of patterns for wider ranges of parameters 9 ., A DC1-based model equipped with a primordial age-dependent change in the inhibitory power was also used to investigate floral organ arrangement 35 , 36 ., In these studies , however , temporal changes in the inhibitory power were examined under limited ranges of parameters focusing on particular aspects of phyllotactic patterning , and the possibility of the gener","headings":"Introduction, Material, methods, and models, Results, Discussion","abstract":"Plant leaves are arranged around the stem in a beautiful geometry that is called phyllotaxis ., In the majority of plants , phyllotaxis exhibits a distichous , Fibonacci spiral , decussate , or tricussate pattern ., To explain the regularity and limited variety of phyllotactic patterns , many theoretical models have been proposed , mostly based on the notion that a repulsive interaction between leaf primordia determines the position of primordium initiation ., Among them , particularly notable are the two models of Douady and Couder ( alternate-specific form , DC1; more generalized form , DC2 ) , the key assumptions of which are that each leaf primordium emits a constant power that inhibits new primordium formation and that this inhibitory effect decreases with distance ., It was previously demonstrated by computer simulations that any major type of phyllotaxis can occur as a self-organizing stable pattern in the framework of DC models ., However , several phyllotactic types remain unaddressed ., An interesting example is orixate phyllotaxis , which has a tetrastichous alternate pattern with 
periodic repetition of a sequence of different divergence angles: 180\u00b0 , 90\u00b0 , \u2212180\u00b0 , and \u221290\u00b0 ., Although the term orixate phyllotaxis was derived from Orixa japonica , this type is observed in several distant taxa , suggesting that it may reflect some aspects of a common mechanism of phyllotactic patterning ., Here we examined DC models regarding the ability to produce orixate phyllotaxis and found that model expansion via the introduction of primordial age-dependent changes of the inhibitory power is absolutely necessary for the establishment of orixate phyllotaxis ., The orixate patterns generated by the expanded version of DC2 ( EDC2 ) were shown to share morphological details with real orixate phyllotaxis ., Furthermore , the simulation results obtained using EDC2 fitted better the natural distribution of phyllotactic patterns than did those obtained using the previous models ., Our findings imply that changing the inhibitory power is generally an important component of the phyllotactic patterning mechanism .","summary":"Phyllotaxis , the beautiful geometry of plant-leaf arrangement around the stem , has long attracted the attention of researchers of biological-pattern formation ., Many mathematical models , as typified by those of Douady and Couder ( alternate-specific form , DC1; more generalized form , DC2 ) , have been proposed for phyllotactic patterning , mostly based on the notion that a repulsive interaction between leaf primordia spatially regulates primordium initiation ., In the framework of DC models , which assume that each primordium emits a constant power that inhibits new primordium formation and that this inhibitory effect decreases with distance , the major types ( but not all types ) of phyllotaxis can occur as stable patterns ., Orixate phyllotaxis , which has a tetrastichous alternate pattern with a four-cycle sequence of the divergence angle , is an interesting example of an unaddressed phyllotaxis type ., 
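The inhibitory-field idea behind these models can be sketched in a few lines of Python. This is a toy illustration, not the authors' simulation code: the sigmoidal age-dependent inhibitory power F(t) = 1 / (1 + e^(−A(t−B))) follows the expanded models described above, while the distance decay E(x) = x^(−α), the single-primordium geometry, and the numeric values (A = 4.8, B = 0.72, α = 1) are assumptions made only for demonstration:

```python
import math

def inhibition(theta, primordia, R0=1.0, d0=1.0, alpha=1.0, A=4.8, B=0.72):
    """Inhibitory field strength I(theta) on the meristem circle (radius R0):
    I(theta) = sum over primordia of E(d_m(theta)/d0) * F(t_m), with a
    sigmoidal age factor F and an assumed power-law distance decay E."""
    total = 0.0
    for r, phi, age in primordia:  # each primordium: (radius, angle, age)
        dx = R0 * math.cos(theta) - r * math.cos(phi)
        dy = R0 * math.sin(theta) - r * math.sin(phi)
        d = math.hypot(dx, dy)
        E = (d / d0) ** (-alpha)                    # assumed distance decay
        F = 1.0 / (1.0 + math.exp(-A * (age - B)))  # age-dependent power
        total += E * F
    return total

# A single mature primordium at angle 0, just outside the circle
primordia = [(1.5, 0.0, 3.0)]
grid = [math.radians(k) for k in range(360)]
# The next primordium arises where the total inhibition is weakest
theta_new = min(grid, key=lambda th: inhibition(th, primordia))
```

With one existing primordium the inhibition minimum lies diametrically opposite it, reproducing the basic repulsive logic from which the distichous, spiral, and (with age-dependent power) orixate patterns self-organize.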
Here , we examined DC models regarding the ability to produce orixate phyllotaxis and found that model expansion by introducing primordial age-dependent changes of the inhibitory power is absolutely necessary for the establishment of orixate phyllotaxis ., The simulation results obtained using the expanded version of DC2 ( EDC2 ) fitted well the natural distribution of phyllotactic patterns ., Our findings imply that changing the inhibitory power is generally an important component of the phyllotactic patterning mechanism .","keywords":"plant anatomy, buds, computerized simulations, mathematical models, hormones, plant science, plant hormones, microscopy, plants, flowering plants, research and analysis methods, computer and information sciences, mathematical and statistical techniques, scanning electron microscopy, leaves, biochemistry, plant biochemistry, computer modeling, eukaryota, electron microscopy, biology and life sciences, auxins, organisms","toc":null} +{"Unnamed: 0":1388,"id":"journal.pcbi.1006859","year":2019,"title":"Conformational ensemble of native \u03b1-synuclein in solution as determined by short-distance crosslinking constraint-guided discrete molecular dynamics simulations","sections":"\u03b1-Synuclein is involved in the pathogenesis of misfolding-related neurodegenerative diseases , in particular Parkinson\u2019s disease 1 , 2 ., A misfolding event leads to the formation of oligomers which are believed to result in cell toxicity and which eventually lead to the death of neuronal cells 3 ., \u03b1-Synuclein is thought to interact with lipid vesicles in vivo 4 and the toxicity is thought to be mediated via membrane disruption by misfolded oligomers 5 ., Moreover , a prion-like spread of the pathology via the conversion of native \u03b1-synuclein molecules by toxic oligomers has been suggested 6 ., Native \u03b1-synuclein is considered to be an intrinsically disordered protein , although there is evidence that some globular structure exists in 
solution , which may serve as a basis for understanding the mis-folding and oligomerization pathways ., A number of biophysical methods , such as NMR , EPR , FRET , and SAXS\u2014in combination with computational methods\u2014have been applied to the study of intrinsically disordered proteins , including the structure of \u03b1-synuclein in solution 7\u201310 ., In all of these cases , even a limited amount of experimental structural data was helpful in the characterization of the conformational ensemble of \u03b1-synuclein in solution ., Recently , we developed a method for determination of protein structures , termed short-distance crosslinking constraint-guided discrete molecular dynamics simulations ( CL-DMD ) , where the folding process is influenced by short-distance experimental constraints which are incorporated into the DMD force field 11 ., Adding constraints to DMD simulations results in a reduction of the possible conformational space and allows the software to achieve protein folding on a practical time scale ., We have tested this approach on well-structured proteins including myoglobin and FKBP and have observed clear separation of low-energy clusters and a narrow distribution of structures within the clusters ., The conformational flexibility of intrinsically disordered proteins , such as \u03b1-synuclein , brings additional challenges to the computational process 12 ., In cases like this , proteins exist as a collection of inter-converting conformational states , and crosslinking data represents multiple conformations of a protein rather than a single structure ., In addition , recent research indicates that traditional force fields with their parametrization are not ideal for providing an accurate description of disordered proteins , and tend to produce more compact structures 13 ., Recently research has been focused on improving traditional state-of-the-art force fields and their ability to predict structures of disordered proteins without losing 
their accuracy for structured proteins 14 ., In this work we use the Medusa force field 15\u201317; the force field utilized in DMD simulations is discretized to mimic continuous potentials ., DMD uses a united atom representation for the protein where all heavy atoms and polar hydrogens are explicitly accounted for ., The solvation energy is described in terms of the discretized Lazaridis-Karplus implicit solvation model 18 , and inter-atomic interactions , such as van der Waals and electrostatics , are approximated by a series of multistep square-well potentials ., Other additional potentials , such as pair-wise distance constraints 19 , 20 and solvent accessibility information 21 , 22 , can also be readily integrated ., During CL-DMD simulations there are no continuous forces that would drive the atoms to satisfy all constraints; rather , conformational ensembles which satisfy an optimal number of the constraints are generated ., This , to some degree , naturally resolves conflicting experimental constraints ., Thus , CL-DMD simulations are a viable computational platform for the structural analysis of intrinsically disordered proteins 23 in general , and \u03b1-synuclein in particular ., Here , we used the CL-DMD approach 11 to determine conformational ensembles of the \u03b1-synuclein protein in solution ., During this process , \u03b1-synuclein was crosslinked with a panel of short-range crosslinkers , crosslinked proteins were enzymatically digested , crosslinked residues were determined by LC-MS\/MS analysis , and the resulting data on inter-residue distances were introduced into the DMD force field as external constraints ., To experimentally validate the predicted structures , we analyzed \u03b1-synuclein using surface modification ( SM ) , circular dichroism ( CD ) , hydrogen-deuterium exchange ( HDX ) , and long-distance crosslinking ( LD-CL ) ., \u03b1-Synuclein was crosslinked with a panel of short-range reagents: azido-benzoic acid succinimide ( ABAS-12C6\/13C6 )
, succinimidyl 4 , 4-azipentanoate ( SDA ) 24 , triazidotriazirine ( TATA-12C3\/13C3 ) 25 , and 1-ethyl-3- ( 3-dimethylaminopropyl ) carbodiimide ( EDC ) 26 ., ABAS and SDA are hetero-bifunctional amino group-reactive and photo-reactive reagents , TATA is a homo-bifunctional photo-reactive reagent , and EDC is a zero-length carboxyl-to-amino group crosslinker ., Crosslinked proteins were digested with proteinase K or trypsin proteolytic enzymes , and the digest was analyzed by LC-MS\/MS to identify crosslinked peptides ( S1 Table ) ., We used an equimolar mixture of 14N- and 15N-metabolically labeled \u03b1-synuclein to exclude potential inter-protein crosslinks from the analysis and to facilitate the assignment of crosslinked residues based on the number of nitrogen atoms in the crosslinked peptides and the MS\/MS fragments 27 ., The distances between crosslinked residues are based on the length of the crosslinker reagents , and were introduced as constraints into the DMD potentials ( see the section below and 11 for additional details ) ., A total of 30 crosslinking constraints were used in these DMD simulations ( S1 Table ) ., In addition , \u03b1-synuclein was characterized by top-down ECD- and UVPD-FTMS HDX and CD to determine the secondary-structure content ( Fig 1 and S1 Fig ) ., Quantitative differential surface modification experiments were performed with and without 8 M urea to determine the characteristics of the residues as exposed or buried ( S2 Table ) ., LD-CL was used to estimate the overall protein topology ( S3 Table ) ., \u03b1-Synuclein was expressed using a pET21a vector provided by Dr . Carol Ladner of the University of Alberta ., The protein was expressed in E . coli BL21 ( DE3 ) bacteria and was purified as in 25 ., Briefly , the protein was overexpressed with 1 mM IPTG in 1 L LB cultures of BL21 ( DE3 ) E .
coli for 4 hours at 30\u00b0C ., Cells were lysed with a French press and the lysate was heated at 70\u00b0C for 10 minutes and then centrifuged at 14000 g for 30 minutes ., The soluble fraction was precipitated for 1 hour in 2 . 1 M ( NH4 ) 2SO4 ., \u03b1-Synuclein was then purified by fast protein liquid chromatography on a Mono Q 4 . 6\/100 SAX column ( GE Life Science ) , using a gradient from 50\u2013500 mM NaCl , in 50 mM Tris at pH 8 . 0 ., Elution fractions containing \u03b1-synuclein were further purified by size exclusion on a Superdex 200 30\/100 GL column ( GE Life Science ) ., For the expression of metabolically labeled 15N \u03b1-synuclein , 1L of M9 Minimal media was prepared with 1 g\/L 15NH4Cl ( Cambridge Isotopes ) as the sole source of nitrogen ., BL21 ( DE3 ) cells were grown overnight in 50mL of this media , then seeded into 1 L , grown to an A600 of approximately 0 . 8 , and induced using 1 mM IPTG ., After expression overnight at 30\u00b0C , 15N \u03b1-synuclein was purified as described above ., Unlabeled and 15N metabolically-labeled \u03b1-synuclein were mixed in a 1:1 ratio at a concentration of 20 \u03bcM in 50 mM Na2HPO4 and incubated overnight at room temperature prior to crosslinking ., \u03b1-synuclein aliquots of 38 \u03bcL were then crosslinked using either 1 mM of the ABAS-12C6\/13C6 crosslinker ( Creative Molecules ) or 30 mM of the EDC crosslinker ., ABAS crosslinking reaction mixtures were incubated for 10 minutes in the dark to allow the NHS-ester reaction to take place , followed by 10 minutes of UV irradiation under a 25 W UV lamp ( Model UVGL-58 Mineralight lamp , UVG ) with a 254 nm wavelength filter ., ABAS reaction mixtures were quenched with 10 mM ammonia bicarbonate ., EDC reaction mixtures were incubated for 20 minutes ., A portion of each crosslinking reaction mixture was checked by SDS-PAGE gel to see the extent of potential intermolecular crosslinked products ., Aliquots were subsequently split and digested with 
either trypsin or proteinase K at an enzyme: protein ratio of 1:10 ., Digestion was quenched using a final concentration of 10 mM AEBSF ( ApexBio ) , and samples were then acidified with formic acid for analysis by mass spectrometry ., For TATA , 100 \u03bcM synuclein in 50 mM sodium phosphate buffer was reacted with 0 . 5 mM TATA-12C3\/13C3 ( Creative Molecules ) ., Samples were incubated for 5 minutes with 254 nm UV light from the same lamp as was used for the ABAS reactions ., Samples were then split and digested with either proteinase K or trypsin at an enzyme: protein ratio of 1:20 ., For SDA reactions , 20 \u03bcL of 1mg\/mL \u03b1-synuclein was crosslinked using 1 mM SDA ( Creative Molecules , Inc . ) ., Aliquots were incubated for 15 minutes in the dark prior to incubation under the same UV lamp as used previously for ABAS reactions but changing the wavelength to 366 nm ., Samples were then run on an SDS-PAGE gel , and bands representing the \u03b1-synuclein monomer were excised and subjected to in-gel trypsin digestion ., After in-gel digestion , samples were acidified using formic acid prior to mass spectrometric analysis ., The CBDPS crosslinking reaction mixture consisted of 238 \u03bcL of 50 \u03bcM \u03b1-synuclein , with 0 . 
12 mM CBDPS ., Samples were split and digested with either proteinase K or trypsin at an enzyme: protein ratio of 1:10 ., Digests were quenched with 10 mM AEBSF and samples were enriched using monomeric avidin beads ( Thermo Scientific ) ., Enriched samples were acidified for mass spectrometric analysis using formic acid ., Mass spectrometric analysis was then performed using a nano-HPLC system ( Easy-nLC II , ThermoFisher Scientific ) , coupled to the ESI-source of an LTQ Orbitrap Velos or Fusion ( ThermoFisher Scientific ) , using conditions described previously 11 ., Briefly , samples were injected onto a 100 \u03bcm ID , 360 \u03bcm OD trap column packed with Magic C18AQ ( Bruker-Michrom , Auburn , CA ) , 100 \u00c5 , 5 \u03bcm pore size ( prepared in-house ) and desalted by washing with Solvent A ( 2% acetonitrile:98% water , both 0 . 1% formic acid ( FA ) ) ., Peptides were separated with a 60-min gradient ( 0\u201360 min: 4\u201340% solvent B ( 90% acetonitrile , 10% water , 0 . 1% FA ) , 60\u201362 min: 40\u201380% B , 62\u201370 min: 80% B ) , on a 75 \u03bcm ID , 360 \u03bcm OD analytical column packed with Magic C18AQ 100 \u00c5 , 5 \u03bcm pore size ( prepared in-house ) , with IntegraFrit ( New Objective Inc . , Woburn , MA ) and equilibrated with solvent A . 
MS data were acquired using a data-dependent method ., The data dependent acquisition also utilized dynamic exclusion , with an exclusion window of 10 ppm and exclusion duration of 60 seconds ., MS and MS\/MS events used 60000- and 30000-resolution FTMS scans , respectively , with a scan range of m\/z 400\u20132000 in the MS scan ., For MS\/MS , the CID collision energy was set to 35% ., Data were analyzed using the 14N15N DXMSMS Match program from the ICC-CLASS software package 27 ., SDA crosslinking data was analyzed using Kojak 28 and DXMSMS Match ., For scoring and assignment of the MS\/MS spectra , b- and y-ions were primarily used , with additional confirmation from CID-cleavage of the crosslinker where this was available ., Chemical surface modification with pyridine carboxylic acid N-hydroxysuccinimide ester ( PCAS ) ( Creative Molecules ) was performed as described previously 29 ., Briefly , \u03b1-synuclein was prepared at 50 \u03bcM in 8 M urea in PBS , pH 7 . 4 ( unfolded state ) , or in only PBS ( folded state ) ., Either the light or the heavy form of the 13C-isotopically-coded reagent ( PCAS-12C6 or PCAS-13C6 ) was then added to give a final concentration of 10 mM ., Reaction mixtures were incubated for 30 minutes and quenched with 50 mM ammonium bicarbonate ., Samples were then mixed at a 1:1 ratio , combining folded ( PCAS-12C ) with unfolded ( PCAS-13C ) samples , as well as in reverse as a control ., Samples were acidified with 150 mM acetic acid and digested with pepsin at a 20:1 protein: enzyme ratio overnight at 37\u00b0C ., After digestion samples were prepared for mass spectrometry analysis using C18 zip-tips ( Millipore ) ., Zip-tips were equilibrated with 30 \u03bcL 0 . 1% TFA , sample was introduced , then washed with 30 \u03bcL 0 . 1% TFA and eluted with 2 \u03bcL of 0 . 
1% formic acid\/50% acetonitrile ., Samples were analyzed by LC-MS\/MS as described above ., Top-down ECD-FTMS hydrogen\/deuterium exchange was performed as described previously 30 ., Briefly , protein solution and D2O from separate syringes were continuously mixed in a 1:4 ratio ( 80% D2O final ) via a three-way tee which was connected to a 100 \u03bcm x 5 cm capillary , providing a labeling time of 2 s ., The outflow from this capillary was mixed with a quenching solution containing 0 . 4% formic acid in 80% D2O from the third syringe via a second three-way tee and injected into a Bruker 12 T Apex-Qe hybrid Fourier Transform mass spectrometer , equipped with an Apollo II electrospray source ., In-cell ECD fragmentation experiments were performed using a cathode filament current of 1 . 2 A and a grid potential of 12 V . Approximately 800 scans were accumulated over the m\/z range 200\u20132000 , corresponding to an acquisition time of approximately 20 minutes for each ECD spectrum ., Deuteration levels of the amino acid residues were determined using the HDX Match program 31 ( S1 Fig ) ., Synuclein UVPD spectra were collected on a Thermo Scientific Orbitrap Fusion Lumos Tribrid mass spectrometer equipped with a 2 . 5-kHz repetition rate ( 0 . 4 ms\/pulse ) 213 nm Nd:YAG ( neodymium-doped yttrium aluminum garnet ) laser ( CryLas GmbH ) with pulse energy of 1 . 5 \u00b1 0 . 2 \u03bcJ\/pulse and output power of 3 . 75 \u00b1 0 . 
5 mW for UVPD ., The solution was exchanged with deuterium using the same three-way tee setup , although in this case a 50 \u03bcm x 7 cm capillary provided a labeling time of ~1 s ., Spectra were acquired for 8 or 12 ms , and resultant spectra were averaged and used for the data analysis with the HDX Match program as above ., CD spectra were recorded on a Jasco J-715 spectrometer under a stream of nitrogen ., The content of \u03b1-helical and \u03b2-sheet structures was calculated using the BeStSel web server 32 ., Crosslink-guided discrete molecular dynamics ( CL-DMD ) simulations were performed according to the protocol described in our previous work 11 ., Briefly , discrete molecular dynamics ( DMD ) is a physically based and computationally efficient approach for molecular dynamics simulations of biological systems 16 , 17 ., In DMD , continuous inter-atom interaction potentials are replaced with their discretized analogs , allowing the representation of interactions in the system as a series of collision events where atoms instantaneously exchange their momenta according to conservation laws ., This approach significantly optimizes computations by replacing integration of the motion equations at fixed time steps with the solution of conservation-law equations at event-based time points 33 ., In order to incorporate experimental data for inter-residue distances between corresponding atoms into DMD simulations , we introduced a series of well-shaped potentials that energetically penalize atoms whose interatomic distances do not satisfy experimentally determined inter-atom proximity constraints ., The widths of these potentials are determined by the cross-linker spacer length and side chain flexibility 11 ., Starting from the completely unfolded structure of the \u03b1-synuclein molecule , we performed all-atom Replica Exchange ( REX ) 34 simulations of the protein in which 24 replicas with temperatures equally distributed in the range from 0 . 375 to 0 .
605 kcal\/ ( mol kB ) were run for 6 x 10^6 DMD time steps ( S2 Fig ) ., The simulation temperatures of the replicates were periodically exchanged according to the Metropolis algorithm , allowing the protein to overcome local energetic barriers and increasing conformational sampling ., During the simulations we monitored the convergence of the system energy distribution and of the specific heat curve , calculated by the Weighted Histogram Analysis Method ( WHAM ) 35 , which was used as the indicator of system equilibration ., We discarded the first 2 x 10^6 time steps of system equilibration during the analysis ., Next , we ranked all of the structures among all of the trajectories , and selected the ones with the lowest 10% of the energies , as determined by the DMD Medusa force field 36 ., These structures were then clustered using the GROMACS 37 distance-based algorithm described by Daura et al . 38 ., It uses the root-mean-square deviation ( RMSD ) between backbone C\u03b1 atoms as a measure of structural similarity between the cluster representatives ., An RMSD cut-off was chosen to correspond to the peak of the distribution of pair-wise RMSDs for all of the low-energy structures ., Because the energies of the resulting centroids representative of the clusters are very close to each other ( S3 Fig ) and picking one of them would potentially introduce a bias related to our scoring energy function , we presented them all as our predicted models of the \u03b1-synuclein globular structure ., We then calculated the root-mean-square deviation of atomic positions within each cluster and used this as a measure of fluctuations of the structures of the corresponding centroids ( Figs 2 and 3 ) ., In order to obtain information on the global folding of \u03b1-synuclein , we performed clustering analysis on the lowest-energy structures obtained during CL-DMD simulations ., In summary , we have determined de novo the conformational ensemble of native \u03b1-synuclein in solution by short-distance
crosslinking constraint-guided DMD simulations , and validated this structure with experimental data from CD , HDX , SM , and LD-CL experiments ., The predicted conformational ensemble is represented by rather compact globular conformations with transient secondary structure elements ., The obtained structure can serve as a starting point for understanding the mis-folding and oligomerization of \u03b1-synuclein .","headings":"Introduction, Methods, Results and discussion","abstract":"Combining structural proteomics experimental data with computational methods is a powerful tool for protein structure prediction ., Here , we apply a recently-developed approach for de novo protein structure determination based on the incorporation of short-distance crosslinking data as constraints in discrete molecular dynamics simulations ( CL-DMD ) to the determination of the conformational ensemble of the intrinsically disordered protein \u03b1-synuclein in solution ., The predicted structures were in agreement with hydrogen-deuterium exchange , circular dichroism , surface modification , and long-distance crosslinking data ., We found that \u03b1-synuclein is present in solution as an ensemble of rather compact globular conformations with distinct topology and inter-residue contacts , which is well-represented by movements of the large loops and formation of a few transient secondary structure elements ., Non-amyloid component and C-terminal regions were consistently found to contain \u03b2-structure elements and hairpins .","summary":"As the population ages , neurodegenerative diseases such as Parkinson\u2019s disease will become an increasing problem in many countries ., Aggregation of the protein \u03b1-synuclein is the primary cause of Parkinson\u2019s disease , but there is still a dearth of structural information pertaining to the native , non-aggregating form of this protein ., A better understanding of the structural state of the native protein may prove useful for the design
of new therapeutics to combat this disease ., In order to obtain more structural information on this protein , we have recently modelled the native \u03b1-synuclein protein ., These models were generated using a novel approach which combines protein crosslinking and discrete molecular dynamics simulations ., We have found that the \u03b1-synuclein protein can adopt several shapes , all with a similar topology , resembling a three fingered closed claw ., A region of the protein important for aggregation was found to be protected from the surrounding biological environment in these conformations , and the stabilization of these structures may be a fruitful avenue for future drug research into mitigating the cause and effect of Parkinson\u2019s disease .","keywords":"chemical bonding, molecular dynamics, protein structure prediction, protein structure, intrinsically disordered proteins, physical chemistry, protein structure determination, proteins, chemistry, cross-linking, molecular biology, protein structure comparison, biochemistry, biochemical simulations, biology and life sciences, physical sciences, computational chemistry, computational biology, macromolecular structure analysis","toc":null} +{"Unnamed: 0":421,"id":"journal.pntd.0007577","year":2019,"title":"Kankanet: An artificial neural network-based object detection smartphone application and mobile microscope as a point-of-care diagnostic aid for soil-transmitted helminthiases","sections":"Soil-transmitted helminths ( STH ) such as Ascaris lumbricoides , hookworm , and Trichuris trichiura affect more than a billion people worldwide 1\u20133 ., However , due to lack of access to fecal processing materials , diagnostic equipment , and trained personnel for diagnosis , the mainstay of STH control remains mass administration of antihelminthic drugs 4 ., To diagnose STH in residents of rural areas , the present standard is the Kato-Katz technique ( estimated sensitivity of 0 . 970 for A . lumbricoides , 0 . 
650 for hookworm , and 0 . 910 for T . trichiura; estimated specificity of 0 . 960 for A . lumbricoides , 0 . 940 for hookworm , and 0 . 940 for T . trichiura ) 5 ., However , this method is time-sensitive due to rapid degeneration of hookworm eggs 5 ., Other methods , including fecal flotation through FLOTAC and mini-FLOTAC , still have higher sensitivity ( 0 . 440 ) than direct fecal examination ( 0 . 360 ) , but require centrifugation equipment , which is expensive and difficult to transport 6 ., Multiplex quantitative PCR analysis for these three species is a high-sensitivity and high-specificity technique ( 0 . 870\u20131 . 00 and 0 . 830\u20131 . 00 , respectively ) , but can only be performed with expensive laboratory equipment 7 , 8 ., Spontaneous sedimentation technique in tube ( SSTT ) analysis has been found in preliminary studies to be non-inferior to Kato-Katz for A . lumbricoides , T . trichiura , and hookworm 9 , 10 ., Since it requires no special equipment and few materials , it has the potential to be a cost-effective stool sample processing method in the field ., Mass drug administration campaigns are the prevailing strategy employed to control high rates of STH ., Such campaigns , however , are focused on treating children and do not necessarily address the high infection prevalence rates of STH in adults , which in turn may contribute to the high reinfection rates 11 , 12 ., Technology that facilitates point-of-care diagnosis could enable mass drug administration programs to screen adults for treatment , monitor program efficacy , aid research , and map STH prevalence ., In areas close to STH elimination , such a tool could facilitate a test-and-treat model for STH control ., One avenue for point-of-care diagnostic equipment is smartphone microscopy ., Numerous papers have already demonstrated the viability of using smartphones 13\u201315 and smartphone-compatible microscopy attachments ( USB Video Class , or UVC ) 16 as cheap point-of-care diagnostic
tools ., Studies have tried direct imaging , as with classical parasitological diagnosis 17 , fluorescent labeling 14 , and digital image processing algorithms to aid diagnosis 18 ., To address the need for trained parasitologists to make the STH diagnosis , this study investigated artificial neural network-based technology ( ANN ) ., ANN , a framework from machine learning , a subfield of artificial intelligence , has seen a rapid explosion in its range of applications , from object detection to speech recognition to translation ., Unlike traditional software , which relies on a set of human-written rules for image classification , a method explored in other studies 19 , ANN image processing stacks thousands of images together and uses backpropagation , a recursive algorithm , to create its own rules for classifying images ., A previous study has applied ANN-based systems to diagnostic microscopy of STH with moderate sensitivity , using a device of comparable price to a smartphone to image samples and applying a commercially available artificial intelligence algorithm ( Web Microscope ) to classify the samples ., However , such a device requires an internet connection to function and was only validated on 13 samples 20 , 21 ., Another study has created and patented an ANN-based system to identify T .
trichiura based on a small dataset of sample images ( n<100 ) 22 ., However , there is no precedent in current literature for extensive ( n>1 , 000 ) ANN-based object detection system training for multiple STH species , nor use in smartphones , nor offline use ( disconnected from the internet ) , nor field testing in specimens ., This study developed such a system , named Kankanet from the English word network and the Malagasy word for intestinal worms , kankana ., This study also uses a smartphone-compatible mobile microscope , or UVC , with a simple X-Y slide stage ., As a proof-of-concept pilot study for ANN-assisted microscopy , this project aimed to address two key obstacles to point-of-care diagnosis of STH in rural Madagascar: ( 1 ) the lack of portable and inexpensive microscopy , and ( 2 ) the limited capacity and expertise to read microscope images ., This project evaluated the efficacy for diagnosis of three species of STH of ( 1 ) a UVC and ( 2 ) Kankanet , an object-detection ANN-based system deployed through smartphone application ., This study was a part of a larger study on the Assessment of Integrated Management for Intestinal Parasites control: study of the impact of routine mass treatment of Helminthiasis and identification of risk areas of transmission in two villages in the district of Ifanadiana , Madagascar ., This study has received institutional review board approval from the Stony Brook University ( ID: 874952\u201313 ) and the national ethics review board of Madagascar: Comit\u00e9 d\u2019\u00c9thique de la Recherche Biom\u00e9dicale Aupr\u00e8s du Minist\u00e8re de la Sant\u00e9 Publique de Madagascar ( 41-MSANP\/CERBM , June 8 , 2017 ) ., As a prospective study , data collection was planned before any diagnostic test was performed ., In accordance with cultural norms , consent was first required from the local leaders before engaging in any activities within their purview ., All participants received oral information about the study in 
Malagasy; written informed consent was obtained from adult participants or parents\/legal guardians for the children ., Since this study was meant to evaluate diagnostic methods and did not produce definitive results , no diagnostic results from this study were reported to the patients ., All inhabitants of the two study villages were given their annual dose of 400 mg albendazole one year before this study , and received another 400 mg albendazole dose within a month of the conclusion of the study by the national mass drug administration effort ., A unique identifier was assigned to each participant to allow grouping of analysis data for each patient ., All data was stored on an encrypted server , to which only investigators had access ., The two villages under study , Mangevo and Ambinanindranofotaka ( geographic coordinates: 21\u00b027S , 47\u00b025E and 21\u00b028S , 47\u00b024E ) , are rural villages situated on the edge of Ranomafana National Park , about 275 km south of Antananarivo , the capital of Madagascar ., Over 95% of households in Ambinanindranofotaka ( total population , n = 327 ) and Mangevo ( total population , n = 238 ) engage in subsistence farming and animal husbandry ., The villages , accessible only by 14 hours\u2019 worth of footpaths , are tucked between mountain ridges covered with secondary-growth rainforest ., The study was conducted between 8 Jun 2018 and 18 Jun 2018 ., All residents of each village were given a brief oral presentation about the public health importance , symptoms and prevention of STH; subjects above age 16 , the Madagascar cut-off age for adulthood , who gave voluntary consent to participate in the study were given containers and gloves to collect their own fecal samples ., Parents gave consent for their assenting children and collected their fecal samples ., One fecal sample from each participant was submitted between the hours of sunrise and sunset ., Samples were processed for analysis within 20 minutes of 
production by the participant ., Cognitively impaired subjects were excluded ., Each fecal sample produced three slides for microscopic analysis: ( 1 ) one slide was prepared according to the Kato-Katz ( KK ) technique from fresh stool; ( 2 ) one slide was prepared according to the spontaneous sedimentation technique in tube ( SSTT ) from 10% formalin-preserved stool; ( 3 ) one slide was prepared according to the Merthiolate-Iodine-Formaldehyde ( MIF ) technique from 10% formalin-preserved stool ., As a reference test , a modified gold standard was defined as any positive result ( at least one egg positively identified in a sample ) from standard microscopy by trained parasitologists using ( 1 ) KK , ( 2 ) SSTT , and ( 3 ) MIF techniques ., This measure was defined to increase the sensitivity of the reference test ., Intensity of infection ( measured in eggs\/gram ) of A . lumbricoides , T . trichiura , and hookworm was obtained by standard microscopy reading of KK slides , multiplying the egg count per slide by the standard coefficient of 24 ., The SSTT technique followed the standard protocol 23 ., A standard Android smartphone was attached to a UVC ( Magnification Endoscope , Jiusion Tech; Digital Microscope Stand , iTez ) for microscopic analysis of KK and SSTT slides in the field ( Fig 1 ) ., Clinical information and results from any other analyses of the fecal samples were not made available to slide readers during their analysis ., TensorFlow is an open-source machine learning framework developed by Google Brain ., Using the TensorFlow repository , this study developed Kankanet , an ANN-based object detection system built upon a Single Shot Detection meta-architecture and a MobileNet feature extractor , a convolutional neural network developed for mobile vision applications 24 , 25 ., Based on a dataset of 2 , 078 images of STH eggs , Kankanet was trained to recognize three STH species: A . lumbricoides , T .
trichiura , and hookworm 26 ., Of these , 597 egg pictures were taken with a standard microscope and 1 , 481 with the UVC ., The efficacy of Kankanet diagnosis was evaluated with a separate dataset of 186 images with a comparable distribution of species and imaging modalities ., The detailed breakdown of these image sets is shown in Table 1 , which reports percentage distributions by species and imaging modality to demonstrate concordance in image distribution between the training and evaluation sets ., The following hyperparameters were used: initial learning rate = 0 . 004; decay steps = 800720; decay factor = 0 . 95 , according to the default configuration used to train open-source models released online ., To improve the robustness of the model , the dataset was augmented using the default methods of random cropping and horizontal flipping ., The loss rate was monitored until it averaged less than 0 . 01 , as shown in Fig 2 , after which the model was frozen in a format suitable for use in a mobile application ., Based on this protocol , two models were trained: it took Model 1 around 81 epochs and Model 2 around 12 epochs , or iterations through the entire training dataset , to reach a loss rate of less than 0 . 
01 ., These models were then validated on randomly selected images from the evaluation image set ( n = 185 ) , none of which were included in the training set ., Once trained , these models analyze images in real time , project a bounding-box over each detected object , and display the name of the object detected , along with a confidence rating ( Fig 3 and Fig 4 ) ., The true readings of each image in the training and test image sets were determined by a trained parasitologist ., The Kankanet models were then used to read test set images: correctly identified eggs were considered true positives , objects incorrectly identified as eggs were considered false positives , undetected eggs were considered false negatives , and images with neither eggs nor detected objects were considered true negatives ., Evaluation of model sensitivity and specificity was performed with the following test image sets: The open-source TensorFlow library contains a demo Android application that includes an object-detection module ., Following the protocol for migrating this TensorFlow model to Android 27 , the original object detection model on the app was swapped out for the Kankanet model ., As per the original app , the threshold for reporting detected objects was set at 0 . 60 confidence ., Intended sample size was calculated based on June 2016 prevalence rates in Ifanadiana , Madagascar ( n = 574 ) : A . lumbricoides 71 . 3% ( 95% CI 67 . 7\u201375 . 1 ) ; T . trichiura 74 . 7% ( 95% CI 71 . 1\u201378 . 2 ) ; hookworm 33 . 1% ( 95% CI 29 . 2\u201336 . 9 ) 28 ., Following the calculations for a binary diagnostic test for the species with the lowest prevalence , hookworm , with a predicted test sensitivity of 90% and a 10% margin of error , the required sample size for adequate power was determined to be 115 ., For A . lumbricoides and T . 
trichiura , which have higher prevalence rates , a sample size of 115 gave sufficient power to support a sensitivity of 70% with a margin of error of 10% ., This study used a sample size of 113 fecal samples ., Readings from the UVC on KK and SSTT slides were compared against the modified gold standard , defined as any positive result from a standard microscopy reading of KK , SSTT , and MIF techniques by a parasitologist ., In SPSS , sensitivity and specificity of the UVC reading were calculated for each species with KK , SSTT , and combined analysis ., Separate analyses were performed for different infection intensities as classified according to WHO guidelines 4 ., Cohen\u2019s Kappa coefficient ( K ) was calculated for each type of fecal processing method to determine comparability to the modified gold standard reading ., Results from Kankanet interpretation were compared to visual interpretation of the same images by a trained parasitologist ., The two models were evaluated for sensitivity , specificity , positive predictive value , and negative predictive value using SPSS ., No samples had missing results from any of the tests run ., The numbers of positive samples identified by standard microscopy through the Kato-Katz , MIF , and SSTT preparation methods are shown in Table 2 , along with the composite reading of the three tests used as the modified gold standard in this study ., The number of samples of A . lumbricoides and T . trichiura at each intensity level is reported in Table 3 ., There were no participants heavily infected with T . trichiura ., Since it was not possible for the KK slides to be transported to the laboratory in time for quantification of hookworm eggs , we were unable to determine the intensity of infection in these cases ., The UVC performed best at imaging A . lumbricoides ( Tables 4 and 5 ) , demonstrating higher sensitivity in SSTT preparations ( 0 . 829 , 95% CI . 744- . 914 ) than in KK ( 0 . 
579 , 95% CI . 468- . 690 ) , and high specificity in both SSTT and KK ( 0 . 971 , 95% CI . 915\u20131 . 03; 0 . 971 , 95% CI . 915\u20131 . 03 ) ., These sensitivity numbers increased with increasing infection intensity ( Fig 5 ) ., UVC imaging of SSTT slide preparations of samples with AL showed a substantial level of concordance with the modified gold standard reading , which was obtained through standard microscopy ( K = 0 . 728 ) , and UVC imaging of KK slide preparations demonstrated moderate concordance with the modified gold standard ( K = 0 . 439 ) ., For T . trichiura , the UVC demonstrated low overall sensitivity through SSTT and KK ( 0 . 224 , 95% CI . 141- . 307; 0 . 235 , 95% CI . 151- . 319 , respectively ) , but high specificity ( 0 . 917 , 95% CI . 761\u20131 . 07; 1 , 95% CI 1 . 00\u20131 . 00 ) ., As infection intensity of T . trichiura increased , however , sensitivity increased ( Fig 5 ) ., According to WHO categories for infection intensity , sensitivity for low-intensity infections was 0 . 164 , which increased to 0 . 435 in moderate-intensity infections ., There was little agreement with the modified gold standard ( K = 0 . 038 for SSTT , K = 0 . 063 for KK ) ., The UVC also demonstrated low sensitivity to hookworm eggs in both SSTT ( 0 . 318 , 95% CI . 123- . 513 ) and KK ( 0 . 381 , 95% CI . 173- . 589 ) preparations ., Model 1 , which was trained and evaluated on microscope images only , demonstrated high sensitivity ( 1 . 00; 95% CI 1 . 00\u20131 . 00 ) and specificity ( 0 . 910; 95% CI 0 . 831\u20130 . 989 ) for T . trichiura , low sensitivity ( 0 . 571; 95% CI 0 . 423\u20130 . 719 ) and specificity ( 0 . 500; 95% CI 0 . 275\u20130 . 725 ) for A . lumbricoides , and low sensitivity ( 0 . 00; 95% CI 0 . 00\u20130 . 00 ) and specificity ( 0 . 800; 95% CI 0 . 693\u20130 . 
907 ) for hookworm ., Table 6 shows the full breakdown of sensitivity , specificity , positive predictive value , and negative predictive value of the different analyses performed by Model 1 and Model 2 ., Though Model 1 was also evaluated for its performance on UVC pictures of STH , it failed to recognize any eggs , and thus the results are not tabulated ., Model 2 was trained on images taken both with microscopes and with UVC , and was tested with both types of images ., It outperformed Model 1 on every parameter , with high sensitivity and specificity across the board for microscope images and for UVC images of A . lumbricoides and hookworm ., It performed poorly on UVC images of T . trichiura ( sensitivity 0 . 093 , 95% CI -0 . 138\u20130 . 304; specificity 0 . 969 , 95% CI 0 . 934\u20131 . 00 ) , but had moderate PPV and NPV values ( 0 . 667 and 0 . 800 , respectively ) ., This study found that UVC imaging of SSTT slides , though of low quality , could still be read by trained parasitologists with a high sensitivity ( 0 . 829 , 95% CI . 744- . 914 ) and specificity ( 0 . 971 , 95% CI . 915\u20131 . 03 ) in A . lumbricoides , which is comparable to literature estimates of KK sensitivity at 0 . 970 and specificity of 0 . 960 5 ., The UVC showed lower sensitivity for KK preparations ( 0 . 579 , 95% CI . 468- . 690 ) ., This UVC does not have sufficient image quality to be used for T . trichiura or hookworm diagnosis , as the eggs of these species have thinner and more translucent membranes ., Despite the high sensitivity of UVC imaging for A . lumbricoides , the 14% gap in sensitivity relative to standard microscopy needs to be closed before the method can feasibly be used in large-scale STH control efforts ., UVC\u2019s specificity of 0 . 971 ( 95% CI 0 . 915\u20131 . 03 ) surpasses the 0 . 960 specificity of standard KK microscopy ., Though currently shown to have insufficient sensitivity or specificity for use with T . 
trichiura or hookworm diagnosis , these limitations are believed to be related to the particular microscope peripheral used in this study ., This UVC achieved maximum magnification of approximately 215X at 600 px\/mm; its resolution was 640x480 pixels ., The magnification level with this peripheral is sufficient , as other studies have shown success with T . trichiura at magnification levels as low as 60X 29 ., However , for the purposes of STH imaging , improvement of the resolution and light source of this UVC may be necessary ., Another study successfully imaged T . trichiura and hookworm at a resolution of 2595x1944 pixels , substantially higher than the 640x480 of this peripheral 20 ., This UVC\u2019s light source comes from the same direction as the camera , rather than shining through the sample as in most microscopy , which may have reduced image quality and imaging ability ., Development of a proprietary microscope is another solution , which many other studies have employed: a mobile phone microscope developed by Coulibaly et al . has demonstrated similarly high sensitivity for Schistosoma mansoni ( 0 . 917; 95% CI 0 . 598\u20130 . 996 ) , Schistosoma haematobium ( 0 . 811; 95% CI 0 . 712\u20130 . 883 ) and Plasmodium falciparum ( 0 . 802 , 1 . 
00 ) 30 , 31; other studies that employ ball lenses or low-cost foldable chassis show slightly lower sensitivity\/specificity values 29 , 32 ., Independent development of a smartphone microscope could substantially improve the sensitivity and specificity of these devices to an acceptable level for healthcare use , that is , not inferior to standard microscopy , while simultaneously decreasing the cost per microscope ., However , the advantage of using a commercially available microscope is ease of access for rapid , large-scale implementation and feasibility for low-income rural areas with a heavy burden of STH ., In the context of these villages in rural Madagascar , where STH prevalence was measured in 1998 to be as high as 93 . 0% for A . lumbricoides , 55 . 0% for T . trichiura , and 27 . 0% for hookworm 33 , yet only school-aged children receive mass drug administration , a rule-in test with high specificity , which this UVC achieves , can be useful to reliably identify adults who would also require antihelminthics ., Another context in which this tool may be especially useful is in areas close to STH elimination , where it could reduce the amount of antihelminthics needed for STH control 34 ., Though Kankanet interpretation of UVC and microscope images yielded lower sensitivity than trained parasitologist readings of these images , Kankanet Model 2 still achieved high sensitivity for A . lumbricoides ( 0 . 696; 95% CI 0 . 625\u20130 . 767 ) and hookworm ( 0 . 714; 95% CI 0 . 401\u20131 . 027 ) on both microscope and UVC images ., Model 2 showed high sensitivity for T . trichiura in microscope images ( 1 . 00; 95% CI 1 . 00\u20131 . 00 ) , but low in UVC images ( 0 . 083; 95% CI -0 . 138\u20130 . 304 ) ., Model 1 achieved lower sensitivity and specificity for all species , and could not accurately interpret UVC images ., Model 2\u2019s overall sensitivity for A . lumbricoides , T . trichiura , and hookworm ( 0 . 696 , 0 . 154 , and 0 . 
714 , respectively ) may not seem very high at first ., However , these sensitivity results are for recognizing individual eggs ., Since an indication for treatment with antihelminthics requires only one egg per fecal sample slide to be positively identified , the real likelihood of this ANN-based object detection model giving an accurate reading is much higher than the per-egg sensitivity cited here ., For example , even in an A . lumbricoides infection at the middle of the low-intensity range ( 2500 eggs per gram ) , a slide would contain about 104 eggs ( 2500 divided by the coefficient of 24 ) , making the sensitivity of detecting infection in the slide nearly 1 . 00 ., The difference in sensitivity and specificity between the models can be explained by the differences in the image sets used for training ., Model 2 was trained with an image set more than twice the size of Model 1\u2019s; its image set also contained images from both UVC and standard microscopy modalities ., It was a robust model , accurately detecting STH in images with multiple examples of multiple species , despite being trained on an image set containing mostly A . 
lumbricoides ., It demonstrated a very low rate of false positives , considering the amount of debris typical of fecal samples ., The Kankanet models can be improved by developing a larger image dataset , exploring other object detection meta-architectures , and optimizing file size and computational requirements ., A greater number and more even distribution of images of parasite species would improve object detection model sensitivity ., Standard laboratory processing and diagnosis of STH is extremely time-consuming and expensive and hence not often practical for rural low-income communities ., As smartphone penetration will only increase in the coming years , medical technology should leverage smartphones as portable computational equipment , as the use and distribution of such software incurs no additional cost ., Because it attaches to a smartphone and requires no external power source other than the smartphone itself , the UVC is a suitable microscopy option for point-of-care diagnosis ., In addition , the smartphone application used in this study did not require internet access , unlike those of previous studies 20 ., UVC and Kankanet are cost-effective , with only the initial cost of $69 . 82 for the microscope and stage setup , as well as the negligible cost of fecal analysis reagents ., In the case of SSTT , only microscope slides and Lugol\u2019s iodine would be needed for fecal processing ., These initial costs are readily defrayed by the thousands of analyses performed with just one unit , the work-hours gained by timely treatment of STH and prevention of STH re-infection , and the reduction of unnecessary drug administration and concomitant drug resistance ., A detailed cost analysis comparing the cost of standard microscopy and the Kankanet system for 2-sample Kato-Katz testing of 10 villages in rural Madagascar ( estimated 3000 people total ) is shown in Table 7 ., Whereas standard microscopy ends up costing around 1 . 
33 USD per person tested , the Kankanet system costs around 0 . 56 USD per person tested ., ANN-based object detection systems such as the one introduced here can be useful for screening STH-endemic communities in the context of research , mass drug administrations and STH mapping programs ., In addition , Kankanet , rather than replacing human diagnosis , could be a useful diagnostic training aid for healthcare workers and field researchers ., With sustained use of such a tool , these workers may more quickly learn how to identify such eggs themselves ., Limitations of this study include the insufficient image quality of the UVC used , which prevented accurate imaging of T . trichiura and hookworm ., The Kankanet models used a dataset limited to two imaging modalities ( standard microscopy and UVC ) and to images of only three STH species; in addition , images for this dataset were taken only of samples prepared under KK conditions , so the efficacy of the system can be assessed only for those conditions ., We conclude , first , that parasitologist interpretation of UVC imaging of SSTT slides can be a field test comparable to standard KK microscopy for A . lumbricoides , and second , that ANN interpretation is a feasible avenue for development of a point-of-care diagnostic aid ., With 85 . 7% sensitivity and 87 . 5% specificity for A . lumbricoides , 100 . 0% sensitivity and 100 . 0% specificity for T . trichiura , and 66 . 7% sensitivity , 100 . 
0% specificity for hookworm , Kankanet Model 2 has demonstrated strong results in interpreting UVC images , even though it was trained with a limited proof-of-concept dataset ., We hope that continued expansion of the Kankanet image database , improved imaging technology , and improvement of machine learning technology will soon enable Kankanet to achieve rates comparable to those of parasitologists .","headings":"Introduction, Methods, Results, Discussion","abstract":"Endemic areas for soil-transmitted helminthiases often lack the tools and trained personnel necessary for point-of-care diagnosis ., This study pilots the use of smartphone microscopy and an artificial neural network-based ( ANN ) object detection application named Kankanet to address those two needs ., A smartphone was equipped with a USB Video Class ( UVC ) microscope attachment and Kankanet , which was trained to recognize eggs of Ascaris lumbricoides , Trichuris trichiura , and hookworm using a dataset of 2 , 078 images ., It was evaluated for interpretive accuracy based on 185 new images ., Fecal samples were processed using Kato-Katz ( KK ) , spontaneous sedimentation technique in tube ( SSTT ) , and Merthiolate-Iodine-Formaldehyde ( MIF ) techniques ., UVC imaging and ANN interpretation of these slides were compared to parasitologist interpretation of standard microscopy ., Relative to a gold standard defined as any positive result from parasitologist reading of KK , SSTT , and MIF preparations through standard microscopy , parasitologists reading UVC imaging of SSTT achieved a comparable sensitivity ( 82 . 9% ) and specificity ( 97 . 1% ) in A . lumbricoides to standard KK interpretation ( 97 . 0% sensitivity , 96 . 0% specificity ) ., The UVC could not accurately image T . trichiura or hookworm ., Though Kankanet interpretation was not quite as sensitive as parasitologist interpretation , it still achieved high sensitivity for A . lumbricoides and hookworm ( 69 . 6% and 71 . 
4% , respectively ) ., Kankanet showed high sensitivity for T . trichiura in microscope images ( 100 . 0% ) , but low in UVC images ( 50 . 0% ) ., The UVC achieved comparable sensitivity to standard microscopy with only A . lumbricoides ., With further improvement of image resolution and magnification , UVC shows promise as a point-of-care imaging tool ., In addition to smartphone microscopy , ANN-based object detection can be developed as a diagnostic aid ., Though trained with a limited dataset , Kankanet accurately interprets both standard microscope and low-quality UVC images ., Kankanet may achieve sensitivity comparable to parasitologists with continued expansion of the image database and improvement of machine learning technology .","summary":"For rainforest-enshrouded rural villages of Madagascar , soil-transmitted helminthiases are more the rule than the exception ., However , the microscopy equipment and lab technicians needed for diagnosis are a distance of several days\u2019 hike away ., We piloted a solution for these communities by leveraging resources the villages already had: a traveling team of local health care workers , and their personal Android smartphones ., We demonstrated that an inexpensive , commercially available microscope attachment for smartphones could rival the sensitivity and specificity of a regular microscope using standard field fecal sample processing techniques ., We also developed an artificial neural network-based object detection Android application , called Kankanet , based on open-source programming libraries ., Kankanet was used to detect eggs of the three most common soil-transmitted helminths: Ascaris lumbricoides , Trichuris trichiura , and hookworm ., We found Kankanet to be moderately sensitive and highly specific for both standard microscope images and low-quality smartphone microscope images ., This proof-of-concept study demonstrates the diagnostic capabilities of artificial neural network-based object detection 
systems ., Since the programming frameworks used were all open-source and user-friendly even for computer science laymen , artificial neural network-based object detection shows strong potential for development of low-cost , high-impact diagnostic aids essential to health care and field research in resource-limited communities .","keywords":"invertebrates, medicine and health sciences, engineering and technology, helminths, tropical diseases, hookworms, geographical locations, parasitic diseases, animals, cell phones, neuroscience, artificial neural networks, ascaris, ascaris lumbricoides, pharmaceutics, artificial intelligence, computational neuroscience, drug administration, neglected tropical diseases, africa, computer and information sciences, madagascar, communication equipment, people and places, helminth infections, eukaryota, equipment, nematoda, biology and life sciences, drug therapy, soil-transmitted helminthiases, computational biology, organisms","toc":null} +{"Unnamed: 0":1519,"id":"journal.pcbi.1000300","year":2009,"title":"Alu Exonization Events Reveal Features Required for Precise Recognition of Exons by the Splicing Machinery","sections":"How are short exons , embedded within vast intronic sequences , precisely recognized and processed by the splicing machinery ?, Despite decades of molecular and bioinformatic research , the features that allow recognition of exons remain poorly understood ., Various factors are thought to be of importance ., These include the splicing signals flanking the exon at both ends , known as the 5\u2032 and 3\u2032 splice sites ( 5\u2032ss and 3\u2032ss , respectively ) , auxiliary cis-elements known as exonic and intronic splicing enhancers and silencers ( ESE\/Ss and ISE\/S ) that promote or repress splice-site selection , respectively 1 , 2 , and exon 3 and intron length 4 ., There is an increasing body of evidence that secondary structure is a powerful modifier of splicing events 5\u201312 ., Secondary structure is 
thought to present binding sites for auxiliary splicing factors , correctly juxtapose widely separated cis-elements , and directly affect the accessibility of the splice sites ., However , only very few studies have used bioinformatic approaches to broadly study the effects of secondary structure on splicing 13\u201315 ., Many of the above-listed factors have been subjected to analysis in the context of comparison between constitutively and alternatively spliced exons ., It has been found , for example , that constitutively spliced exons are flanked by stronger splicing signals , that they contain more ESEs but fewer ESSs , and are longer but flanked by shorter introns with respect to their alternatively spliced counterparts ( reviewed in 16 ) ., However , to what extent do these features contribute to the selection of exons and allow discrimination between true exons and \u201cnon-exons\u201d , i . e . sequences resembling exons but not recognized by the splicing machinery ?, This question is fundamental for understanding the process of exon selection by the spliceosome , and yet has not been subjected to much analysis ., This is presumably because unlike alternatively and constitutively spliced exons , both of which are relatively easy to define computationally , defining a non-exon or a pseudo-exon is more of a challenge ., One approach is to compare exons to sequences of up to a certain length which are flanked by splicing signals exceeding a certain threshold 17 , 18 ., Although this approach is powerful and has contributed to the discovery of the \u201cvocabulary\u201d of exons , it is also limited ., The primary limitation is that it is circular: For the mere definition of pseudo-exons , we are forced to fix various features\u2014such as minimal splice site strength and exon length\u2014that we would prefer to infer ., To circumvent these obstacles , we have studied Alu exonization events ., Alu elements are primate-specific retroelements present at about 1 
. 1 million copies in the human genome ., A large portion of Alu elements reside within introns 19 ., Alus are dimeric , with two homologous but distinct monomers , termed left and right arms 20\u201322 ., During evolution , some intronic Alus accumulated mutations that led the splicing machinery to select them as internal exons , a process termed exonization 23\u201325 ., Such exonization events may occur either from the right or the left arm of the Alu sequence , but are observed predominantly in the antisense orientation relative to the mRNA precursor ., Almost invariably , such events give birth to an alternatively spliced exon , as a constitutively spliced exon would compromise the original transcriptomic repertoire and hence probably be deleterious 19 , 24 , 26 , 27 ., The fact that exonizing and non-exonizing Alus have retained high sequence similarity but are perceived as different by the splicing machinery makes them excellent candidates for studying the factors required for precise recognition of exons by the spliceosome ., The natural control group of non-exonizing Alus obviates the need to fix different parameters in the control set , and the high degree of sequence similarity shared by all Alus , regardless of whether they do or do not undergo exonization , enables direct comparison of a wide array of features ., Based on the comparison between Alu exons and their non-exonizing counterparts , we were able to identify several key features that characterize Alu exons and to determine the relative importance of these features in the process of Alu exonization ., A novel result of this comparison was the importance of pre-mRNA secondary structure: More thermodynamically stable predicted secondary structure in an Alu arm harboring a potential Alu exon decreases the probability of an exonization event originating from this Alu ., Thus , this study is among the first to provide wide-scale statistical proof of the importance of secondary structure in the 
context of exon selection ., We identified numerous further factors differentiating between Alu exons and non-exons , and integrated them in a machine learning classification model ., This model displayed a high performance in classifying Alu exons and non-exons ., Moreover , the strength of predictions by this model correlated with biological inclusion levels , and higher probabilities of exonization were given by the model to constitutive exons than to alternative ones ., These findings indicate that the features identified in this study may form the basis for precise exon selection , and make the difference between a non-selected element , an alternatively-selected element , and a constitutively selected one ., We set out to determine the features underlying the recognition of Alu exons by the splicing machinery ., We therefore required datasets of Alus that undergo and that do not undergo exonization ., We took advantage of the fact that Alu elements may exonize either from the right or from the left arm , and composed three core datasets ( Figure 1A ) : ( 1 ) A dataset of 313 Alu exons ( AEx ) that are exonized from the right Alu arm , termed AEx-R; ( 2 ) A dataset of 77 Alus that undergo exonization in the left arm , termed AEx-L; ( 3 ) A dataset of 74 , 470 intronic Alus lacking any evidence of exonization , called No AEx ., In all these datasets , Alus had to be embedded in the antisense orientation within genes , since most exonization events of Alus occur in this orientation 19 , 23 , 28 ., Finally , to allow direct comparison between parallel positions in different Alus , we used pairwise alignments to align each Alu in each of the datasets against an Alu consensus sequence ., We next computationally searched for the optimal borders , or splice sites , of non-exons within both the right arm and the left arm of the sequences in the No AEx dataset ., This was done in two steps: ( 1 ) We first empirically determined the positional windows in which the 
selected 3\u2032ss and 5\u2032ss appeared within exonizing Alus; ( 2 ) We next searched the above-determined positional windows for the highest scoring splicing signals ( see Materials and Methods ) ., We found that computational selection of the highest scoring splicing signal yielded a high extent of congruence ( ranging between 74%\u201396% , depending on the arm and on the signal ) with the \u201ctrue\u201d splicing signal based on EST data ., Since the congruence was not perfect , we created two control datasets based on the AEx-R and AEx-L group , termed AEx-R ( c ) and AEx-L ( c ) , respectively , in which exon borders were searched for computationally as in the No AEx dataset ., These two subsets were used to verify that differences between the exonizing and non-exonizing datasets were not due to the manner in which exons and non-exons were derived ( ESTs versus computational predictions ) ., To complete the picture , we computationally searched for non-exons within the right arm of the AEx-L group and in the left arm of the AEx-R group ., Notably , we demanded that all exons within all datasets have a minimal potential 3\u2032ss ( AG ) and 5\u2032ss ( GT\/GC ) , because lacking such minimal conditions Alus cannot undergo exonization at all ., Thus , our analyses are based on three core and two control sets of Alus with two sets of start and end coordinates mapped for each Alu\u2014one in the right arm and one in the left ( see Materials and Methods for further details ) ., Previous studies , based on much smaller datasets , implicated the 3\u2032ss 24 and the 5\u2032ss 26 splicing signals as major factors determining exonization events ., To assess whether this held for our dataset as well , we calculated the strength of the 5\u2032ss and 3\u2032ss of the exons\/non-exons in the right and in the left arms in each of the five datasets ., Indeed , we found that in the right arms the 3\u2032ss and the 5\u2032ss scores were highest among those Alus that 
underwent exonization ( Figure 1B and 1C , respectively ) ., Similarly , in the left arms , the scores of the 3\u2032ss and the 5\u2032ss are highest among the exonizing Alus ( Figure 1D and 1 E , respectively ) ., These results were highly statistically significant ( see Text S1 ) ., Moreover , these differences are even more pronounced when comparing the two control datasets to their non-exonizing counterparts ( compare the results for AEx-R and AEx-L to AEx-R ( c ) and AEx-L ( c ) , respectively , in Figure 1B\u2013E ) ., Thus , these analyses fit in with previous analyses emphasizing the role of the two major splicing signals ., We were interested in assessing the role of secondary structure in the context of Alu exonization events ., We therefore began by computing the thermodynamic stabilities of the secondary structures predicted for the Alus in each of the core datasets ., We used RNAfold 29 to calculate the secondary structure partition function; but rather than use this metric directly , we used a dinucleotide randomization approach to yield a Z-score that is not sensitive to sequence length or nucleotide composition ( see Materials and Methods ) ., We found that Alus that gave rise to exonization events , regardless of whether from the left or from the right arm , were characterized by weaker secondary structures than Alus that do not undergo exonization ( Figure 2A ) ., This was highly significant in the case of exonizations originating from the right arm ( AEx-R vs . No AEx p\\u200a=\\u200a9 . 8E\u221212 ) and of borderline significance for the left arm exonizations ( AEx-L vs . No AEx p\\u200a=\\u200a0 . 
07 ) ., This provided the first indication that strong secondary structures might prevent Alu exonizations ., To pinpoint the subsequences to which the differences in strength of secondary structure could be attributed , we next calculated secondary structure Z-scores for each of the two Alu arms separately ., We found that the secondary structures of the right and the left arms were weakest in cases in which these arms undergo exonization ( Figure 2B and 2C , respectively ) ., These changes relative to the No AEx group were highly significant ( p\\u200a=\\u200a2E\u221215 and p\\u200a=\\u200a1 . 08E\u22125 , respectively ) ., Interestingly , the non-exonizing arm tended to have weaker secondary structure in those cases in which the opposite arm underwent exonization ( p\\u200a=\\u200a0 . 001 when comparing the left arm of the AEx-R to the No AEx dataset , and p\\u200a=\\u200a0 . 055 when comparing the right arm of the AEx-L to the No AEx dataset ) ., These observations suggested that secondary structures have a detrimental effect on the recognition of Alu exons primarily when the structure incorporates sequence from the exon itself , but also when stable structures are located in relative proximity to the exon ., Secondary structure has been shown to impair exon recognition by affecting the accessibility of splice sites 8 , 9 , 11 , 12 , 30 ., To examine whether sequestration of splice sites within secondary structures plays a role in the context of Alu exonizations , we used a measure indicating the probability that all bases in a motif are unpaired ( denoted probability unpaired or PU value ) 31 ., Briefly , this measure indicates the probability that a motif , located within a longer sequence , is free of secondary structure ., Higher values indicate that the motif is more likely to be single stranded and lower values indicate a greater likelihood of participating in a secondary structure ( see Materials and Methods ) ., We assessed the single
strandedness of the two most frequently selected 5\u2032ss in the right arm , located at positions 156 and 176 relative to the consensus ( also termed sites B and C 28 ) , and the most frequently selected 5\u2032ss of the left arm , located at position 291 ( see Figure 2A ) ., We found that 5\u2032ss selected in exonization events are characterized by significantly higher PU values than their non-exonizing counterparts , indicating that selected 5\u2032ss have a lower tendency to participate in secondary structures ( see Figure 2E\u2013G ) ., We repeated this analysis for the two most frequently selected 3\u2032ss in the right arm and the most frequently selected 3\u2032ss in the left arm , but did not observe higher single-strandedness in the selected 3\u2032ss with respect to their non-selected counterparts ( data not shown ) ., However , this finding may also be attributed to the fact that all Alus , regardless of whether they undergo exonization or not , are characterized by relatively strong 3\u2032ss , due to the poly-T stretch characterizing them ( see Discussion ) ., See Text S1 for a description of a control analysis ., Intron-exon architecture has well-documented effects on splicing ., Therefore , we compared the lengths of the Alu exons to their counterpart non-exons ( diagram in Figure 3A ) ., We found that exons were \u223c10 nt longer than their non-exonizing counterparts ( Figure 3C and 3D ) ., Exons in the right arm of the AEx-R dataset were 112 nt long , on average , whereas non-exons were only 102 nt long in the No AEx dataset ., The same trend was observed in the AEx-L dataset: Exons in the left arm of the AEx-L dataset were 88 nt long , whereas the non-exons in the No AEx group were 78 nt long ., In both cases , the differences were highly statistically significant ( see Text S1 ) ., This indicates that increased exon length is an advantage in terms of exonization of Alu elements ., Analyzing the lengths of the flanking introns , we found that introns
flanking Alu exons were almost 50% shorter than those flanking their non-exonizing counterparts ., Introns upstream of Alu exons in the AEx-R or AEx-L dataset were 7 , 216 and 9 , 497 nt long , respectively , on average ( Figure 3B ) , but 14 , 458 nt long upstream of the non-exons in the No AEx group ., These differences were highly significant ( No AEx vs . AEx-R p\\u200a=\\u200a1 . 38E\u221213 , No AEx vs . AEx-L p\\u200a=\\u200a0 . 0047 ) ., Highly significant findings were observed in the downstream intron as well ., These introns were 7 , 844 and 9 , 210 nt long for exons in the AEx-R and AEx-L dataset , respectively , but 14 , 808 nt long for Alus in the No AEx dataset ( Figure 3E ) ., Taken together , these results indicate that recognition of exons by the splicing machinery correlates positively with exon length but negatively with intron length , yielding insight into the constraints and the mechanism of the splicing machinery ( see Discussion ) ., Based on both biologic and bioinformatic methodologies , datasets of exonic splicing enhancers ( ESEs ) and silencers ( ESSs ) have been compiled; these sequences are believed to increase or decrease , respectively , the spliceosome's ability to recognize exons ., Indeed , exons were found to be enriched in ESRs with respect to pseudo-exons or introns 32\u201334 ., Thus , our next step was to determine the densities of ESEs and ESSs in exons and non-exons ., We made use of four datasets of exonic splicing regulators ( ESRs ) : the groups of SR-protein binding sites in ESEfinder 35 , the dataset of ESEs from Fairbrother et al . 36 , the exonic splicing regulatory sequences compiled by Goren et al . that consist mostly of ESEs 37 , and the ESS dataset compiled by Wang et al .
38 ., For each exon ( or non-exon ) in the two Alu arms ( Figure 4A ) in the three core and two control datasets , we calculated the ESR density for the four groups of ESRs ., The ESR density was calculated as the total number of nucleotides within an exon that overlap with motifs from a given dataset divided by the length of the exon ., We found that Alu exons showed a marked tendency for enrichment in ESEs and depletion in ESSs with respect to their non-exonizing counterparts ., Right arm Alu exons had significantly higher densities of ESEfinder ESEs than their counterparts in the No AEx group ( Figure 4B , p\\u200a=\\u200a0 . 00007 ) and higher densities of ESEs from Fairbrother et al . ( Figure 4C , p\\u200a=\\u200a0 . 00009 ) ., Higher densities were also observed in terms of ESEs found in Goren et al . ( Figure 4D ) , whereas slightly lower densities were observed for the ESSs of Wang et al . ( Figure 4E ) ; however , the trends for the latter two datasets were not statistically significant ., In the left arms , similar tendencies were observed: Exons originating from this arm were highly enriched in ESEs of Goren et al . ( Figure 4H , p\\u200a=\\u200a0 . 0001 ) and depleted in ESSs of Wang et al . ( Figure 4I , p\\u200a=\\u200a0 . 0003 ) ., They also tended to be enriched in ESEs of Fairbrother et al . ( Figure 4G ) , although this was not significant ( p\\u200a=\\u200a0 . 12 ) ; and in this arm no differences were found in terms of ESEs of ESEfinder ( Figure 4F , p\\u200a=\\u200a0 .
72 ) ., To summarize , in all cases in which significant differences were observed , these differences reflect an increase in ESE densities in parallel with a decrease in ESS densities in exons relative to non-exons ., Since the splicing machinery is able to differentiate between exonizing and non-exonizing Alus , we were interested in discovering whether the features identified here can give rise to such precise classification ., Toward this aim , we used Support Vector Machine ( SVM ) machine learning , which has shown excellent empirical performance in a wide range of applications in science , medicine , engineering , and bioinformatics 39 ., We created two classifiers: One discriminating between non-exonizing Alus and Alus exonizing from the right arm and one discriminating between non-exonizing Alus and Alus exonizing from the left arm ., Receiver operating characteristic ( ROC ) curves were used to test performance ., Briefly , ROC curves measure the tradeoff between sensitivity and specificity of a given classification ., A perfect classification with 100% sensitivity and 100% specificity will yield an area under the curve ( AUC ) of 1 , whereas a random classification will yield an AUC of 0 .
5 ( see Materials and Methods for complete details of the SVM protocol used ) ., Fourteen features were selected for the machine learning ., These were divided into 5 clusters: 5\u2032ss strength ( 1 feature: 5\u2032ss score ) , 3\u2032ss strength ( 1 feature: 3\u2032ss score ) , secondary structure ( 5 features: z-scores for the stability of secondary structure of the entire Alu and of each of the two Alu arms , PU values of the 5\u2032ss , and PU values of the 3\u2032ss ) , exon-intron architecture ( 3 features: lengths of upstream intron , of Alu exon , and of downstream intron ) , and ESRs ( 4 features: density in terms of each of the 4 groups of ESRs ) ., Based on the above-described features , we were able to achieve a high degree of classification between exonizing and non-exonizing Alus ., Figure 5A presents the ROC curves and AUC values for the classification between Alus exonizing from the right arm and non-exonizing Alus and Figure 5B presents these values for the classification between the Alus exonizing from the left arm and the non-exonizing ones ., The AUC values of \u223c0 . 91 demonstrate that our features achieve a high degree of accuracy in discriminating between true exons and non-exons , thus mimicking the role of the splicing machinery ., If selection of an Alu exon is indeed determined by this set of features , then this same set of features may well also determine the inclusion level of an Alu exon ., A \u201cstrong\u201d set of features will lead to a high selection rate by the spliceosome , and hence to high inclusion levels , whereas \u201cweaker\u201d features may lead to a reduced selection rate by the spliceosome and to lower inclusion levels ., Indeed , we found a positive , highly significant correlation between probabilities of exonization based on the SVM model and inclusion levels of exons based on EST data in the case of right arm Alu exons ( Pearson , r\\u200a=\\u200a0 . 28 , p\\u200a=\\u200a6 .
35e\u221207 ) ., For the sake of comparison , the correlation between 5\u2032ss scores and inclusion levels is considerably lower and less significant ( r\\u200a=\\u200a0 . 15 , p\\u200a=\\u200a0 . 007 ) ., Thus , although the computational model was explicitly trained on the basis of a dichotomous input ( Alus were labeled either as exonizing or as non-exonizing ) , the model managed to capture the more stochastic nature of the spliceosomal recognition of exons ., A positive correlation existed in the left arm as well , but this correlation was not significant , presumably due to the smaller number of Alus in the AEx-L dataset ., Although our model was trained on Alus , and specifically on comparing non-exonizing Alus to mostly alternatively recognized Alus , we reasoned that the same set of features that makes the difference between a non-recognized and an alternatively-recognized Alu exon might also make the difference between an alternatively recognized exon and a constitutively recognized one ., We therefore applied the SVM model to datasets of constitutive and cassette exons ., For this purpose , we generated a dataset of 55 , 037 constitutive and 3 , 040 cassette exons based on EST data ( see Materials and Methods ) ., For each of these exons , we first extracted all above-described features , and then applied the SVM model to them ., Our model classified constitutive and alternative exons as different in a highly statistically significant manner ., The mean probability of undergoing exonization , provided by the logistic regression transformed SVM model , was 73% for the constitutive exons , but only 60% for the alternative ones ( Mann-Whitney , p<2 .
2e\u221216 ) ., In addition , 82% of the constitutive exons were classified as \u201cexonizing\u201d , in comparison to only 63% of the alternative exons ., These results demonstrate that the features learned by the SVM model are relevant for exonization in general , and control not only the shift of non-exons to alternative ones , but also of alternative exons to constitutive ones ., Finally , we were interested in assessing the importance of different features in allowing correct discrimination between exonizing and non-exonizing elements ., For this purpose , we used \u0394AUC to measure the contribution of each feature cluster ., This measure compares the performance of the classification with and without each cluster of features , with greater differences indicating greater contribution of a given cluster of features to precise classification ., The feature with the highest contribution , both in the right arm ( Figure 5C ) and in the left arm ( Figure 5D ) , was the strength of the 5\u2032ss , in concordance with previous bioinformatic findings 26 ., However , much information is included in the other features as well ., The second most important feature both in the left and in the right arm was exon-intron architecture ., Secondary structure and the 3\u2032ss had a comparable contribution in the right and left arm ., Despite the differences in terms of ESR densities between the different datasets , this feature cluster had a negligible contribution to classification in the right arm , and a slightly higher one in the left arm ., Using a mutual information-based metric to measure the contribution of the different features yielded similar , consistent results ( see Text S1 ) ., In this study , we sought to determine how the splicing machinery distinguishes true exons from non-exons ., Alu exonization provided a powerful model for approaching this question ., Exonizing Alus have retained high sequence similarity to their non-exonizing counterparts but are
perceived differently by the splicing machinery ., Past studies have emphasized mainly the splice sites , but our results indicate the importance of additional features that lead to exonization ., These features , which include splicing signals ( splice sites and ESRs ) , exon-intron architecture , and secondary structural features , achieved a high degree of classification between true Alu exons and non-exons , demonstrating the biological relevance of these layers in determining and controlling exonization events ., Perhaps the most interesting result to emerge from this study is that secondary structure is critical for exon recognition ., It has been assumed that pre-mRNA is coated in vivo by proteins 10 and that these RNA-protein interactions either prevent pre-mRNAs from folding into stable secondary structures 40 or provide pre-mRNAs with a limited time span for folding 41 ., However , an increasing number of studies are finding that secondary structure plays a crucial role in the regulation of splicing ., Secondary structures involving entire exons ( e . g . , 5\u20137 ) , the splice sites only ( e . g . , 8 , 11 , 12 ) , or specific regulatory elements 42 , 43 were shown to be involved in the regulation of alternative splicing ., Hiller et al .
14 recently found that regulatory elements within their natural pre-mRNA context were significantly more single stranded than controls ., Our current study puts these findings into a broad context , and provides bioinformatic evidence for the notion that the structural context of splicing motifs is part of the splicing code ., Stable secondary structure , as we have shown , is detrimental to exonization in general , and specifically so if it overlaps the 5\u2032ss ., Several intriguing observations can be made when merging our results based on the exonizing and non-exonizing Alus with those of the alternative and constitutive datasets ., In terms of inclusion level , these four groups form a continuum , with non-exonizing Alus having a 0% inclusion level , exonizing Alus having a mean inclusion level of 10% , cassette exons having a mean inclusion level of 25% , and constitutive exons being included in 100% of the cases ., Gradual changes when moving from non-exonizing Alus , to exonizing Alus , to alternative exons , to constitutive ones are observed in several additional features: The strength of the 5\u2032ss gradually increases from non-exonizing Alus to constitutive exons , the strength of the secondary structure gradually decreases , and the lengths of the upstream and downstream introns gradually decrease while the length of the exons gradually increases ( see Figure 6 for detailed values ) ., These gradual changes are all coherent in biological terms: Stronger 5\u2032 splice sites allow higher affinity of binding between the spliceosomal snRNAs and the 5\u2032ss , and have well-documented effects in increasing exon selection 28 , 44; stronger secondary structure can sequester binding sites of spliceosomal components; and it has been previously shown that longer flanking introns profoundly increase the likelihood that an exon is alternatively spliced 4 , and that alternative exons tend to be shorter than their constitutive counterparts ( reviewed by 16 ) , presumably due to
spliceosomal constraints ., In addition , our finding that selective constraints are simultaneously applied both on the lengths of the exons and of their flanking introns suggests that the exon and its flanking introns are recognized , to some extent , as a unit ., This challenges the more traditional exon-definition and intron-definition models 3 , 45 , according to which either the exon , or its flanking introns , but not both , are recognized by the splicing machinery ., Notably , in our search for features differentiating between exonizing and non-exonizing Alus , we focused only on features which can potentially be mechanistically employed by the splicing machinery to differentiate between exons and introns ., For this reason , we did not use phylogenetic conservation , nor the age of the Alu exons , nor the location of the exonization event ( CDS vs . UTR ) as features ., Although these features are informative as well ( see Text S1 , and 32 ) , and thus may potentially boost the performance of our classifier , these cannot be directly sensed by the spliceosome ., Rather , these elements reflect the evolutionary pressures to which an exonizing Alu element is subjected ., In our study we found that introns flanking exonizing Alus are dramatically shorter than the introns flanking their non-exonizing counterparts ., These results appear to contradict recent results 46 according to which there is a tendency for new exons to form within longer introns ., However , two points must be borne in mind in this context: First , the introns flanking exonizing Alus are longer than average introns , and thus our results are consistent with the above study in that exonizations occur in longer introns ., Second , our findings may reflect an upper bound in terms of intron length within which exonization optimally occurs , and introns longer than a certain threshold may cease to be good candidates for exonization ., Our results indicate that the Alu-trained model could be 
applied to a more general context of alternative and constitutive exons , where it yielded coherent results ., This does not , however , imply that all findings made in the context of Alus can be directly extrapolated to exons in general ., For Alu sequences , we found the 5\u2032ss to be the most informative feature for correctly predicting exonization events , in agreement with previous findings 26 , 28 ., We found , however , that the 3\u2032ss , which was also found to play a major role in exonization 24 , is less critical ., This finding may not necessarily hold for all exons ., The relatively low contribution of the 3\u2032ss to Alu exonization may reflect the general tendency of Alus to have relatively strong splice signals at their 3\u2032 end , regardless of whether they undergo exonization or not ., This is because the poly-T stretch , present in all Alus in the antisense orientation , serves as a strong polypyrimidine tract 24 , 47 ., On the other hand , our results regarding the importance of ESRs are consistent with several previous studies that have found exons to be enriched in ESRs with respect to pseudo-exons , more poorly recognized exons , and introns 32\u201334 ., Thus , while the importance of different features may vary from one exon to another , our results provide a general understanding of the features impacting exon recognition ., It is noteworthy that the majority of Alu exonization events in our two exonizing datasets presumably reflect either errors of the splicing machinery or newly born exons , which presumably do not give rise to functional proteins ( see also 48 ) ., This is indicated by the low inclusion level of the Alu exons , averaging 13% and 10% in the AEx-R and AEx-L groups , respectively ., In addition , the symmetry of the Alu exons ( i . e .
, divisibility-by-three ) , at least in the AEx-R dataset , is very low: Only 23% of the exons are symmetric ( in the AEx-L dataset 55% of the Alus are symmetric ) ., Thus , the majority of Alus in this dataset insert a frame-shift mutation ., These numbers contrast with the 73% symmetry found in alternative events conserved between human and mouse 49 ., However , since our objective in this research was to understand the requirements of the spliceosome , the potential function of the transcript is irrelevant ., Moreover , newly born alternatively spliced Alu exons are the raw materials for future evolution: Given the right conditions and time , further mutations might generate a functional reading frame ., The features identified here provided good , but not perfect , classification using machine learning ., A number of factors underlie the non-perfect classification: For example , EST data is very noisy and far from providing a comprehensive coverage of all genes in all tissues 50 ., Therefore , many Alus categorized as non-e","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Despite decades of research , the question of how the mRNA splicing machinery precisely identifies short exonic islands within the vast intronic oceans remains to a large extent obscure ., In this study , we analyzed Alu exonization events , aiming to understand the requirements for correct selection of exons ., Comparison of exonizing Alus to their non-exonizing counterparts is informative because Alus in these two groups have retained high sequence similarity but are perceived differently by the splicing machinery ., We identified and characterized numerous features used by the splicing machinery to discriminate between Alu exons and their non-exonizing counterparts ., Of these , the most novel is secondary structure: Alu exons in general and their 5\u2032 splice sites ( 5\u2032ss ) in particular are characterized by decreased stability of local secondary 
structures with respect to their non-exonizing counterparts ., We detected numerous further differences between Alu exons and their non-exonizing counterparts , among others in terms of exon\u2013intron architecture and strength of splicing signals , enhancers , and silencers ., Support vector machine analysis revealed that these features allow a high level of discrimination ( AUC\\u200a=\\u200a0 . 91 ) between exonizing and non-exonizing Alus ., Moreover , the computationally derived probabilities of exonization significantly correlated with the biological inclusion level of the Alu exons , and the model could also be extended to general datasets of constitutive and alternative exons ., This indicates that the features detected and explored in this study provide the basis not only for precise exon selection but also for the fine-tuned regulation thereof , manifested in cases of alternative splicing .","summary":"A typical human gene consists of 9 exons around 150 nucleotides in length , separated by introns that are \u223c3 , 000 nucleotides long ., The challenge of the splicing machinery is to precisely identify and ligate the exons , while removing the introns ., We aimed to understand how the splicing machinery meets this momentous challenge , based on Alu exonization events ., Alus are transposable elements , of which approximately one million copies exist in the human genome , a large portion of which within introns ., Throughout evolution , some intronic Alus accumulated mutations and became recognized by the splicing machinery as exons , a process termed exonization ., Such Alus remain highly similar to their non-exonizing counterparts but are perceived as different by the splicing machinery ., By comparing exonizing Alus to their non-exonizing counterparts , we were able to identify numerous features in which they differ and which presumably lead to the recognition only of the former by the splicing machinery ., Our findings reveal insights regarding the 
role of local RNA secondary structures , exon\u2013intron architecture constraints , and splicing regulatory signals ., We integrated these features in a computational model , which was able to successfully mimic the function of the splicing machinery and discriminate between true Alu exons and their intronic counterparts , highlighting the functional importance of these features .","keywords":"computational biology\/alternative splicing","toc":null} +{"Unnamed: 0":1428,"id":"journal.pcbi.1006328","year":2018,"title":"Inter-trial effects in visual pop-out search: Factorial comparison of Bayesian updating models","sections":"In everyday life , we are continuously engaged in selecting visual information to achieve our action goals , as the amount of information we receive at any time exceeds the available processing capacity ., The mechanisms mediating attentional selection enable us to act efficiently by prioritizing task-relevant , and deprioritizing irrelevant , information ., Of importance for the question at issue in the present study , the settings that ensure effective action in particular task episodes are , by default , buffered by the attentional control system and carried over to subsequent task episodes , facilitating performance if the settings are still applicable and , respectively , impairing performance if they no longer apply owing to changes in the task situation ( in which case the settings need to be adapted accordingly ) ., In fact , in visual search tasks , such automatic carry-over effects may account for more of the variance in the response times ( RTs ) than deliberate , top-down task set 1 ., A prime piece of evidence in this context is visual search for so-called singleton targets , that is , targets defined by being unique relative to the background of non-target ( or distractor ) items , whether they differ from the background by one unique feature ( simple feature singletons ) or a unique conjunction of features ( conjunction singletons 
) : singleton search is expedited ( or slowed ) when critical properties of the stimuli repeat ( or change ) across trials ., Such inter-trial effects have been found for repetitions\/switches of , for example , the target-defining color 2 , 3 , size 4 , position 5 , and , more generally , the target-defining feature dimension 6 , 7 ., The latter has been referred to as the dimension repetition\/switch effect , that is: responding to a target repeated from the same dimension ( e . g . , color ) is expedited even when the precise target feature is different across trials ( e . g . , changing from blue on one trial to red on the next ) , whereas a target switch from one dimension to another ( e . g . , from orientation to color ) causes a reaction time cost ( \u2018dimension repetition effect\u2019 , DRE ) 8\u201310 ., While inter-trial effects have been extensively studied , the precise nature of the processes that are being affected remains unclear ., Much of the recent work has been concerned with the issue of the processing stage ( s ) at which inter-trial effects arise ( for a review , see 11 ) ., M\u00fcller and colleagues proposed that inter-trial effects , in particular the dimension repetition effect , reflect facilitation of search processes prior to focal-attentional selection ( at a pre-attentive stage of saliency computation ) 10 ., However , using a non-search paradigm with a single item presented at a fixed ( central ) screen location , Mortier et al .
12 obtained a similar pattern of inter-trial effects\u2013leading them to conclude that the DRE arises at the post-selective stage of response selection ., Rangelov and colleagues 13 demonstrated that DRE effects can originate from distinct mechanisms in search tasks imposing different task demands ( singleton feature detection and feature discrimination ) : pre-attentive weighting of the dimension-specific feature contrast signals and post-selective stimulus processing\u2013leading them to argue in favor of a multiple weighting systems hypothesis ., Based on the priming of pop-out search paradigm , a similar conclusion 11 has also been proposed , namely that inter-trial effects arise from both attentional selection and post-selective retrieval of memory traces from previous trials 4 , 14 , favoring a dual-stage account 15 ., It is important to note that those studies adopted very different paradigms and tasks to examine the origins of inter-trial effects , and their analyses are near-exclusively based on differences in mean RTs ., Although such analyses are perfectly valid , much information about trial-by-trial changes is lost ., Recent studies have shown that the RT distribution imposes important constraints on theories of visual search 16 , 17 ., RT distributions in many different task domains have been successfully modeled as resulting from a process of evidence accumulation 18 , 19 ., One influential evidence accumulation model is the drift-diffusion model ( DDM ) 20\u201322 ., In the DDM , observers sequentially accumulate multiple pieces of evidence , each in the form of a log likelihood ratio of two alternative decision outcomes ( e . g . , target present vs .
absent ) , and make a response when the decision information reaches a threshold ( see Fig 1 ) ., The decision process is governed by three distinct components: a tendency to drift towards either boundary ( drift rate ) , the separation between the decision boundaries ( boundary separation ) , and a starting point ., These components can be estimated for any given experimental condition and observer by fitting the model to the RT distribution obtained for that condition and observer ., Estimating these components makes it possible to address a question that is related to , yet separate from , the issue of the critical processing stage ( s ) and that has received relatively less attention: do the faster RTs after stimulus repetition reflect more efficient stimulus processing , for example: expedited guidance of attention to more informative parts of the stimulus , or rather a bias towards giving a particular one of the two alternative responses or , respectively , a tendency to require less evidence before issuing either response ., The first possibility , more efficient processing , would predict an increase in the drift rate , that is , a higher speed of evidence accumulation ., A bias towards one response or a tendency to require less evidence would , on the other hand , predict a decreased distance between the starting point and the decision boundary associated with that response ., In the case of bias , this would involve a shift of the starting point towards that boundary , while a tendency to require less evidence would be reflected in a decrease of the boundary separation ., While response bias is more likely associated with changes at the post-selective ( rather than pre-attentive ) processing stage , the independence of the response selection and the attentional selection stage has been challenged 23 ., For simple motor latencies and simple-detection and pop-out search tasks 24 , there is another parsimonious yet powerful model , namely the LATER ( Linear
Approach to Threshold with Ergodic Rate ) model 25 , 26 ., Unlike the drift-diffusion model , which assumes that evidence strength varies across the accumulative process , the LATER model assumes that evidence is accumulated at a constant rate during any individual perceptual decision , but that this rate varies randomly across trials following a normal distribution ( see Fig 1 ) ., Such a pattern has been observed , for instance , in the rate of build-up of neural activity in the motor cortex of monkeys performing a saccade-to-target task 27 ., Similar to the DDM , the LATER model has three important parameters: the ergodic rate, ( r ) , the boundary separation ( \u03b8 ) , and a starting point ( S0 ) ., However , the boundary separation and starting point are not independent , since the output of the model is completely determined by the rate and the separation between the starting point and the boundary; thus , in effect , the LATER model has only two parameters ., The evidence accumulation process can be interpreted in terms of Bayesian probability theory 26 , 28 ., On this interpretation , the linear approach to threshold with ergodic rate represents the build-up of the posterior probability that results from adding up the log likelihood ratio ( i . e . 
, evidence ) of a certain choice being the correct one and the initial bias that derives from the prior probability of two choices ., The prior probability should affect the starting point S0 of the evidence accumulation process: S0 should be the closer to the boundary the higher the prior probability of the outcome that boundary represents ., The drift rate , by contrast , should be influenced by any factor that facilitates or impedes efficient accumulation of task-relevant sensory evidence , such as spatial attentional selection ., The present study was designed to clarify the nature of the inter-trial effects for manipulations of target presence and the target-defining dimension as well as inter-trial dimension repetitions and switches ., If inter-trial effects reflect a decision bias , this should be reflected in changes of the decision boundary and\/or the starting point ., By contrast , if inter-trial effects reflect changes in processing efficiency , which might result from allocating more attentional resources ( or weight ) to the processing of the repeated feature\/dimension 6 , the accumulation rate r should be changed ., Note that neither the DDM nor the LATER model provides any indication of how the initial starting point might change across trials ., Given that the inter-trial effects are indicative of the underlying trial-by-trial dynamics , we aimed to further analyze trial-wise changes of the prior and the accumulation rate , and examine how a new prior is learned when the stimulus statistics change , as reflected in changes of the starting point to decision boundary separation during the learning process ., To address these inter-trial dynamics , we adopted the Dynamic Belief Model ( DBM ) 29 ., The DBM has been successfully used to explain why performance on many tasks is better when a stimulus matches local patterns in the stimulus history even in a randomized design where it is not actually possible to use stimulus history for ( 
better-than-chance ) prediction ., Inter-trial effects arise naturally in the DBM ., This is because the DBM assumes a prior belief about non-stationarity , that is: participants are updating their beliefs about the current stimulus statistics while assuming that these can change at any time ., The assumption of non-stationarity leads to something similar to exponential discounting of previous evidence , that is , the weight assigned to previous evidence decreases exponentially with the time ( or number of updating events ) since it was acquired ., Consequently , current beliefs about what is most likely to happen on an upcoming trial will always be significantly influenced by what occurred on the previous trial , resulting in inter-trial effects ., Thus , here we combine a belief-updating model closely based on the DBM , for modelling the learning of the prior , with the DDM and , respectively , the LATER model for predicting RTs ., A very similar model has previously been proposed to explain results in saccade-to-target experiments 30 ., We also consider the possibility that the evidence accumulation rate as well as the starting point may change from trial to trial ., To distinguish between different possible ways in which stimulus history could have an influence via updating of the starting point and\/or the rate , we performed three visual search experiments , using both a detection and a discrimination task and manipulating the probability of target presence , as well as the target-defining dimension ., Based on the RT data , we then performed a factorial model comparison ( cf . 
31 ) , where both the response history and the history of the target dimension can affect either the starting point or the rate ., The results show that the model that best explains both the effects of our probability manipulation and the inter-trial effects is the one in which the starting point is updated based on response history and the rate is updated based on the history of the target dimension ., The singleton search was quite easy , with participants making few errors overall: mean error rates were 1 . 5% , 2 . 5% , and 3 . 3% in Experiments 1 , 2 , and 3 respectively ( Fig 2 ) ., Despite the low average error rates , error rates differed significantly between blocks in both Experiments 1 and 2 F ( 1 . 34 , 14 . 78 ) =11 . 50 , p<0 . 01 , \u03b7p2=0 . 51 , BF=8372 , and F ( 2 , 22 ) =12 . 20 , p<0 . 001 , \u03b7p2=0 . 53 , BF=3729 , respectively: as indicated by post-hoc comparisons ( S1 Text ) , error rates were higher in the low-frequency blocks compared to the medium- and high-frequency blocks , without a significant difference between the latter ., In addition , in Experiment 1 , error rates were overall higher for target-present than for target-absent trials , that is , there were more misses than false alarms , F ( 1 , 11 ) =11 . 43 , p<0 . 01 , \u03b7p2=0 . 51 , BF=75 ., In contrast , there was no difference in error rates between color and orientation targets in Experiment 2 , F ( 1 , 11 ) = 0 . 70 , p = 0 . 42 , BF = 0 . 33 ., In Experiment 3 , there was no manipulation of target ( or dimension ) frequency , but like in Experiment 1 , error rates were higher on target-present than on target-absent trials , t ( 11 ) = 4 . 25 , p < 0 . 01 , BF = 30 . 7; and similar to Experiment 2 , there was no significant difference in error rates between color and orientation targets , t ( 11 ) = 1 . 51 , p = 0 . 16 , BF = 0 . 
71 ., Given the low error rates , we analyzed only RTs from trials with a correct response , though excluding outliers , defined as trials on which the inverse RT ( i . e . , 1\/RT ) was more than three standard deviations from the mean for any individual participant ., Fig 3 presents the pattern of mean RTs for all three experiments ., In both Experiments 1 and 2 , the main effect of frequency was significant F ( 2 , 22 ) =10 . 25 , p<0 . 001 , \u03b7p2=0 . 48 , BF=73 , and , respectively , F ( 1 . 27 , 13 . 96 ) =29 . 83 , p<0 . 01 , \u03b7p2=0 . 73 , BF=8 . 7*108 ., Post-hoc comparisons ( see S2 Text ) confirmed RTs to be faster in high-frequency compared to low-frequency blocks , indicative of participants adapting to the stimulus statistics in a way such as to permit faster responses to the most frequent type of trial within a given block ., In addition , in Experiment 1 , RTs were faster for target-present than for target-absent trials F ( 1 , 11 ) =5 . 94 , p<0 . 05 , \u03b7p2=0 . 
35 , BF=51 , consistent with the visual search literature ., In contrast , there was no difference between color- and orientation-defined target trials in Experiment 2 , and no interaction between target condition and frequency in either Experiment 1 or 2 ( S2 Text ) \u2013suggesting that the effect of frequency is independent of the target stimuli ., Comparing the error rates depicted in Fig 2 and the mean RTs in Fig 3 , error rates tended to be lower for those frequency conditions for which RTs were faster ., While this rules out simple speed-accuracy trade-offs , it indicates that participants were adapting to the statistics of the stimuli in a way that permitted faster and more accurate responding to the most frequent type of trial within a given block , at the cost of slower and less accurate responding on the less frequent trial type ., A possible explanation of these effects is a shift of the starting point of a drift-diffusion model towards the boundary associated with the response associated with the most frequent type of trial; as will be seen below ( in the modeling section ) , the shapes of the RT distributions were consistent with this interpretation ., Without a manipulation of frequency , Experiment 3 yielded a standard outcome: all three types of trial yielded similar mean RTs , F ( 2 , 22 ) = 2 . 15 , p = 0 . 14 , BF = 0 . 
71 ., This is different from Experiment 1 , in which target-absent RTs were significantly slower than target-present RTs ., This difference was likely obtained because the target-defining dimension was kept constant within short mini-blocks in Experiment 1 , but varied randomly across trials in Experiment 3 , yielding a dimension switch cost and therefore slower average RTs on target-present trials ( see modeling section for further confirmation of this interpretation ) ., Given our focus on inter-trial dynamic changes in RTs , we compared trials on which the target condition was switched to trials on which it was repeated from the previous trial ., Fig 4 illustrates the inter-trial effects for all three experiments ., RTs were significantly faster on target-repeat than on target-switch trials , in all experiments: Experiment 1 F ( 1 , 11 ) =6 . 13 , p<0 . 05 , \u03b7p2=0 . 36 , BF=0 . 81 , Experiment 2 F ( 1 , 11 ) =71 . 29 , p<0 . 001 , \u03b7p2=0 . 87 , BF=2 . 6*107 , and Experiment 3 F ( 1 , 11 ) =32 . 68 , p<0 . 001 , \u03b7p2=0 . 75 , BF=625 ., Note that for Experiment 1 , despite the significant target-repeat\/switch effect , the \u2018inclusion\u2019 BF ( see Methods ) suggests that this factor is negligible compared to other factors; a further post-hoc comparison of repeat versus switch trials has a BF of 5 . 88 , compatible with the ANOVA test ., The target repetition effect in all three experiments is consistent with trial-wise updating of an internal model ( see the modeling section ) ., The target repetition\/switch effect was larger for target-absent responses ( i . e . , comparing repetition of target absence to a switch from target presence to absence ) than for target-present responses in Experiment 3 ( interaction inter-trial condition x target condition , F ( 1 , 11 ) =14 . 80 , p<0 . 01 , \u03b7p2=0 . 57 , BF=18 ) , while there was no such difference in Experiment 1 , F ( 1 , 11 ) = 2 . 55 , p = 0 . 14 , BF = 0 .
43 , and also no interaction between target dimension and inter-trial condition in Experiment 2 , F ( 1 , 11 ) = 0 . 014 , p = 0 . 91 , BF = 0 . 76 ., These findings suggest that , while the target repetition\/switch effect as such is stable across experiments , its magnitude may fluctuate depending on the experimental condition ., The interaction between target condition and inter-trial condition seen in Experiment 3 , but not in Experiment 1 , is likely attributable to the fact that color and orientation targets were randomly interleaved in Experiment 3 , so that target-present repetitions include trials on which the target dimension did either repeat or change\u2013whereas the target dimension was invariably repeated on consecutive target-present trials in Experiment 1 ., The effects of repeating\/switching the target dimension are considered further below ., Note that in all experiments , we mapped two alternative target conditions to two fixed alternative responses ., The repetition and switch effects described above may be partly due to response repetitions and switches ., To further examine dimension repetition\/switch effects when both dimensions were mapped to the same response , we extracted those target-present trials from Experiment 3 on which a target was also present on the immediately preceding trial ., Fig 5 depicts the mean RTs for the dimension-repeat versus -switch trials ., RTs were faster when the target dimension repeated compared to when it switched , F ( 1 , 11 ) =25 . 06 , p<0 . 001 , \u03b7p2=0 . 70 , BF=1905 , where this effect was of a similar magnitude for color- and orientation-defined targets interaction target dimension x dimension repetition , F ( 1 , 11 ) = 0 . 44 , p = 0 . 84 , BF = 0 . 33 ., There was also no overall RT difference between the two types of target main effect of target dimension , F ( 1 , 11 ) = 0 . 16 , p = 0 . 69 , BF = 0 . 
34 , indicating that the color and orientation targets were equally salient ., This pattern of dimension repetition\/switch effects is in line with the dimension-weighting account 8 ., Of note , there was little evidence of a dimension repetition benefit from two trials back , that is , from trial n-2 to trial n: the effect was very small ( 3 ms ) and not statistically significant t ( 23 ) = 0 . 81 , p = 0 . 43 , BF = 0 . 38 ., In addition to inter-trial effects from repetition versus switching of the target dimension , there may also be effects of repeating\/switching the individual target-defining features ., To examine for such effects , we extracted those trials on which a target was present and the target dimension stayed the same as on the preceding trial , and examined them for ( intra-dimension ) target feature repetition\/switch effects ., See Fig 6 for the resulting mean RTs ., In Experiments 1 and 3 , there was no significant main effect of feature repetition\/switch Exp . 1: F ( 1 , 11 ) = 0 . 30 , p = 0 . 593 , BF = 0 . 30 , Exp . 3: F ( 1 , 11 ) = 3 . 77 , p = 0 . 078 , BF = 0 . 76 , nor was there an interaction with target dimension Exp . 1: F ( 1 , 11 ) = 2 . 122 , p = 0 . 17 , BF = 0 . 44 , Exp . 3: F ( 1 , 11 ) = 0 . 007 , p = 0 . 93 , BF = 0 . 38 ., In contrast , in Experiment 2 ( which required an explicit target dimension response ) , RTs were significantly faster when the target feature repeated compared to when it switched within the same dimension , F ( 1 , 11 ) =35 . 535 , p<0 . 001 , \u03b7p2=0 . 764 , BF=13 , and this effect did not differ between the target-defining , color and orientation , dimensions , F ( 1 , 11 ) = 1 . 858 , p = 0 . 2 , BF = 0 . 57 ., Note though that , even in Experiment 2 , this feature repetition\/switch effect was smaller than the effect of dimension repetition\/switch ( 20 vs . 54 ms , t ( 11 ) = 5 . 20 , p<0 . 
001 , BF = 122 ) ., In summary , the results revealed RTs to be expedited when target presence or absence or , respectively , the target-defining dimension ( on target-present trials ) was repeated on consecutive trials ., However , the origin of these inter-trial effects is unclear: The faster RTs for cross-trial repetitions could reflect either more efficient stimulus processing ( e . g . , as a result of greater \u2018attentional weight\u2019 being assigned to a repeated target dimension ) or a response bias ( e . g . , an inclination to respond \u2018target present\u2019 based on less evidence on repeat trials ) , or both ., In the next section , we will address the origin ( s ) of the inter-trial effects by comparing a range of generative computational models and determining which parameters are likely involved in producing these effects ., Because feature-specific inter-trial effects , if reliable at all ( they were significant only in Exp . 2 , which required an explicit target dimension response ) , were smaller than the inter-trial effects related to either target presence\/absence or the target-defining dimension ( e . g . , in Exp . 3 , a significant dimension-based inter-trial effect of 39 ms compares with a non-significant feature-based effect of 11 ms ) , we chose to ignore the feature-related effect in our modeling attempt ., With the full combination of the four factors , there were 144 ( 2 x 2 x 6 x 6 ) models altogether for comparison: non-decision time ( with\/without ) , evidence accumulation models ( DDM vs . 
LATER ) , RDF-based updating ( 6 factor levels ) , and TDD-based updating ( 6 factor levels ) ., We fitted all models to individual-participant data across the three experiments , which , with 12 participants per experiment , yielded 5184 fitted models ( see S7 Text for RT distributions and model fits for the factor levels with no updating but with a non-decision time ) ., Several data sets could not be fitted with the full memory version of the starting point updating level ( i . e . , Level 2 ) of the dimension-based updating factor , due to the parameter updating to an extreme ., We therefore excluded this level from further comparison ., To obtain a better picture of the best model predictions , we plotted predicted versus observed RTs in Fig 11 ., Each point represents the average RT over all trials from one ratio condition , one trial condition , and one inter-trial condition in a single participant ., There are 144 points each for Experiments 1 and 2 ( 12 participants x 3 ratios x 2 trial conditions x 2 inter-trial conditions ) and 108 for Experiment 3 ( 12 participants x 3 trial conditions x 3 inter-trial conditions ) ., The predictions were made based on the best model for each experiment , in terms of the average AIC ( see Figs 8 , 9 and 10 ) ., The r2 value of the best linear fit is 0 . 85 for Experiment 1 , 0 . 86 for Experiment 2 , and 0 . 98 for Experiment 3 , and 0 . 
89 for all the data combined ., Fig 12 presents examples of how the starting point ( S0 ) and rate were updated according to the best model ( in AIC terms ) for each experiment ., For all experiments , the best model used starting point updating based on the response-defining feature ( Fig 12A , 12C and 12E , left panels ) ., In Experiments 1 and 2 , the trial samples shown were taken from blocks with an unequal ratio; so , for the starting point , the updating results are biased towards the ( correct ) response on the most frequent type of trial ( Fig 12A and 12C ) ., In Experiment 3 , the ratio was equal; so , while the starting point exhibits a small bias on most trials ( Fig 12E ) , it is equally often biased towards either response ., Since , in a block with unequal ratio , the starting point becomes biased towards the most frequent response , the model predicts that the average starting point to boundary separation for each response will be smaller in blocks in which that response is more frequent ., This predicts that RTs to a stimulus requiring a particular response should become faster with increasing frequency of that stimulus in the block , which is what we observed in our behavioral data ., In addition , since , after each trial , the updating rule moves the starting point towards the boundary associated with the response on that trial , the separation between the starting point and the boundary will be smaller on trials on which the same response was required on the previous trial , compared to a response switch ., This predicts faster RTs when the same response is repeated , in line with the pattern in the behavioral data ., The forgetting mechanism used in the best models ensures that such inter-trial effects will occur even after a long history of previous updates ., In Experiment 1 , the best model did not use any updating of the drift rate , but a different rate was used for each dimension and for target-absent trials ( Fig 12B ) ., In Experiment 
2 the best model updated the rate based on the \u2018Rate with decay\u2019 rule described above ., The rate is increased when the target-defining dimension is repeated , and decreased when the dimension switches , across trials , and these changes can build up over repetitions\/switches , though with some memory decay ( Fig 12D ) ., Since the target dimension was ( also ) the response-defining feature in Experiment 2 , the rate updating would contribute to the \u2018response-based\u2019 inter-trial effects ., In Experiment 3 , the best model involved the \u2018Weighted rate\u2019 rule ., Note that the rate tends to be below the baseline level ( dashed lines ) after switching from the other dimension , but grows larger when the same dimension is repeated ( Fig 12F ) ., This predicts faster RTs after a dimension repetition compared to a switch , which is what we observed in the behavioral data ., In three experiments , we varied the frequency distribution over the response-defining feature ( RDF ) of the stimulus in a visual pop-out search task , that is , target presence versus target absence ( Experiments 1 and 3 ) or , respectively , the dimension , color versus orientation , along which the target differed from the distractors ( Experiment 2 ) ., In both cases , RTs were overall faster to stimuli of that particular response-defining feature that occurred with higher frequency within a given trial block ., There were also systematic inter-trial \u2018history\u2019 effects: RTs were faster both when the response-defining feature and when the target-defining dimension repeated across trials , compared to when either of these changed ., Our results thus replicate previous findings of dimension repetition\/switch effects 6 , 9 ., In contrast to studies on \u2018priming of pop-out\u2019 ( PoP ) 3 , 32\u201334 , we did not find significant feature-based repetition\/switch effects ( consistent with 6 ) , except for Experiment 2 in which the target dimension was also the 
response-defining feature ., The dimension repetition\/switch effects that we observed were also not as \u2018long-term\u2019 compared to PoP studies , where significant feature \u2018priming\u2019 effects emerged from as far as eight trials back from the current trial ., There are ( at least ) two differences between the present study and the PoP paradigms , which likely contributed to these differential effect patterns ., First , we employed dense search displays ( with a total of 39 items , maximizing local target-to-non-target feature contrast ) , whereas PoP studies typically use much sparser displays ( e . g . , in the \u2018prototypical\u2019 design of Maljkovic & Nakayama 3 , 32\u201334 , 3 widely spaced items: one target and two distractors ) ., Second , the features of our distractors remained constant , whereas in PoP studies the search-critical features of the target and the distractors are typically swapped randomly across trials ., There is evidence indicating that , in the latter displays , the target is actually not the first item attended on a significant proportion of trials ( according to 35 , on some 20% up to 70% ) , introducing an element of serial scanning especially on feature swap trials on which there is a tendency for attention ( and the eye ) to be deployed to a distractor that happens to have the same ( color ) feature as the target on the previous trial ( for eye movement evidence , see , e . g . 
, 36 , 37 ) ., Given this happens frequently , feature checking would become necessary to ensure that it is the ( odd-one-out ) target item that is attended and responded to , rather than one of the distractors ., As a result , feature-specific effects would come to the fore , whereas these would play only a minor role when the target can be reliably found based on strong ( local ) feature contrast 38 ., For this reason , we opted to start our modeling work with designs that , at least in our hand , optimize pop-out ( see also 39 ) , focusing on simple target detection and \u2018non-compound\u2019 discrimination tasks in the first instance ., Another difference is that we used simple detection and \u2018non-compound\u2019 discrimination tasks in our experiments , while PoP experiments typically employ \u2018compound\u2019 tasks , in which the response-defining feature is independent of the target-defining feature ., We do not believe that the latter difference is critical , as reliable dimension repetition\/change effects have also been observed with compound-search tasks ( e . g . 
, 40 ) , even though , in terms of the final RTs , these are weaker compared to simple response tasks because they are subject to complex interactions arising at a post-selective processing stage ( see below and 41 , 42 ) ., To better understand the basis of the effects we obtained , we analyzed the shape of the RT distributions , using the modified LATER model 26 and the DDM 21 , 22 ., Importantly , in addition to fitting these models to the RT distribution across trials , we systematically compared and contrasted different rules of how two key parameters of the LATER\/DDM models\u2013the starting point ( S0 ) or the rate ( r ) of the evidence accumulation process\u2013might be dynamically adapted , or updated , based on trial history ., We assumed two aspects of the stimuli to be potentially relevant for updating the evidence accumulation parameters: the response-defining feature ( RDF ) and the target-defining dimension ( TDD; in Experiment 2 , RDF and TDD were identical ) ., Thus , in our full factorial model comparison , trial-by-trial updating was based on either the response-defining feature or the target dimension ( factor 1 ) , combined with updating of either the starting point or the rate of evidence accumulation ( factor 2 ) , with a number of different possible updating rules for each of these ( 6 factor levels each ) ., An additional factor ( factor 3 ) in our model comparison was the evidence accumulation model used to predict RT distributions: either the DDM or the LATER model ., Finally , to compare the DDM and LATER models on as equal terms as possible , we modified the original LATER model by adding a non-decision time component ., Thus , the fourth and final factor concerned w","headings":"Introduction, Results, Discussion, Methods","abstract":"Many previous studies on visual search have reported inter-trial effects , that is , observers respond faster when some target property , such as a defining feature or dimension , or the response 
associated with the target repeats versus changes across consecutive trial episodes ., However , what processes drive these inter-trial effects is still controversial ., Here , we investigated this question using a combination of Bayesian modeling of belief updating and evidence accumulation modeling in perceptual decision-making ., In three visual singleton ( \u2018pop-out\u2019 ) search experiments , we explored how the probability of the response-critical states of the search display ( e . g . , target presence\/absence ) and the repetition\/switch of the target-defining dimension ( color\/ orientation ) affect reaction time distributions ., The results replicated the mean reaction time ( RT ) inter-trial and dimension repetition\/switch effects that have been reported in previous studies ., Going beyond this , to uncover the underlying mechanisms , we used the Drift-Diffusion Model ( DDM ) and the Linear Approach to Threshold with Ergodic Rate ( LATER ) model to explain the RT distributions in terms of decision bias ( starting point ) and information processing speed ( evidence accumulation rate ) ., We further investigated how these different aspects of the decision-making process are affected by different properties of stimulus history , giving rise to dissociable inter-trial effects ., We approached this question by, ( i ) combining each perceptual decision making model ( DDM or LATER ) with different updating models , each specifying a plausible rule for updating of either the starting point or the rate , based on stimulus history , and, ( ii ) comparing every possible combination of trial-wise updating mechanism and perceptual decision model in a factorial model comparison ., Consistently across experiments , we found that the ( recent ) history of the response-critical property influences the initial decision bias , while repetition\/switch of the target-defining dimension affects the accumulation rate , likely reflecting an implicit \u2018top-down\u2019 
modulation process ., This provides strong evidence of a disassociation between response- and dimension-based inter-trial effects .","summary":"When a perceptual task is performed repeatedly , performance becomes faster and more accurate when there is little or no change of critical stimulus attributes across consecutive trials ., This phenomenon has been explored in previous studies on visual \u2018pop-out\u2019 search , showing that participants can find and respond to a unique target object among distractors faster when properties of the target are repeated across trials ., However , the processes that underlie these inter-trial effects are still not clearly understood ., Here , we approached this question by performing three visual search experiments and applying mathematical modeling to the data ., We combined models of perceptual decision making with Bayesian updating rules for the parameters of the decision making models , to capture the processing of visual information on each individual trial as well as possible mechanisms through which an influence can be carried forward from previous trials ., A systematic comparison of how well different combinations of models explain the data revealed the best model to assume that perceptual decisions are biased based on the response-critical stimulus property on recent trials , while repetition of the visual dimension in which the target differs from the distractors ( e . g . 
, color or orientation ) increases the speed of stimulus processing .","keywords":"learning, decision making, reaction time, social sciences, neuroscience, learning and memory, cognitive neuroscience, cognitive psychology, mathematics, probability distribution, computer vision, cognition, memory, vision, computer and information sciences, target detection, probability theory, psychology, biology and life sciences, sensory perception, physical sciences, cognitive science","toc":null} +{"Unnamed: 0":1855,"id":"journal.pbio.2005952","year":2018,"title":"Spatiotemporal coordination of cell division and growth during organ morphogenesis","sections":"The development of an organ from a primordium typically involves two types of processes: increase in cell number through division , and change in tissue shape and size through growth ., However , how these processes are coordinated in space and time is unclear ., It is possible that spatiotemporal regulation operates through a single control point: either on growth with downstream effects on division , or on division with downstream effects on growth ., Alternatively , spatiotemporal regulation could act on both growth and division ( dual control ) , with cross talk between them ., Distinguishing between these possibilities is challenging because growth and division typically occur in a context in which the tissue is continually deforming ., Moreover , because of the correlations between growth and division it can be hard to distinguish cause from effect 1 ., Plant development presents a tractable system for addressing such problems because cell rearrangements make little or no contribution to morphogenesis , simplifying analysis 2 ., A growing plant organ can be considered as a deforming mesh of cell walls that yields continuously to cellular turgor pressure 3 , 4 ., In addition to this continuous process of mesh deformation , new walls are introduced through cell division , allowing mesh strength to be maintained and 
limiting cell size ., It is thus convenient to distinguish between the continuous expansion and deformation of the mesh , referred to here as growth , and the more discrete process of introducing new walls causing increasing cell number , cell division 5\u20138 ., The developing Arabidopsis leaf has been used as a system for studying cell division control within a growing and deforming tissue ., Developmental snapshots of epidermal cells taken at various stages of leaf development reveal a complex pattern of cell sizes and shapes across the leaf , comprising both stomatal and non-stomatal lineages 9 ., Cell shape analysis suggests that there is a proximal zone of primary proliferative divisions that is established and then abolished abruptly ., Expression analysis of the cell cycle reporter construct cyclin1 Arabidopsis thaliana \u03b2-glucuronidase ( cyc1At-GUS ) 10 shows that the proximal proliferative zone extends more distally in the subepidermal as compared with the epidermal layer ., Analysis of the intensity of cyc1At-GUS , which combines both epidermal and subepidermal layers , led to a one-dimensional model in which cell division is restricted to a corridor of fixed length in the proximal region of the leaf 11 ., The division corridor is specified by a diffusible factor generated at the leaf base , termed mobile growth factor , controlled by expression of Arabidopsis cytochrome P450\/CYP78A5 ( KLUH ) ., Two-dimensional models have been proposed based on growth and cell division being regulated in parallel by a morphogen generated at the leaf base 12 , 13 ., These models assume either a constant cell area at division , or constant cell cycle duration ., The above models represent important advances in understanding the relationships between growth and division , but leave open many questions , such as the relations of divisions to anisotropic growth , variations along both mediolateral and proximodistal axes , variation between cell layers , variation 
between genotypes with different division patterns , and predictions in relation to mutants that modify organ size , cell numbers , and cell sizes 14 ., Addressing these issues can be greatly assisted through the use of live confocal imaging to directly quantify growth and division 15\u201322 ., Local rates and orientations of growth can be estimated by the rate that landmarks , such as cell vertices , are displaced away from each other ., Cell division can be monitored by the appearance of new walls within cells ., This approach has been used to measure growth rates and orientations for developing Arabidopsis leaves and has led to a tissue-level model for its spatiotemporal control 16 ., Live tracking has also been used to follow stomatal lineages and inform hypotheses for stomatal division control 23 ., It has also been applied during a late stage of wild-type leaf development after most divisions have ceased 24 ., However , this approach has yet to be applied across an entire leaf for extended periods to compare different cell layers and genotypes ., Here , we combine tracking and modelling of 2D growth in different layers of the growing Arabidopsis leaf to study how growth and division are integrated during organ morphogenesis ., We exploit the speechless ( spch ) mutant to allow divisions to be followed in the absence of stomatal lineages , and show how the distribution and rates of growth and cell division vary in the epidermal and subepidermal layers along the proximodistal and mediolateral axes and in time ., We further compare these findings to those of wild-type leaves grown under similar conditions ., Our results reveal spatiotemporal variation in both growth rates and cell properties , including cell sizes , shapes , and patterns of division ., By developing an integrated model of growth and division , we show how these observations can be accounted for by a model in which core components of both growth and division are under spatiotemporal control ., 
Varying parameters of this model illustrates how changes in organ size , cell size , and cell number are likely interdependent , providing a framework for evaluating growth and division mutants ., Tracking cell vertices on the abaxial epidermis of spch seedlings imaged at about 12-h intervals allowed cells at a given developmental stage to be classified into those that would undergo division ( competent to divide , green , Fig 1A ) , and those that did not divide for the remainder of the tracking period ( black , Fig 1A ) ., During the first time interval imaged ( Fig 1A , 0\u201314 h ) , division competence was restricted to the basal half of the leaf , with a distal limit of about 150 \u03bcm ( all distances are measured relative to the petiole-lamina boundary , Fig 1 ) ., To visualise the fate of cells at the distal limit , we identified the first row of nondividing cells ( orange ) and displayed them in all subsequent images ., During the following time intervals , the zone of competence extended together with growth of the tissue to a distance of about 300 \u03bcm , after which it remained at this position , while orange boundary cells continued to extend further through growth ., Fewer competent cells were observed in the midline region at later stages ., Thus , the competence zone shows variation along the proximodistal and mediolateral axes of the leaf , initially extending through growth to a distal limit of about 300 \u03bcm and disappearing earlier in the midline region ., To monitor execution of division , we imaged spch leaves at shorter intervals ( every 2 h ) ., At early stages , cells executed division when they reached an area of about 150 \u03bcm2 ( Fig 2A , 0\u201324 h ) ., At later stages , cells in the proximal lamina ( within 150 \u03bcm ) continued to execute division at about this cell area ( mean = 151 \u00b1 6 . 
5 \u03bcm2 , Fig 2B ) , while those in the more distal lamina or in the midline region executed divisions at larger cell areas ( mean = 203 \u00b1 9 . 7 \u03bcm2 or 243 . 0 \u00b1 22 . 4 \u03bcm2 , respectively , Fig 2A , 2B and 2D ) ., Cell cycle duration showed a similar pattern , being lowest within the proximal 150 \u03bcm of the lamina ( mean = 13 . 9 \u00b1 0 . 8 h ) and higher distally ( mean = 19 . 4 \u00b1 1 . 8 h ) or in the midline region ( 18 . 9 \u00b1 2 . 1 h , Fig 2C and 2E ) ., Within any given region , there was variation around both the area at time of division execution and the cell cycle duration ( Fig 2F and 2G ) ., For example , the area at execution of division within the proximal 150 \u03bcm of the lamina had a mean of about 150 \u03bcm2 , with standard deviation of about 40 \u03bcm2 ( Fig 2F ) ., The same region had a cell cycle duration with a mean of about 14 h and a standard deviation of about 3 h ., Thus , both the area at which cells execute division and cycle duration show variation around a mean , and the mean varies along the proximodistal and mediolateral axes of the leaf ., These findings suggest that models in which either cell area at the time of division or cell cycle duration are fixed would be unable to account for the observed data ., To determine how cell division competence and execution are related to leaf growth , we measured areal growth rates ( relative elemental growth rates 25 ) for the different time intervals , using cell vertices as landmarks ( Fig 1B ) ., Areal growth rates varied along both the mediolateral and proximodistal axis of the leaf , similar to variations observed for competence and execution of division ., The spatiotemporal variation in areal growth rate could be decomposed into growth rates in different orientations ., Growth rates parallel to the midline showed a proximodistal gradient , decreasing towards the distal leaf tip ( Fig 1C and S1A Fig ) ., By contrast , mediolateral growth was highest 
in the lateral lamina and declined towards the midline , becoming very low there in later stages ( Fig 1D and S1B Fig ) ., The region of higher mediolateral growth may correspond to the marginal meristem 26 ., Regions of low mediolateral growth ( i . e . , the proximal midline ) showed elongated cell shapes ., Models for leaf growth therefore need to account not only for the spatiotemporal pattern of areal growth rates but also the pattern of anisotropy ( differential growth in different orientations ) and correlated patterns of cell shape ., Cell size should reflect both growth and division: growth increases cell size while division reduces cell size ., Cell periclinal areas were estimated from tracked vertices ( Fig 1E ) ., Segmenting a sample of cells in 3D showed that these cell areas were a good proxy for cell size , although factors such as leaf curvature introduced some errors ( for quantifications see S5 Fig , and \u2018Analysis of cell size using 3D segmentation\u2019 in Materials and methods ) ., At the first time point imaged , cell areas were about 100\u2013200 \u03bcm2 throughout most of the leaf primordium ( Fig 1E , left ) ., Cells within the proximal 150 \u03bcm of the lamina remained small at later stages , reflecting continued divisions ., In the proximal 150\u2013300 \u03bcm of the lamina , cells were slightly larger , reflecting larger cell areas at division execution ., Lamina cells distal to 300 \u03bcm progressively enlarged , reflecting the continued growth of these nondividing cells ( Fig 1E and Fig 3A ) ., Cells in the midline region were larger on average than those in the proximal lamina , reflecting execution of division at larger cell areas ( Fig 1E and Fig 3C ) ., Thus , noncompetent cells increase in area through growth , while those in the competence zone retain a smaller size , with the smallest cells being found in the most proximal 150 \u03bcm of the lateral lamina ., Visual comparison of areal growth rates ( Fig 2B ) with 
cell sizes ( Fig 2E ) suggested that regions with higher growth rates had smaller cell sizes ., Plotting areal growth rates against log cell area confirmed this impression , revealing a negative correlation between growth rate and cell size ( Fig 4B ) ., Thus , rapidly growing regions tend to undergo more divisions ., This relationship is reflected in the pattern of division competence: mean areal growth rates of competent cells in the lamina were higher than noncompetent cells , particularly at early stages ( Fig 3I ) ., However , there was no fixed threshold growth rate above which cells were competent , and for the midline region there was no clear difference between growth rates of competent and noncompetent cells ( Fig 3I ) ., Plotting areal growth rates for competent and noncompetent cells showed considerable overlap ( S6 Fig ) , with no obvious switch in growth rate when cells no longer divide ( become noncompetent ) ., Thus , high growth rate broadly correlates with division competence , but the relationship is not constant for different regions or times ., To determine how the patterns and correlations observed for the epidermis compared to those in other tissues , we analysed growth and divisions in the subepidermis ., The advantage of analysing an adjacent connected cell layer is that unless intercellular spaces become very large , the planar cellular growth rates will be very similar to those of the attached epidermis ( because of tissue connectivity and lack of cell movement ) ., Comparing the epidermal and subepidermal layers therefore provides a useful system for analysing division behaviours in a similar spatiotemporal growth context ., Moreover , by using the spch mutant , one of the major distinctions in division properties between these layers ( the presence of stomatal lineages in the epidermis ) is eliminated ., Divisions in the abaxial subepidermis were tracked by digitally removing the overlying epidermal signal ( the distalmost subepidermal 
cells could not be clearly resolved ) ., As with the epidermis , 3D segmentation showed that cell areas were a good proxy for cell size , although average cell thickness was greater ( S11 Fig , see also \u2018Analysis of cell size using 3D segmentation\u2019 in Materials and methods ) ., Unlike the epidermis , intercellular spaces were observed for the subepidermis ., As the tissue grew , subepidermal spaces grew and new spaces formed ( Fig 5A\u20135D ) ., Similar intercellular spaces were observed in subepidermal layers of wild-type leaves , showing they were not specific to spch mutants ( S8 Fig ) ., Vertices and intercellular spaces in the subepidermis broadly maintained their spatial relationships with the epidermal vertices ( Fig 5C , 5E and 5F ) ., Comparing the cellular growth rates in the plane for a patch of subepidermis with the adjacent epidermis showed that they were similar ( S9 Fig ) , although the subepidermal rates were slightly lower because of the intercellular spaces ., This correlation is expected , because unless the intercellular spaces become very large , the areal growth rates of the epidermal and subepidermal layers are necessarily similar ., The most striking difference between subepidermal and epidermal datasets was the smaller size of the distal lamina cells of the subepidermis ( compare Fig 6A with Fig 1E , and Fig 3A with Fig 3B ) ., For the epidermis , these cells attain areas of about 1 , 000 \u03bcm2 at later stages , while for the subepidermis they remain below 500 \u03bcm2 ., This finding was consistent with the subepidermal division competence zone extending more distally ( Fig 6B ) , reaching a distal limit of about 400 \u03bcm compared with 300 \u03bcm for the epidermis ., A more distal limit for the subepidermis has also been observed for cell cycle gene expression in wild type 10 ., Moreover , at early stages , divisions occurred throughout the subepidermis rather than being largely proximal , as observed in the epidermis , 
further contributing to the smaller size of distal subepidermal cells ( S10 Fig ) ., Despite these differences in cell size between layers , subepidermal cell areal growth rates showed similar spatiotemporal patterns to those of the overlying epidermis , as expected because of tissue connectivity ( compare Fig 6C with Fig 1B ) ., Consequently , correlations between growth rate and cell size were much lower for the subepidermis than for the epidermis ( Fig 4B and 4C ) ., This difference in the relationship between growth and cell size in different cell layers was confirmed through analysis of cell division competence ., In the subepidermis , at early stages there was no clear difference between mean growth rates for competent and noncompetent cells ( Fig 3J cyan , green ) , in contrast to what is observed in the epidermis ( Fig 3I cyan , green ) , while at later stages noncompetent cells had a slightly lower growth rate ( Fig 3J yellow , red ) ., To determine how the patterns of growth and division observed in spch related to those in wild type , we imaged a line generated by crossing a spch mutant rescued by a functional SPCH protein fusion ( pSPCH:SPCH-GFP ) to wild type expressing the PIN3 auxin transporter ( PIN3:PIN3-GFP ) , which marks cell membranes in the epidermis 23 ., The resulting line allows stomatal lineage divisions to be discriminated from non-stomatal divisions ( see below ) in a SPCH context ., At early stages , wild-type and spch leaves were not readily distinguishable based on cell size ( S12 Fig ) ., However , by the time leaf primordia attained a width of about 150 \u03bcm , the number and size of cells differed dramatically ., Cell areas in wild type were smaller in regions outside the midline region , compared with corresponding cells in spch ( Fig 7A ) ., Moreover , cell divisions in wild type were observed throughout the lamina that was amenable to tracking ( Fig 7B , 0\u201312 h ) , rather than being largely proximal ., Divisions were 
observed over the entire lamina for subsequent time intervals , including regions distal to 300 \u03bcm ( Fig 7B , 12\u201357 h ) ., These results indicate that SPCH can confer division competence in epidermal cells outside the proximal zone observed in spch mutants ., To further clarify how SPCH influences cell division , we used SPCH-GFP signal to classify wild-type cells into two types: ( 1 ) Stomatal lineage divisions , which include both amplifying divisions ( cells express SPCH strongly around the time of division and retain expression in one of the daughter cells ) ( S1 Video , orange\/yellow in Fig 7C ) and guard mother cell divisions ( SPCH expression is bright and diffuse during the first hours of the cycle , transiently switched on around time of division , and then switched off in both daughters ) ., ( 2 ) Non-stomatal divisions , in which SPCH expression is much weaker , or only lasts <2 h , and switches off in both daughter cells ( S2 Video , light\/dark green in Fig 7C ) ., If cells with inactive SPCH behave in a similar way in wild-type or spch mutant contexts , we would expect non-stomatal divisions to show similar properties to divisions in the spch mutant ., In the first time interval , non-stomatal divisions ( green ) were observed within the proximal 150 \u03bcm ( Fig 7C , 0\u201312 h ) , similar to the extent of the competence zone in spch ( Fig 1A , 0\u201314h ) ., The zone of non-stomatal divisions then extended to about 250 \u03bcm and became restricted to the midline region ., After leaf width was greater than 0 . 45 mm , we did not observe further non-stomatal divisions in the midline region , similar to the situation in spch leaves at a comparable width ( Fig 1A , 58-74h , 0 . 
48 mm ) ., These results suggest that similar dynamics occur in the non-stomatal lineages of wild type and the spch mutant ., To determine how SPCH modulates division , we analysed stomatal and non-stomatal divisions in the lamina ., Considerable variation was observed for both the area at which cells divide ( 25\u2013400 \u03bcm2 ) and cell cycle duration ( 8\u201350 h ) ( S13 Fig ) ., The mean area at which cells execute division was greater for non-stomatal divisions ( about 165 \u00b1 28 \u03bcm2 , 1 . 96 \u00d7 standard error ) than stomatal divisions ( about 80 \u00b1 6 \u03bcm2 ) ( S13 Fig ) ., Similarly , cell cycle durations were longer for non-stomatal divisions ( about 25 \u00b1 3 h ) compared with stomatal divisions ( about 18 \u00b1 1 h ) ., These results suggest that in addition to conferring division competence , SPCH acts cell autonomously to promote division at smaller cell sizes and\/or for shorter cell cycle durations ., Given the alteration in cell sizes and division patterns in wild type compared to spch , we wondered if these may reflect alterations in growth rates ., When grown on agar plates , spch mutant leaves grow more slowly than wild-type leaves ( S14A Fig ) ., The slower growth of spch could reflect physiological limitations caused by the lack of stomata , or an effect of cell size on growth\u2014larger cells in spch cause a slowing of growth ., However , the tracking data and cell size analysis of spch and wild type described above were carried out on plants grown in a bio-imaging chamber in which nutrients were continually circulated around the leaves ., Growth rates for wild type and spch leaves grown in these conditions were comparable for much of early development , and similar to those observed for wild type on plates ( compare Fig 7D with Fig 1B , S14 Fig ) ., These results suggest that the reduced growth rates of spch compared with wild type at early stages on plates likely reflect physiological impairment caused by a lack of 
stomata rather than differences in cell size ., As a further test of this hypothesis , we grew fama ( basic helix-loop-helix transcription factor bHLH097 ) mutants , as these lack stomata but still undergo many stomatal lineage divisions 27 ., We found that fama mutants attained a similar size to spch mutants on plates , consistent with the lack of stomata being the cause of reduced growth in these conditions ( S14 Fig ) ., Plots of cell area against growth rates of tracked leaves grown in the chamber showed that , for similar growth rates , cells were about three times smaller in wild type compared with spch ( compare Fig 4A with Fig 4B ) ., Thus , the effects of SPCH on division can be uncoupled from effects on growth rate , at least at early stages of development ., At later stages ( after leaves were about 1 mm wide ) , spch growth in the bio-imaging chamber slowed down compared with wild type , and leaves attained a smaller final size ., This later difference in growth rate might be explained by physiological impairment of spch because of the lack of stomata , and\/or by feedback of cell size on growth rates ., This change in later behaviour may reflect the major developmental and transcriptional transition that occurs after cell proliferation ceases 9 ., The above results reveal that patterns of growth rate , cell division , and cell size and shape exhibit several features in spch: ( 1 ) a proximal corridor of cell division competence , with an approximately fixed distal limit relative to the petiole-lamina boundary; ( 2 ) the distal limit is greater for subepidermal ( 400 \u03bcm ) than epidermal tissue ( 300 \u03bcm ) ; ( 3 ) a further proximal restriction of division competence in the epidermis at early stages that extends with growth until the distal limit of the corridor ( 300 \u03bcm ) is reached; ( 4 ) larger and narrower cells in the proximal midline region of the epidermis; ( 5 ) a proximodistal gradient in cell size in the epidermal lamina; ( 6 ) a 
negative correlation between cell size and growth rate that is stronger in the epidermis than subepidermis; ( 7 ) variation in both the size at which cells divide and cell cycle duration along both the proximodistal and mediolateral axes; and ( 8 ) variation in growth rates parallel or perpendicular to the leaf midline ., In wild-type plants , these patterns are further modulated by the expression of SPCH , which leads to division execution at smaller cell sizes and extension of competence , without affecting growth rates at early stages ., Thus , growth and division rates exhibit different relations in adjacent cell layers , even in spch , in which epidermal-specific stomatal lineages are eliminated , and division patterns can differ between genotypes ( wild type and spch ) without an associated change in growth rates ., These observations argue against spatiotemporal regulators acting solely on the execution of division , which then influences growth , as this would be expected to give conserved relations between division and growth ., For the same reason , they argue against a single-point-of-control model in which spatiotemporal regulators act solely on growth , which then secondarily influences division ., Instead , they suggest dual control , with spatiotemporal regulators acting on both growth and division components ., With dual control , growth and division may still interact through cross-dependencies , but spatiotemporal regulation does not operate exclusively on one or the other ., To determine how a hypothesis based on dual control may account for all the observations , we used computational modelling ., We focussed on the epidermal and subepidermal layers of the spch mutant , as these lack the complications of stomatal lineages ., For simplicity and clarity , spatiotemporal control was channelled through a limited set of components for growth and division ( Fig 8A ) ., There were two components for growth under spatiotemporal control: specified growth 
rates parallel and perpendicular to a proximodistal polarity field ( Kpar and Kper , respectively ) 16 ., Together with mechanical constraints of tissue connectivity , these specified growth components lead to a pattern of resultant growth and organ shape change 28 ., There were two components for cell division under spatiotemporal control: competence to divide ( CDIV ) , and a threshold area for division execution that varies around a mean ( \u0100 ) ., Controlling division execution by a threshold cell size ( \u0100 ) introduces a cross-dependency between growth and division , as cells need to grow to attain the local threshold size before they can divide ., The cross-dependency is indicated by the cyan arrow in Fig 8A , feeding information back from cell size ( which depends on both growth and division ) to division ., An alternative to using \u0100 as a component of division-control might be to use a mean cell cycle duration threshold ., However , this would bring in an expected correlation between high growth rates and large cell sizes ( for a given cell cycle duration , a faster-growing cell will become larger before cycle completion ) , which is the opposite trend of what is observed ., Spatiotemporal regulators of growth and division components can be of two types: those that become deformed together with the tissue as it grows ( fixed to the tissue ) and those that maintain their pattern to some extent despite deformation of the tissue by growth ( requiring mobile or diffusible factors ) 28 ., In the previously published growth model , regulatory factors were assumed , for simplicity , to deform with the tissue as it grows 16 ., These factors comprised a graded proximodistal factor ( PGRAD ) , a mediolateral factor ( MID ) , a factor distinguishing lamina from petiole ( LAM ) , and a timing factor ( LATE ) ( S15A and S15B Fig ) ., However , such factors cannot readily account for domains with limits that remain at a constant distance from the 
petiole-lamina boundary , such as the observed corridors for division competence ., This is because the boundary of a domain that is fixed to the tissue will extend with the tissue as it grows ., We therefore introduced a mobile factor , proximal mobile factor ( PMF ) , that was not fixed to the tissue to account for these behaviours ., This approach is similar to that employed by others 11\u201313 ., PMF was generated at the petiole-lamina boundary with appropriate diffusion and decay coefficients such that PMF initially filled the primordium and then showed a graded distribution as the primordium grew larger , maintaining a high concentration in the proximal region and decreasing towards the leaf tip ( S15C and S15D Fig ) ., This profile was maintained despite further growth , allowing thresholds to be used to define domains with relatively invariant distal limits ., Further details of the growth model are given in Materials and methods , and the resultant growth rates are shown in S16 Fig ( compare with Fig 1B and 1D ) ., Cells were incorporated by superimposing polygons on the initial tissue or canvas ( S15A Fig , right ) ., The sizes and geometries of these virtual cells ( v-cells ) were based on cells observed at corresponding stages in confocal images of leaf primordia 16 ., The vertices of the v-cells were anchored to the canvas and displaced with it during growth ., Cells divided according to Errera\u2019s rule: the shortest wall passing through the centre of the v-cell 29 , with noise in positioning of this wall incorporated to capture variability ., V-cells were competent to divide if they expressed factor CDIV , and executed division when reaching a mean cell target area , \u0100 ., As the observed area at time of division was not invariant ( Fig 2F ) , we assumed the threshold area for division varied according to a standard deviation of \u03c3 = 0 . 
2\u0100 around the mean ., CDIV and \u0100 are the two core components of division that are under the control of spatiotemporal regulators in the model ( Fig 8A , 8C and 8D ) ., Variation between epidermal and subepidermal patterns reflects different interactions controlling cell division ( interactions colour coded red and blue , respectively , in Fig 8C and 8D ) ., We first modelled cell divisions in the subepidermis , as this layer shows a more uniform pattern of cell sizes ( Fig 3B and Fig 6A ) ., Formation of intercellular spaces was simulated by replacing a random selection of cell vertices with small empty equilateral triangles , which grew at a rate of 2 . 5% h\u22121 , an average estimated from the tracking data ., To account for the distribution of divisions and cell sizes , we assumed that v-cells were competent to divide ( express CDIV ) where PMF was above a threshold value ., This value resulted in the competence zone extending to a distal limit of about 400 \u03bcm ., To account for the proximodistal pattern of cell areas in the lamina ( Fig 3B and Fig 6A ) and larger cells in the midline ( Fig 3D and Fig 6A ) , we assumed that \u0100 was modulated by the levels of PMF , PGRAD , and MID ( Fig 8D , black and blue ) ., These interactions gave a pattern of average v-cell areas and division competence that broadly matched those observed ( compare Fig 8E and 8F with Fig 6A and 6B , and Fig 3F and 3H with 3B and 3D , S3 Video ) ., For the epidermis , the zone of division competence was initially in the proximal region of the primordium and then extended with the tissue as it grew ( Fig 1A ) ., We therefore hypothesised that in addition to division being promoted by PMF , there was a further requirement for a proximal factor that extended with the tissue as it grew ., We used PGRAD to achieve this additional level of control , assuming CDIV expression requires PGRAD to be above a threshold level ( Fig 8C , red and black ) ., V-cells with PGRAD below this 
threshold were not competent to divide , even in the presence of high PMF ., Thus , at early stages , when PMF was high throughout the primordium , the PGRAD requirement restricted competence to the proximal region of the leaf ( Fig 8H ) ., At later stages , as the PGRAD domain above the threshold extended beyond 300 \u03bcm , PMF became limiting , preventing CDIV from extending beyond about 300 \u03bcm ., To account for the earlier arrest of divisions in the midline region ( Fig 1A ) , CDIV was inhibited by MID when LATE reached a threshold value ( Fig 8C , red ) ., As well as CDIV being regulated , the spatiotemporal pattern of \u0100 was modulated by factors MID and PMF ( Fig 8D black ) ., With these assumptions , the resulting pattern of epidermal divisions and v-cell sizes broadly matched those observed experimentally for the epidermis ( compare Fig 8G with Fig 1E , S4 Video ) ., In particular , the model accounted for the observed increases in cell sizes with distance from the petiole-lamina boundary , which arise because of the proximal restrictions in competence ( compare Fig 3E and 3G with Fig 3A and 3C ) ., The model also accounted for the elongated cell shapes observed in the midline region , which arise through the arrest of division combined with low specified growth rate perpendicular to the polarity ., Moreover , the negative correlations between growth rates and cell size , not used in developing the model , were similar to those observed experimentally ( Fig 4B and 4D ) ., These correlations arise because both growth and division are promoted in proximal regions ., We also measured the cell topology generated by the epidermal model ., It has previously been shown that the frequency of six-sided neighbours observed experimentally for the spch leaf epidermis is very low compared with that for other plant and animal tissues and also with that generated by a previous implementation","headings":"Introduction, Results, Discussion, Materials and 
methods","abstract":"A developing plant organ exhibits complex spatiotemporal patterns of growth , cell division , cell size , cell shape , and organ shape ., Explaining these patterns presents a challenge because of their dynamics and cross-correlations , which can make it difficult to disentangle causes from effects ., To address these problems , we used live imaging to determine the spatiotemporal patterns of leaf growth and division in different genetic and tissue contexts ., In the simplifying background of the speechless ( spch ) mutant , which lacks stomatal lineages , the epidermal cell layer exhibits defined patterns of division , cell size , cell shape , and growth along the proximodistal and mediolateral axes ., The patterns and correlations are distinct from those observed in the connected subepidermal layer and also different from the epidermal layer of wild type ., Through computational modelling we show that the results can be accounted for by a dual control model in which spatiotemporal control operates on both growth and cell division , with cross-connections between them ., The interactions between resulting growth and division patterns lead to dynamic distributions of cell sizes and shapes within a deforming leaf ., By modulating parameters of the model , we illustrate how phenotypes with correlated changes in cell size , cell number , and organ size may be generated ., The model thus provides an integrated view of growth and division that can act as a framework for further experimental study .","summary":"Organ morphogenesis involves two coordinated processes: growth of tissue and increase in cell number through cell division ., Both processes have been analysed individually in many systems and shown to exhibit complex patterns in space and time ., However , it is unclear how these patterns of growth and cell division are coordinated in a growing leaf that is undergoing shape changes ., We have addressed this problem using live imaging to 
track growth and cell division in the developing leaf of the mustard plant Arabidopsis thaliana ., Using subsequent computational modelling , we propose an integrated model of leaf growth and cell division , which generates dynamic distributions of cell size and shape in different tissue layers , closely matching those observed experimentally ., A key aspect of the model is dual control of spatiotemporal patterns of growth and cell division parameters ., By modulating parameters in the model , we illustrate how phenotypes may correlate with changes in cell size , cell number , and organ size .","keywords":"skin, cell physiology, plant anatomy, medicine and health sciences, integumentary system, cell division analysis, cell cycle and cell division, cell processes, brassica, cell polarity, plant science, model organisms, network analysis, experimental organism systems, epidermis, bioassays and physiological analysis, seedlings, plants, research and analysis methods, arabidopsis thaliana, computer and information sciences, cell analysis, animal studies, regulatory networks, leaves, eukaryota, plant and algal models, cell biology, anatomy, biology and life sciences, organisms","toc":null} +{"Unnamed: 0":54,"id":"journal.pcbi.1000090","year":2008,"title":"CSMET: Comparative Genomic Motif Detection via Multi-Resolution Phylogenetic Shadowing","sections":"We concern ourselves with uncovering motifs in eukaryotic cis-regulatory modules ( CRM ) from multiple evolutionarily related species , such as the members from the Drosophila clade ., Due to high degeneracy of motif instances , and complex motif organization within the CRMs , pattern-matching-based motif search in higher eukaryotes remains a difficult problem , even when representations such as the position weight matrices ( PWMs ) of the motifs are given ., Extant methods that operate on a single genome or simpler organisms such as yeast often yield a large number of false positives , especially when the sequence to be 
examined spans a long region ( e . g . , tens of thousands of bps ) beyond the basal promoters , where possible CRMs could be located ., As in gene finding , having orthologous sequences from multiple evolutionarily related taxa can potentially benefit motif detection because a reasonable alignment of these sequences could enhance the contrast of sequence conservation in motifs with respect to that of the non-motif regions ., However , the alignment quality of non-coding regions is usually significantly worse than that of the coding regions , so that the aligned motif sequences are not reliably orthologous ., This is often unavoidable even for the best possible local alignment software because of the short lengths and weak conservation of TFBSs ., When applying a standard shadowing model on such alignments , motif instances aligned with non-orthologous sequences or gaps can be hard to identify due to the low overall shadowing score of the aligned sequences ( Figure 1A ) ., In addition to the incomplete orthology due to imperfect alignment , a more serious concern comes from a legitimate uncertainty over the actual functional orthology of regions that are alignment-wise orthologous ., A number of recent investigations have shown that TFBS loss and gain are fairly common events during genome evolution 8 , 12 ., For example , Patel et al 13 showed that aligned \u201cmotif sites\u201d in orthologous CRMs in the Drosophila clade may have varying functionality in different taxa ., Such cases usually occur in regions with reduced evolutionary constraints , such as regions where motifs are abundant , or near a duplication event ., The sequence dissimilarities of CRMs across taxa include indel events in the spacers , as well as gains and losses of binding sites for TFs such as the bcd-3 and hb-1 motifs in the even-skipped stripe 2 ( eve2 ) ( Figure 1B ) ., A recent statistical analysis of the Zeste binding sites in several Drosophila taxa also revealed existence of large-scale 
functional turnover 12 ., Nevertheless , the fact that sequence similarity is absent does not necessarily mean that the overall functional effect of the CRM as a whole is vastly different ., In fact , for the Drosophila clade , despite the substantial sequence dissimilarity in gap-gene CRMs such as eve2 , the expression of these gap genes shows similar spatio-temporal stripe patterns across the taxa 8 , 13 ., Although a clear understanding of the evolutionary dynamics underlying such inter- and intra-taxa diversity is still lacking , it is hypothesized that regulatory sequences such as TFBSs and CRMs may undergo adaptive evolution via stabilizing selections acting synergistically on different loci within the sequence elements 8 , 12 , which causes site evolution to be non-iid and non-isotropic across all taxa ., In such a scenario , it is crucial to be able to model the evolution of biological entities not only at the resolution of individual nucleotides , but also at more macroscopic levels , such as the functionality of whole sequence elements such as TFBSs over lineages ., To our knowledge , so far there have been few attempts along this line , especially in the context of motif detection ., The CSMET model presented in this paper intends to address this issue ., Orthology-based motif detection methods developed so far are mainly based on nucleotide-level conservation ., Some of the methods do not resort to a formal evolutionary model 14 , but are guided by either empirical conservation measures 15\u201317 , such as parsimonious substitution events or window-based nucleotide identity , or by empirical likelihood functions not explicitly modeling sequence evolution 4 , 18 , 19 ., The advantage of these non-phylogeny based methods lies in the simplicity of their design , and their non-reliance on strong evolutionary assumptions ., However , since they do not correspond to explicit evolutionary models , their utility is restricted to purely pattern search , and not 
for analytical tasks such as ancestral inference or evolutionary parameter estimation ., Some of these methods employ specialized heuristic search algorithms that are difficult to scale up to multiple species , or generalize to aligned sequences with high divergence ., Phylogenetic methods such as EMnEM 20 , MONKEY 21 , and our in-house implementation of PhyloHMM ( originally implemented in 1 for gene finding , but in our own version tailored for motif search ) explicitly adopt a complete and independent shadowing model at the nucleotide level ., These methods are all based on the assumption of homogeneity of functionality across orthologous nucleotides , which is not always true even among relatively closely related species ( e . g . , of divergence less than 50 mya in Drosophila ) ., Empirical estimation and simulation of turnover events is an emerging subject in the literature 12 , 22 , but to our knowledge , no explicit evolutionary model for functional turnover has been proposed and brought to bear in comparative genomic search of non-conserved motifs ., Thus our CSMET model represents an initial foray in this direction ., Closely related to our work , two recent algorithms , rMonkey 12\u2014an extension over the MONKEY program , and PhyloGibbs 9\u2014a Gibbs sampling based motif detection algorithm , can also explicitly account for differential functionality among orthologs , both using the technique of shuffling or reducing the input alignment to create well conserved local subalignments ., But in both methods , no explicit functional turnover model has been used to infer the turnover events ., Another recent program , PhyME 10 , partially addresses the incomplete orthology issue via a heuristic that allows motifs only present in a pre-chosen reference taxon to be also detectable , but it is not clear how to generalize this ability to motifs present in arbitrary combination of other taxa , and so far no well-founded evolutionary hypothesis and model is 
provided to explain the heuristic ., Non-homogeneous conservation due to selection across aligned sites has also been studied in DLESS 23 and PhastCons 24 , but unlike in CSMET , no explicit substitution model for lineage-specific functional evolution was used in these algorithms , and the HMM-based model employed there makes it computationally much more expensive than CSMET to systematically explore all possible evolutionary hypotheses ., A notable work in the context of protein classification proposed a phylogenomic model over protein functions , which employs a regression-like functional to model the evolution of protein functions represented as feature vectors along lineages in a complete phylogeny 25 , but such ideas have not been explored so far for comparative genomic motif search ., Various nucleotide substitution models , including the Jukes-Cantor 69 ( JC69 ) model 26 , and the Felsenstein 81 ( F81 ) model 27 , have been employed in current phylogenetic shadowing or footprinting algorithms ., PhyloGibbs and PhyME use an analogue of F81 proposed in 28 , which is one of the simplest models to handle arbitrary stationary distributions , necessary to model various specific PWMs of motifs ., Both PhyME and PhyloGibbs also offer an alternative to use a simplified star-phylogeny to replace the phylogenetic tree when dealing with a large number of taxa , which corresponds to an even simpler substitution process ., Our CSMET model differs from these existing methods in several important ways ., First , it uses a different evolutionary model based on a coupled-set of both functional and nucleotide substitution processes , rather than a single nucleotide substitution model to score every alignment block ., Second , it uses a more sophisticated and popular nucleotide substitution process based on the Felsenstein84 ( F84 ) model 29 , which captures the transition\/transversion bias ., Third , it employs a hidden Markov model that explicitly models autocorrelation of 
evolutionary rates on successive sites in the genome ., Fourth , it uses an efficient deterministic inference algorithm that is linear in the length of the input sequence and either exponential ( under a full functional phylogeny ) or linear ( under a star-shaped functional phylogeny ) in the number of aligned taxa , rather than the Monte Carlo or heuristic search algorithms that require long convergence times ., Essentially , CSMET is a context-dependent probabilistic graphical model that allows a single column in a multiple alignment to be modeled by multiple evolutionary trees conditioned on the functional specifications of each row ( i . e . , the functional identity of a substring in the corresponding taxon ) ( Figure 2 ) ., When conjoined with a hidden Markov model that auto-correlates the choices of different evolutionary rates on the phylogenetic trees at different sites , we have a stochastic generative model of phylogenetically related CRM sequences that allows both binding site turnover in arbitrary subsets of taxa , and coupling of evolutionary forces at different sites based on the motif organizations within CRMs ., Overall , CSMET offers an elegant and efficient way to take into consideration complex evolutionary mechanisms of regulatory sequences during motif detection ., When such a model is properly trained on annotated sequences , it can be used for comparative genomic motif search in all aligned taxa based on a posterior probabilistic inference algorithm ., This model can also be used for de novo motif finding , as can programs such as PhyloGibbs and PhyME , with a straightforward extension of the inference procedure that couples the training and prediction routines in an expectation-maximization ( EM ) iteration on unannotated sequence alignments ., In this paper , we focus on supervised motif search in higher eukaryotic genomes ., We compare CSMET with representative competing algorithms , including EMnEM , PhyloHMM , PhyloGibbs , and a 
mono-genomic baseline Stubb ( which uses an HMM on single species ) on both simulated data , and a pre-aligned Drosophila dataset containing 14 developmental CRMs for 11 aligned Drosophila species ., Annotations for motif occurrences in D . melanogaster of 5 gap-gene TFs - Bicoid , Caudal , Hunchback , Kruppel and Knirps - were obtained from the literature ., We show that CSMET outperforms the other methods on both synthetic and real data , and identifies a number of previously unknown occurrences of motifs within and near the study CRMs ., The CSMET program , the data used in this analysis , and the predicted TFBS in Drosophila sequences , are available for download at http:\/\/www . sailing . cs . cmu . edu\/csmet\/ ., At present , biologically validated orthologous motifs and CRMs across multiple taxa are extremely rare in the literature ., In most cases , motifs and CRMs are only known in some well-studied reference taxa such as the Drosophila melanogaster; and their orthologs in other species are deduced from multiple alignments of the corresponding regulatory sequences from these species according to the positions and PWMs of the \u201creference motifs\u201d in the reference taxon ., This is a process that demands substantial manual curation and biological expertise; rarely are the outcomes from such analysis validated in vivo ( but see 8 for a few such validations in some selected Drosophila species where the transgenic platforms have been successfully developed ) ., At best , these real annotations would give us a limited number of true positives across taxa , but they are not suitable for a systematic performance evaluation based on precision and recall over true motif instances ., Thus we first compare CSMET with a carefully chosen collection of competing methods on simulated CRM sequences , where the motif profiles across all taxa are completely known ., We choose to compare CSMET with 3 representative algorithms for comparative genomic motif search , 
PhyloGibbs , EMnEM , PhyloHMM; and the program Stubb , which is specialized for motif search in eukaryotic CRMs , and in our paper , set to operate in mono-genomic mode ., The rationale for choosing these 4 benchmarks is detailed in the Materials and Methods ., We applied CSMET and competing methods to a multi-specific dataset of Drosophila early developmental CRMs and motifs compiled from the literature 38 ., However , in this situation , we score accuracy only on the motifs annotated in Drosophila melanogaster ( rather than in all taxa ) , because they are the only available gold-standard ., Upon concluding this section , we also report some interesting findings by CSMET of putative motifs , some of which only exist in other taxa and do not have known counterparts in melanogaster ., CSMET is a novel phylogenetic shadowing method that can model biological sequence evolution at both the nucleotide level of each individual site and the functional level of a whole TFBS ., It offers a principled way of addressing the problem that can seriously compromise the performance of many extant conservation-based motif finding algorithms: motif turnover in aligned CRM sequences from different species , an evolutionary event that results in functional heterogeneity across aligned sequence entities and shatters the basis of conventional alignment scoring methods based on a single function-specific phylogeny ., CSMET defines a new evolution-based score that explicitly models functional substitution along the phylogeny that causes motif turnover , and nucleotide divergence of aligned sites in each taxon under possibly different function-specific phylogenies conditioned on the turnover status of the site in each taxon ., In principle , CSMET can be used to estimate the rate of turnover of different motifs , which can elucidate the history and dynamics of functional diversification of regulatory binding sites ., But we notice that experimentally validated multi-species CRM\/TFBS annotations 
that support an unbiased estimate of turnover rates are yet to be generated , as currently almost all biologically validated motifs only exist in a small number of representative species in each clade of the tree of life , such as melanogaster in the Drosophila clade ., Manual annotation on CRM alignments , as we used in this paper , tends to bias the model toward conserved motifs ., Thus , at this time , the biological interpretation of evolutionary parameters on the functional phylogeny remains preliminary ., Nevertheless , these estimated parameters do offer important utility from a statistical and algorithmic point of view , by elegantly controlling the trade-off between two competing molecular substitution processes\u2014that of the motif sequence and of the background sequence\u2014at every aligned site across all taxa beyond what is offered in any existing motif evolution model ., Empirically , we find that such modelling is useful in motif detection ., On both synthetic data and 14 CRMs from 11 Drosophila taxa , we find that the CSMET performs competitively against the state-of-the-art comparative genomic motif finding algorithm , PhyloGibbs , and significantly outperforms other methods such as EMnEM , PhyloHMM and Stubb ., In particular , CSMET demonstrates superior performance in certain important scenarios , such as cases where aligned sequences display significant divergence and motif functionalities are apparently not conserved across taxa or over multiple adjacent sites ., We also find that both CSMET and PhyloGibbs significantly outperform Stubb when the latter is naively applied to sequences of all taxa without exploiting their evolutionary relationships ., Our results suggest that a careful exploration of various levels of biological sequence evolution can significantly improve the performance of comparative genomic motif detection ., Recently , some alignment-free methods 19 have emerged which search for conserved TFBS rich regions across species 
based on a common scoring function , e . g . , distribution of word frequencies ( which in some ways mirrors the PWM of a reference species ) ., One may ask , given perhaps in the future a perfect search algorithm ( in terms of only computational efficiency ) , do we still need explicit model-based methods such as CSMET ?, We believe that even if exhaustive search of arbitrary string patterns becomes possible , models such as CSMET still offer important advantage not only in terms of interpretability and evolutionary insight as discussed above , but possibly also in terms of performance because of the more plausible scoring schemes they use ., This is because it is impractical to obtain the PWM of a motif in species other than a few reference taxa , thus the scores of putative motif instances in species where their own versions of the PWM are not available can be highly inaccurate under the PWM from the reference species due to evolution of the PWM itself in these study species with respect to the PWM in the reference species ., The CSMET places the reference PWM only at the tree root as an equilibrium distribution; for the tree leaves where all study species are placed , the nucleotide substitution model along tree branches allows sequences in each species to be appropriately scored under a species-specific distribution that is different from the reference PWM , thereby increasing its sensitivity to species-specific instantiations of motifs ., A possible future direction for this work lies in developing better approximate inference techniques for posterior inference under the CSMET model , especially under the scenarios of studying sequences from a large clade with many taxa , and\/or searching for multiple motifs simultaneously ., It is noteworthy that our methods can be readily extended for de novo motif detection , for which an EM or a Monte Carlo algorithm can be applied for model-estimation based on the maximum likelihood principle ., Currently we are 
exploring such extensions ., We also intend to develop a semi-supervised training algorithm that does not need manual annotation of motifs in other species on the training CRM alignment , so that we can obtain a less biased estimate of the evolutionary parameters of the CSMET model ., A problem with most of the extant motif finders , including the proposed CSMET , is that the length variation of aligned motifs ( e . g . , alignments with gaps ) cannot be accommodated ., In our model , while deletion events may be captured as gaps in the motif alignment , insertion events cannot be captured as the length of the motif is fixed ., This is because in a typical HMM sequence model the state transitions between sites within motifs are designed to be deterministic ., Thus stochastically accommodating gaps ( insertion events ) within motifs is not feasible ., Hence , some of the actual motifs missed by the competing algorithms were \u201cgapped\u201d motifs ., These issues deserve further investigation ., We use the Felsenstein 1984 model ( F84 ) 29 , which is similar to the Hasegawa\u2013Kishino\u2013Yano 1985 model ( HKY85 ) 44 and widely used in the phylogenetic inference and footprinting literature 5 , 29 , for nucleotide substitution in our motif and background phylogeny ., Formally , F84 is a five-parameter model , based on a stationary distribution \u03c0 \u2261 \u03c0A , \u03c0T , \u03c0G , \u03c0C\u2032 ( which constitutes three free parameters as the equilibrium frequencies sum to 1 ) and the additional parameters \u03ba and \u03b9 which impose the transition\/transversion bias ., According to this model , the nucleotide-substitution probability from an internal node c to its descendant c\u2032 along a tree branch of length b can be expressed as follows: ( 3 ) where i and j denote nucleotides , \u03b4ij represents the Kronecker delta function , and \u03b5ij is a function similar to the Kronecker delta function which is 1 if i and j are both pyrimidines or both 
purines , but 0 otherwise ., The summation in the denominator concisely computes purine frequency or pyrimidine frequency ., A more intuitive parameterization for F84 involves the overall substitution rate per site \u03bc and the transition\/transversion ratio \u03c1 , which can be easily estimated or specified ., We can compute the transition matrix PN from \u03bc and \u03c1 using Equation 3 based on the following relationship between ( \u03ba , \u03b9 ) and ( \u03bc , \u03c1 ) : To model functional turnover of aligned substrings along the functional phylogeny Tf , we additionally define a substitution process over two characters ( 0 and 1 ) corresponding to presence or absence of functionality ., Now we use the single-parameter JC69 model 26 for functional turnover due to its simplicity and straightforward adaptability to an alphabet of size 2 ., The transition probability along a tree branch of length \u03b2 ( which represents the product of substitution rate \u03bc and evolution time t , which are not identifiable independently ) is defined by: ( 4 ) We estimate the evolutionary parameters from training data based on maximum likelihood; details are available in the Text S1 ., A complete phylogenetic tree T \u2261 {\u03c4 , \u03c0 , \u03b2 , \u03bb} with internal nodes {Vi; i\\u200a=\\u200a1:K\u2032} and leaf nodes {Vi; i\\u200a=\\u200aK\u2032+1:K} , where K denotes the total number of nodes ( i . e . , current and ancestral species ) instantiated in the tree and the node indexing follows a breadth-first traversal from the root , defines a joint probability distribution of all-node configurations ( i . e . , the nucleotide contents at an aligned site in all species instantiated in the tree ) , which can be written as the following product of nt-substitution probabilities along tree branches: ( 5 ) where Vpa ( i ) denotes the parent-node of the node i in the tree , and the substitution probability PN ( ) is defined by Equation 3 . 
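The two-state turnover process of Equation 4 is simple enough to sketch in code. Below is an illustrative Python fragment assuming the usual symmetric parameterization of a binary JC69-style process, where a state flips along a branch of length β with probability ( 1 − e^( −2β ) ) over 2; the function name and this exact closed form are our assumptions for illustration, not code from the paper.

```python
import math

def turnover_matrix(beta):
    """Two-state (0 = non-functional, 1 = functional) substitution matrix
    along a branch of length beta (compound rate x time), under a symmetric
    JC69-style model on an alphabet of size 2. Illustrative sketch only."""
    p_switch = 0.5 * (1.0 - math.exp(-2.0 * beta))
    p_stay = 1.0 - p_switch
    return [[p_stay, p_switch],
            [p_switch, p_stay]]

# beta = 0 gives the identity (no turnover); a very long branch forgets the
# parent state, so both outcomes approach 1/2.
```

Note that β here is the product μt described in the text, which is why μ and t need not be identified separately.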
For each position l of the multiple alignment , computing the probability of the entire column denoted by Al of aligned nucleotides from species corresponding to the leaves of a phylogenetic tree T ( l ) defined on position l , i . e . , P ( Al|T ( l ) ) , where Al corresponds to an instantiation of the leaf nodes {Vi; i\\u200a=\\u200aK\u2032+1:K} , takes exponential time if performed naively , since it involves the marginalization of all the internal nodes in the tree , i . e . , ( 6 ) We use the Felsenstein pruning algorithm 30 , which is a dynamic programming method that computes the probability of a leaf-configuration under a tree from the bottom up ., At each node of the tree , we store the probability of the subtree rooted at that node , for each possible nucleotide at that node ., At the leaves , only the probability for the particular nucleotide instantiated in the corresponding taxon is non-zero , and for all the other nucleotides , it is zero ., Unlike the naive algorithm , the pruning algorithm requires an amount of time that is proportional to the number of leaves in the tree ., We use a simple extension of this algorithm to compute the probabilities of a partial-alignment defined earlier under a marginal phylogeny , which is required in the coupled-pruning algorithm for CSMET , by considering only the leaves instantiated in ( but not in ) that is under a subtree T\u2032 ( l ) that forms the marginal phylogeny we are interested in ., Specifically , let correspond to possible instantiations of the subset of nodes we need to marginalize out ., Since we already know how to compute P ( Al|T ( l ) ) via marginalization over internal nodes , we simply further this marginalization over leaf nodes that correspond to taxa instantiated in , i . e . 
, ( 7 ) where denotes the leaves instantiated in ., This amounts to replacing the leaf-instantiation step , which originally operated on all leaves in the Felsenstein pruning algorithm , by a node-summation step over those leaves in ., In fact , it can be easily shown that this is equivalent to performing the Felsenstein pruning only on the partial tree T\u2032 ( l ) that directly shadows , which is a smaller tree than the original T ( l ) , and only requires time ., Under the CSMET model , to perform the forward-backward algorithm for either motif prediction or unsupervised model training , we need to compute the emission probability given each functional state at every alignment site ., This is nontrivial because a CSMET is defined on an alignment block containing whole motifs across taxa rather than on a single alignment-column ., We adopt a \u201cblock-approximation\u201d scheme , where the emission probability of each state at a sequence position , say , t , is defined on an alignment block of length L started at t , i . e . , , where At\u2261 ( A1 ( t ) , A2 ( t ) , \u2026 , AL ( t ) ) , and Al ( t ) denotes the lth column in an alignment block started from position t ., The conditional likelihood of At given the nucleotide-evolutionary trees T and Tb coupled by the annotation tree Ta under a particular HMM state st is also hard to calculate directly , because the leaves of the two nucleotide trees are connected by the leaves of the annotation tree ( Figure 2B ) ., However , if the leaf-states of the annotation tree are known , the probability components coming from the two trees become conditionally independent and factor out ( see Equation 2 ) ., Recall that for a motif of length L , the motif tree actually contains L site-specific trees , i . e . , , and the choice of these trees for every site in the same row ( i . e . 
, taxon ) , say , in the alignment block At , is coupled by a common annotation state ., Hence , given an annotation vector Zt for all rows of At , we actually calculate the probability of two subsets of the rows given two subtrees ( i . e . , marginal phylogenies ) of the original phylogenetic trees for motif and backgrounds , respectively ( Figure 2B ) ., The subset is constructed by simply stacking the DNA bases of those taxa for which the annotation variables indicate that they were generated from the motif tree ., The subtree is constructed by simply retaining the set of nodes which correspond to the chosen subset , and the ancestors thereof ., Similarly we have and ., Hence , we obtain ( 8 ) The probability of a particular leaf-configuration of a tree , be it a partial or complete nucleotide tree , or an annotation tree , can be computed efficiently using the pruning algorithm ., Thus for each configuration of zt , we can readily compute and ., The block emission probability under CSMET can be expressed as: ( 9 ) where we use , , and to make explicit the dependence of the partial blocks and marginal trees on the functional indicator vector zt ., We call this algorithm a coupled-pruning algorithm ., Note that in this algorithm we need to sum over a total number of 2^M configurations of zt , where M is the total number of taxa ( i . e . , rows ) in matrix At ., It is possible to reduce the computational complexity using a full junction tree algorithm on CSMET , which will turn the graphical model underlying CSMET into a clique tree of width ( i . e . , maximum clique size ) possibly smaller than M . 
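The pruning computation invoked repeatedly above is easy to sketch. The following Python fragment is a minimal, hypothetical illustration of Felsenstein pruning on the binary functional alphabet (the toy tree, the names, and the symmetric two-state transition model are our assumptions, not the paper's implementation; the nucleotide trees would use the F84 transition matrix instead). Marginalizing away leaves outside the partial alignment, as in the node-summation step described earlier, corresponds to giving those leaves an all-ones vector instead of an indicator vector.

```python
import math

def trans(beta):
    # Symmetric two-state transition matrix for branch length beta (assumed model).
    p = 0.5 * (1.0 - math.exp(-2.0 * beta))
    return [[1.0 - p, p], [p, 1.0 - p]]

def prune(node, children, obs):
    """Return L[x] = P(observed leaves below `node` | `node` in state x).
    children: node -> list of (child, branch length); obs: leaf -> state,
    with leaves absent from `obs` summed out (all-ones vector)."""
    if node not in children:  # leaf
        if node in obs:
            return [1.0 if x == obs[node] else 0.0 for x in (0, 1)]
        return [1.0, 1.0]     # marginalized leaf
    L = [1.0, 1.0]
    for child, beta in children[node]:
        P, Lc = trans(beta), prune(child, children, obs)
        for x in (0, 1):      # sum over the child's state
            L[x] *= sum(P[x][y] * Lc[y] for y in (0, 1))
    return L

# Toy tree: root over two taxa; uniform prior at the root.
children = {"root": [("taxonA", 0.1), ("taxonB", 0.4)]}
L = prune("root", children, {"taxonA": 1, "taxonB": 0})
likelihood = 0.5 * L[0] + 0.5 * L[1]
```

Summing this likelihood over all leaf configurations gives exactly 1, and running time grows with tree size rather than exponentially in the number of internal nodes.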
But this algorithm is complicated and breaks the modularity of the tree-likelihood calculation by the coupled-pruning algorithm ., In typical comparative genomic analysis , we expect that M will not be prohibitively large , so our algorithm may still be a convenient and easy-to-implement alternative to the junction-tree algorithm ., Also , this computation can be done off-line and in parallel ., Given the emission probabilities for each ancestral functional state at each site , we use the forward-backward algorithm for posterior decoding of the sequence of ancestral functional states along the input CRM alignment of length N . The procedure is the same as in a standard HMM applied to a single sequence , except that now the emission probability at each site , say with index t , is defined by the CSMET probability over an alignment block At at that position under an ancestral functional state , rather than the conditional probability of a single nucleotide observed at position t as in the standard HMM ., The complexity of this FB-algorithm is O ( Nk^2 ) , where k denotes the total number of functional states ., In this paper , we only implemented a simple HMM with one type of motif allowed on either strand , so that k\\u200a=\\u200a3 . 
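For concreteness, the forward half of this forward-backward pass can be sketched in a few lines of Python, with the block emission probabilities treated as a precomputed N x k table (variable names are our own; in CSMET the table entries would come from the coupled-pruning computation above):

```python
def forward(emis, trans, init):
    """Forward algorithm for an HMM.
    emis[t][s]: emission probability of state s at site t (here, the block
    emission); trans[r][s]: state transition probability r -> s; init[s]:
    initial state distribution. Runs in O(N * k^2) for N sites, k states."""
    k = len(init)
    alpha = [init[s] * emis[0][s] for s in range(k)]
    for t in range(1, len(emis)):
        alpha = [emis[t][s] * sum(alpha[r] * trans[r][s] for r in range(k))
                 for s in range(k)]
    return sum(alpha)  # likelihood of the whole site sequence

# k = 3 as in the text (background plus one motif type on either strand).
k = 3
A = [[1.0 / k] * k for _ in range(k)]   # toy uniform transition matrix
pi = [1.0 / k] * k
```

With uninformative emissions (all ones), the forward pass returns exactly 1, which is a convenient sanity check on an implementation.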
We defer a more elaborate implementation that allows multiple motifs and encodes sophisticated CRM architecture as in LOGOS 33 to a future extension ., Given an estimate of , we can infer the MAP estimates of \u2014the functional annotation of every site t in every taxon i of the alignment ., Specifically , the posterior probability of a column of functional states Zt under ancestral functional state can be expressed as: ( 10 ) Recall that in the coupled-pruning algorithm , we can readily compute all three conditional probability terms in the above equation ., Performing posterior inference allows us to make motif predictions in two ways ., A simple way is to look at blocks in the alignment at which the posterior inference produces ones , and predict those to be motifs ., Alternatively , we can also use the inferred state of the alignment block together with the inferred ancestral state to compute a probability score ( as a heuristic ) based on the functional annotation tree ., The score for the block is the sum of probabilities of each block element being one ., Given blocks of aligned substrings {At} containing motif instances in at least one of the aligned taxa , in principle we can estimate both the annotation tree Tf \u2261 {\u03b1 , \u03c4f , \u03b2f} and the motif trees Tm \u2261 {\u03b8 , \u03c4m , \u03b2m , \u03bbm} based on a maximum likelihood principle ., But since in our case most training CRM sequences do not have enough motif data to warrant correct estimation of the motif and function tree , we use the topology and branch lengths of a tree estimated by fastDNAml 36 from the entire CRM sequence alignment ( containing both motif and background ) as the common basis to build the Tf and Tm ., Specifically , fastDNAml estimates a maximum likelihood tree under the F84 model from the entire CRM alignment; we then scale the branch lengths of this tree to get the sets of branch lengths for Tf and Tm by doing a simple linear search ( see below ) of the 
scaling coefficient that maximizes the likelihood of aligned motif sequences and aligned annotation sequences , under the Tm and Tf ( scaled based on the coefficients ) respectively ., For simplicity , we estimate the background tree Tb \u2261 {\u03b8 , \u03c4b , \u03b2b , \u03bbb} separately from only aligned background sequences that are completely orthologous ( i . e . , containing no motifs in any taxon ) ., For both motif and background phylogenies , the Felsenstein rate parameter \u03bc for the corresponding nucleotide substitution models must also be estimated from the training data ., More technically , note that for Tm the scaling coefficient \u03b2 and the rate parameter \u03bc form a product in the expression of the substitution probability ( see Equation 3 ) and are not identifiable independently ., Thus we only need to estimate the compound rate parameter \u03bc\u2032\\u200a=\\u200a\u03bc\u03b2 ., Ideally , the optimal value of \u03bc\u2032 should be obtained by performing a gradient descent on the likelihood under the corresponding phylogeny with respect to \u03bc\u2032 ., However , due to the phylogenetic tree probability terms involved in the likelihood computation , there is no closed-form expression for the gradient that can be evaluated for a specific value of the compound rate parameter to determine the direction to choose for optimization ., Therefore , to find an approximation to the optimal value of \u03bc\u2032 , we perform a simple linear search in the space of \u03bc\u2032 as follows: and are lower and upper bounds respectively on the space of \u03bc\u2032 that is searched , and are heuristically chosen based on observation ., The step \u03b4 can be chosen to be as small as desired or is allowable","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Functional turnover of transcription factor binding sites ( TFBSs ) , such as whole-motif loss or gain , is a common event during genome evolution ., 
Conventional probabilistic phylogenetic shadowing methods model the evolution of genomes only at nucleotide level , and lack the ability to capture the evolutionary dynamics of functional turnover of aligned sequence entities ., As a result , comparative genomic search of non-conserved motifs across evolutionarily related taxa remains a difficult challenge , especially in higher eukaryotes , where the cis-regulatory regions containing motifs can be long and divergent; existing methods rely heavily on specialized pattern-driven heuristic search or sampling algorithms , which can be difficult to generalize and hard to interpret based on phylogenetic principles ., We propose a new method: Conditional Shadowing via Multi-resolution Evolutionary Trees , or CSMET , which uses a context-dependent probabilistic graphical model that allows aligned sites from different taxa in a multiple alignment to be modeled by either a background or an appropriate motif phylogeny conditioning on the functional specifications of each taxon ., The functional specifications themselves are the output of a phylogeny which models the evolution not of individual nucleotides , but of the overall functionality ( e . g . 
, functional retention or loss ) of the aligned sequence segments over lineages ., Combining this method with a hidden Markov model that autocorrelates evolutionary rates on successive sites in the genome , CSMET offers a principled way to take into consideration lineage-specific evolution of TFBSs during motif detection , and a readily computable analytical form of the posterior distribution of motifs under TFBS turnover ., On both simulated and real Drosophila cis-regulatory modules , CSMET outperforms other state-of-the-art comparative genomic motif finders .","summary":"Functional turnover of transcription factor binding sites ( TFBSs ) , such as whole-motif loss or gain , are common events during genome evolution , and play a major role in shaping the genome and regulatory circuitry of contemporary species ., Conventional methods for searching non-conserved motifs across evolutionarily related species have little or no probabilistic machinery to explicitly model this important evolutionary process; therefore , they offer little insight into the mechanism and dynamics of TFBS turnover and have limited power in finding motif patterns shaped by such processes ., In this paper , we propose a new method: Conditional Shadowing via Multi-resolution Evolutionary Trees , or CSMET , which uses a mathematically elegant and computationally efficient way to model biological sequence evolution at both nucleotide level at each individual site , and functional level of a whole TFBS ., CSMET offers the first principled way to take into consideration lineage-specific evolution of TFBSs and CRMs during motif detection , and offers a readily computable analytical form of the posterior distribution of motifs under TFBS turnover ., Its performance improves upon current state-of-the-art programs ., It represents an initial foray into the problem of statistical inference of functional evolution of TFBS , and offers a well-founded mathematical basis for the development of more 
realistic and informative models .","keywords":"computational biology\/evolutionary modeling, computational biology\/comparative sequence analysis, computational biology\/sequence motif analysis","toc":null} +{"Unnamed: 0":1752,"id":"journal.pcbi.1005724","year":2017,"title":"Quantifying the effects of antiangiogenic and chemotherapy drug combinations on drug delivery and treatment efficacy","sections":"The abnormal structure of tumor vasculature is one of the leading causes of insufficient and spatially heterogeneous drug delivery in solid tumors ., Tortuous and highly permeable tumor vessels along with the lack of a functional lymphatic system cause interstitial fluid pressure ( IFP ) to increase within tumors ., This elevated IFP results in the inefficient penetration of large drug particles into the tumor , whose primary transport mechanism is convection 1 , 2 ., The abnormalities in tumor vasculature are caused by dysregulation of angiogenesis ., Tumors initiate angiogenesis to form a vascular network that can provide oxygen and nutrients to sustain its rapid growth ., The production of VEGF , a growth factor that promotes angiogenesis , is triggered by the chronic hypoxic conditions that are prevalent in tumors ., Besides inducing angiogenesis , it leads to hyperpermeable blood vessels by enlarging pores and loosening the junctions between the endothelial cells that line the capillary wall 3 , 4 ., Subsequently , excessive fluid extravasation from these vessels results in a uniformly elevated IFP in the central region of tumor nearly reaching the levels of microvascular pressure ( MVP ) while at the tumor periphery , IFP falls to normal tissue levels 1 , 5 , 6 ., This common profile of IFP within tumors has been identified as a significant transport barrier to therapeutic agents and large molecules 1 , 7 ., When IFP approaches MVP , pressure gradients along vessels are diminished and blood flow stasis occurs , diminishing the functionality of existing vessels 
8\u201310 ., Furthermore , uniformity of IFP in interior regions of tumors terminates the convection within tumor interstitium , hindering the transportation of large drugs 1 ., While the lack of a transvascular pressure gradient inhibits convective extravasation of drugs , sharp IFP gradient at tumor periphery creates an outward fluid flow from tumors that sweeps drugs away into normal tissues 1 ., Together these factors lead to the decreased drug exposure of tumor cells ., It has been revealed that the application of antiangiogenic agents can decrease vessel wall permeability and vessel density , transiently restoring some of the normal function and structure of abnormal tumor vessels 4 , 11 , 12 ., This process , which is called vascular normalization , is associated with a decrease in IFP and an increase in perfusion ., Therefore , this state of vasculature enables increased delivery of both drug and oxygen\/nutrients to the targeted tumor cells 11 , 13 ., Normalization enhances convection of drug particles from vessels into tumor interstitium by restoring transvascular pressure gradients through IFP reduction 11 , 14 , 15 ., It has shown some favorable results in preclinical and clinical trials regarding the enhancement of the delivery of large therapeutics such as nanoparticles 14 , 16 , 17 ., Since nanoparticles benefit from the enhanced permeability and retention effect ( EPR ) , they are distributed in higher amounts to tumors relative to normal tissue ., Accumulation of nanoparticles in normal tissues is relatively small compared to the standard small molecule chemotherapies , leading to decreased toxicity and side effects ., However , the main transport mechanism for large drugs is convection in tumor microenvironment ., Hence , when IFP is high , extravasation via convection is inhibited ., Normalization due to its ability to decrease IFP seems promising in drug delivery for large drugs with its potential of restoring convective transportation ., In 
both clinical and preclinical studies , it has been shown that antiangiogenic drugs demonstrate anti-tumor effects in various cancer types 18 ., However , rather than using antiangiogenic agents alone , studies reveal that the combination of these agents with chemotherapy drugs yields favorable results with increased therapeutic activity ., In some clinical studies 19\u201321 , bevacizumab combined with conventional chemotherapy has increased the survival and response rates among patients with gastrointestinal cancer compared to bevacizumab alone ., This finding that antiangiogenic therapy in combination with chemotherapy can improve the efficacy of treatment has been observed for patients with various cancers including non-small cell lung cancer 22 , 23 , breast cancer 24\u201326 and ovarian cancer 27 ., However , it is evident that there is a transient time window for vessel normalization 28 , 29 ., In order to improve drug delivery , chemotherapy should coincide with this transient state of improved vessel integrity ., Prolonged or excessive application of antiangiogenic agents can reduce microvascular density to the point that drug delivery is compromised 30 ., Therefore , dosing and scheduling of combined therapy with antiangiogenic agents must be carefully tailored to augment the delivery and response to chemotherapy 12 ., It is suggested that rather than uninterrupted application , intermittent cycles which can create re-normalization should be employed for antiangiogenic agent scheduling 31 ., Due to the complex and interdisciplinary nature of the subject , there is a considerable amount of computational efforts on tumor vascularization and its consequences for the tumor microenvironment and drug delivery ., Development of vasculature and intravascular flow dynamics are studied comprehensively 32\u201337 and in many studies chemotherapy is given through the discrete vessel system in order to calculate drug delivery to capillaries and tumor 33 , 34 , 
37\u201339 ., Mathematical models have included transvascular and interstitial delivery of drugs 37\u201339 ., In addition , Wu et al . added tumor response to chemotherapy by applying nanoparticles and evaluating the decrease in tumor radius during chemotherapy for different microenvironmental conditions 39 ., There are also some studies about the optimization of combination therapy in tumors 40 ., In studies by the groups of Urszula Ledzewicz and Heinz Sch\u00e4ttler , changes in tumor volume after the administration of cytotoxic and antiangiogenic agents have been investigated by proposing a mathematical model and seeking optimal solutions for different treatment cases 41 , 42 ., Compartment models have also been used to explore how antiangiogenic agents may assist chemotherapy agents in reducing the volume of drug-resistant tumors , and , using a bifurcation diagram , it has been shown that the co-administration of antiangiogenic and chemotherapy drugs can reduce tumor size more effectively than chemotherapy alone 43 ., Applications of chemotherapy drugs together with antiangiogenic agents have been studied by Panovska et al . to cut the supply of nutrients 44 ., Stephanou et al . showed that random pruning of vessels by anti-angiogenic agents improves drug delivery by using 2-D and 3-D vessel networks 45 ., However , they did not associate this benefit with normalization of vasculature ., Jain and colleagues laid out the general groundwork for relations between vessel normalization and IFP by relating vessel properties and interstitial hydraulic conductivity to changes in the pressure profile due to normalization 15 ., The subject was further investigated by Wu et al .
by building a 3-D model of angiogenesis and adding intravascular flow to the computational framework 32 ., They observed slow blood flow within the tumors due to almost constant MVP and elevated IFP profile ., They show the coupling between intravascular and transvascular flux ., Kohandel et al . showed that normalization enhances tumor response to chemotherapy and identified the most beneficial scheduling for combined therapy in terms of tumor response 46 ., The size range of nanoparticles that could benefit from normalization has also been investigated 16 ., In this study , following the continuous mathematical model developed by Kohandel et al . 46 which couples tumor growth and vasculature , we built a framework for tumor dynamics and its microenvironment including IFP ., We use this system to evaluate the improvement in nanoparticle delivery resulting from vessel normalization ., As the tumor grows , a homogeneous distribution of vessels is altered by the addition of new leaky vessels to the system , representing angiogenesis ., As a consequence of angiogenesis and the absence of lymph vessels , IFP starts to build up inside the tumor inhibiting the fluid exchange between vessels and tumor and inhibiting nanoparticle delivery ., Simulations give the distribution of the nanoparticles in the tumor in a time-dependent manner as they exit the vessels and are transported through interstitium ., The activity of the drugs on tumor cells is determined according to the results of experimental trials by Sengupta et al . 
47 ., We apply drugs in small doses given in subsequent bolus injections ., During drug therapy , both vessels and tumor respond dynamically ., After injections of antiangiogenic agents , a decrease in vessel density accompanies the changes in vessel transport parameters , initiating the normalized state ., Combining chemotherapy with applications of antiangiogenic agents , we are able to identify the benefits of a normalized state by observing the effects of different scheduling on IFP decrease , extravasation of drugs and tumor shrinkage ., We found that in adjuvant combination of drugs , IFP and vessel density decrease together resulting in an increase in the average extravasation of nanoparticles per unit area in the interior region of tumor ., In concurrent combination of drugs , IFP decrease is higher but vessel decrease is higher as well , creating a smaller enhancement in average extravasation per unit tumor area ., However , even though average extravasation is smaller in this case , we observe an increase in homogeneity in drug distribution ., Nanoparticles begin to extravasate even in the center of tumor through sparsely distributed vessels due to the sharp decrease in IFP ., Therefore normalization enabled the drugs to reach deeper regions of the tumor ., Following Kohandel et al . 
46 , Eqs ( 1 ) and ( 2 ) are used to model the spatio-temporal distribution of tumor cells and the heterogeneous tumor vasculature ., In Eq ( 1 ) , the first term models the diffusion of tumor cells , where D_n is the diffusion coefficient , and the second term describes the tumor growth rate , where n_lim is the carrying capacity and r is the growth rate ., In the absence of the third and fourth terms , Eq ( 1 ) has two fixed points: an unstable fixed point at n = 0 where there is no cell population and a stable fixed point at n = n_lim where the population reaches its maximal density ., The coupling terms \u03b1_mn n ( x , t ) m ( x , t ) and d_r n ( x , t ) d ( x , t ) indicate the interactions of tumor cells with the vasculature and the chemotherapy drug , respectively ., Tumor cells proliferate at an increased rate \u03b1_mn when they have vessels supplying them with nutrients , and tumor cells are eliminated at rate d_r if chemotherapy drug d ( x , t ) is present ., \u2202n ( x , t ) \/ \u2202t = D_n \u2207^2 n ( x , t ) + r n ( 1 \u2212 n \/ n_lim ) + \u03b1_mn n ( x , t ) m ( x , t ) \u2212 d_r n ( x , t ) d ( x , t ) ., ( 1 ) The tumor vasculature network exhibits abnormal dynamics with tortuous and highly permeable vessels which are structurally and functionally different from normal vasculature ., In order to create this heterogeneous structure , a coarse-grained model is used to produce islands of vessels ., In Eq ( 2 ) , the average blood vessel distribution is represented by m ( x , t ) and the equation is formulated to produce islands of vascularized space with the term m ( x , t ) ( \u03b1 + \u03b2 m ( x , t ) + \u03b3 m ( x , t ) ^2 ) , which has two stable points m = 1 and m = 0 corresponding to the presence and absence of vessels , respectively ., The representation of tumor-induced angiogenesis is modified in this model by recruiting the terms \u03b1_nm n ( 1 \u2212 n \/ n_lim ) m and \u03b2_nm \u2207 \u00b7 ( m \u2207 n ) ., Here , the former attains positive values at the tumor periphery
due to the low cell density and , in the central regions where cell density exceeds n_lim , the term becomes negative , creating behavior which resembles real tumors , which generally have high vascularization in the periphery and low vessel density in the center due to growth-induced stresses 48 ., The latter term leads the vessels that are produced in the periphery towards the tumor core ., In this novel form , the parameters related to angiogenesis , \u03b2_nm and \u03b1_nm , are set to 0 . 5 and 0 . 25 , respectively ., The remaining parameters related to tumor and vessel growth can be found in Kohandel et al . 46 ., A_r m ( x , t ) A ( x , t ) represents the reaction of tumor vessels to the antiangiogenic agent A ( x , t ) , which results in the elimination of vessels in the presence of the antiangiogenic agent ., \u2202m ( x , t ) \/ \u2202t = D_m \u2207^2 m ( x , t ) + m ( x , t ) ( \u03b1 + \u03b2 m ( x , t ) + \u03b3 m ( x , t ) ^2 ) + \u03b2_nm \u2207 \u00b7 ( m \u2207 n ) + \u03b1_nm n ( 1 \u2212 n \/ n_lim ) m \u2212 A_r m ( x , t ) A ( x , t ) ., ( 2 ) For the initial configuration of tumor cells , a Gaussian distribution is assumed , while the initial vascular distribution is obtained by starting from a random , positively distributed initial condition of tumor vessels ., Darcy\u2019s law is used to describe the interstitial fluid flow within the tissue: u = \u2212K\u2207P , where K is the hydraulic conductivity of the interstitium ( mm^2\/s\/mmHg ) and P is the interstitial fluid pressure ( IFP ) ., For steady state fluid flow , the continuity equation is:, \u2207 \u00b7 u = \u0393_b \u2212 \u0393_\u2113 , ( 3 ), where \u0393_b ( 1\/s ) represents the supply of fluid from the blood vessels into the interstitial space and \u0393_\u2113 ( 1\/s ) represents the fluid drainage from the interstitial space into the lymph vessels ., Starling\u2019s law is used to determine the source and sink terms:, \u0393_b = \u03bb_b m ( x , t ) ( P_v \u2212 P ( x , t ) \u2212 \u03c3_v ( \u03c0_c \u2212 \u03c0_i ) ) , ( 4
 ) \u0393_\u2113 = \u03bb_\u2113 P ( x , t ) ., ( 5 ) The parameters in these equations are the hydraulic conductivities of the blood vessels \u03bb_b and the lymphatics \u03bb_\u2113 , the vascular pressure P_v , the interstitial fluid pressure P and the osmotic reflection coefficient \u03c3_v ., The capillary and interstitial oncotic pressures are denoted by \u03c0_c and \u03c0_i , respectively ., The hydraulic conductivities of the blood and lymph vessels are related to the hydraulic conductivity of the vessel wall ( L_p ) and the vessel surface area per unit volume ( S\/V ) through the relation \u03bb_b,\u2113 = L_p S\/V ., The osmotic pressure contribution for the lymph vessels is neglected due to the highly permeable lymphatics ., Also , the pressure inside the lymphatics is taken to be 0 mm Hg 49 ., By substituting Darcy\u2019s law and Starling\u2019s law into the continuity equation , we obtain the equation for IFP in a solid tumor:, \u2212 K \u2207^2 P ( x , t ) = \u03bb_b m ( x , t ) ( P_v \u2212 P ( x , t ) \u2212 \u03c3_v ( \u03c0_c \u2212 \u03c0_i ) ) \u2212 \u03bb_\u2113 P ( x , t ) ., ( 6 ) Pressure is initially taken to be the normal tissue value P_v , and the initial pressure profile is set based on the solution of the above equation with the initial condition for the tumor vasculature ., The boundary condition ensures that pressure reduces to the normal value P_v in host tissue ., For the transport of antiangiogenic agents A ( x , t ) , a diffusion equation is used:, \u2202A ( x , t ) \/ \u2202t = D_A \u2207^2 A ( x , t ) + \u03bb_A m ( x , t ) ( A_v \u2212 A ( x , t ) ) \u2212 \u0393_\u2113 A ( x , t ) \u2212 k_A A ( x , t ) , ( 7 ), where D_A is the diffusion coefficient of antiangiogenic agents in tissue , \u03bb_A is the transvascular diffusion coefficient of antiangiogenic agents , A_v is the plasma antiangiogenic agent concentration and k_A is the decay rate of antiangiogenic agents ., The terms on the right hand side represent the diffusion of the antiangiogenic agents in the interstitium , diffusion through the vessels , the drainage
of agents to the lymph vessels and the decay of the agents , respectively ., We consider liposomal delivery vehicles for the chemotherapy drug , with concentration denoted by d ( x , t ) ., Since they are relatively large ( \u223c 100 nm ) , a convection-diffusion equation is used for the transport of these drug molecules:, \u2202d ( x , t ) \/ \u2202t = D_d \u2207^2 d ( x , t ) + \u2207 \u00b7 ( k_E d ( x , t ) K \u2207 P ) + \u0393_b ( 1 \u2212 \u03c3_d ) d_v \u2212 \u0393_\u2113 d ( x , t ) \u2212 d_r d ( x , t ) n ( x , t ) \u2212 k_d d ( x , t ) , ( 8 ), where D_d is the diffusion coefficient of drugs in the tissue , k_E is the retardation coefficient for interstitial convection , d_v is the plasma drug concentration , \u03c3_d is the solvent drag reflection coefficient , d_r is the rate of drug elimination as a result of reaction with tumor cells and k_d is the decay rate of the drugs ., The terms on the right hand side represent the diffusion and convection of the drugs in the interstitium , convection of the drugs through the vessels , the drainage of the drugs into the lymphatics , the consumption of drugs as a result of tumor cell interaction and the decay of the drug , respectively ., Diffusion of the drug from the blood vessels is assumed to be negligible since transvascular transport of large drugs is convection-dominated ., Since the time scale of tumor growth is much larger than the time scale for the transport and distribution of the drug molecules , both the antiangiogenic agent and chemotherapy drug equations are solved in steady state , i . e .
\u2202d ( x , t ) \/ \u2202t = \u2202A ( x , t ) \/ \u2202t = 0 ., Both drugs are administered to the plasma with a bolus injection in each administration , through an exponential decay function:, A_v ( t ) = A_0 exp ( \u2212t \/ t^A_1\/2 ) , ( 9 ) d_v ( t ) = d_0 exp ( \u2212t \/ t^d_1\/2 ) , ( 10 ), In these equations , the terms A_0 , d_0 and t^A_1\/2 , t^d_1\/2 indicate the peak plasma concentrations and the plasma half-lives of the antiangiogenic agent and the chemotherapy drug , respectively ., No-flux boundary conditions are used for the antiangiogenic agent and the chemotherapy drug ., Parameters related to the transport of interstitial fluid and the transport of liposomes and antiangiogenic agents are listed in Tables 1 and 2 , respectively ., Some of the effective parameters in the equations above change dynamically to mimic the changes in the tumor and its microenvironment ., As the tumor grows , lymph vessels are diminished to ensure that there are no lymph vessels inside the tumor ., Without the presence of the tumor , vessel density can increase up to a specific value ( the dimensionless value of 1 ) ., When vessel density is greater than 1 , the excess vessels are considered to have been produced by angiogenesis and to be leaky , thus their hydraulic conductivity is increased up to the levels observed in tumors ., During antiangiogenic treatment , vessel density is decreased , and when it decreases below 1 , normalization occurs and the hydraulic conductivity returns to normal tissue levels ., We started the simulations with a small tumor ( 0 . 2 mm radius ) and left it to grow for 30 days to an approximate radius of 13 .
5 mm ., Vessels which were initially set as randomly distributed islands in the computational domain evolved into a heterogeneous state throughout the simulations due to the presence of tumor cells ( Fig 1 , vessel density ) ., As the tumor grows , vessel islands become sparse in the interior region but their density increases by angiogenesis and they become leaky ., By the end of the simulation , the leakiness of tumor vessels and the lack of lymphatic drainage inside the tumor cause elevated pressure in the interior region of the tumor , very similar to that suggested in the literature 1 , 5 ( Fig 1 , IFP\/Pe ) ., We experimented with various drug regimens ., To illustrate the improvement in drug delivery , we designed the cases given in Fig 2 ., Dimensionless dose values are fixed in order to replicate the treatment response observed in 47 ., Antiangiogenic treatment is adjusted such that at the end of the administrations there is approximately a 50% decrease in MVD inside the tumor ., A fixed chemotherapy drug dose is administered on days 23 , 25 and 27 , while we change the day on which antiangiogenic agent administration starts among days 15 , 17 , 19 , 21 and 23 , continuing to give the agents every other day in 4 or 5 pulses ., We decrease the dose of antiangiogenic agents throughout the therapy because a better response in drug delivery is obtained in this way in our simulations ., We present here four cases: only antiangiogenic agent administration starting on day 23 , only chemotherapy drug on day 23 , neoadjuvant therapy with antiangiogenic agents on day 19 and chemotherapy drug on day 23 , and finally concurrent therapy with both drugs starting on day 23 ., The most beneficial results regarding the amount of drug extravasating in the interior parts of the tumor are obtained when the antiangiogenic treatment starts on day 19 ( case-3 in Fig 2 ) ., As expected , antiangiogenic agents do not have a profound effect on tumor cell density when they are applied alone ( Fig
3 , case-1 ) ., In all cases , we observed greater drug extravasation near the tumor rim due to decreasing IFP in that region ( Fig 3 ) ., It can be seen that fluid flow from the vessels into the tumor is poor in the interior region for case-2 , but it improves in the same region in case-3 and case-4 ., The main reason for this change is the introduction of a pressure gradient in the tumor center , restoring drug convection ., Therefore , in both case-3 and case-4 , tumor cell density is decreased in the interior region ( Figs 3 and 4b ) as a consequence of increased drug extravasation in the interior region of the tumor ., We calculate the space average of cell density and IFP at each time step ., Average cell density is calculated as, \u222b\u222b_Aint n ( x , y , t ) dx dy \/ \u222b\u222b_Aint dx dy , ( 11 ), over the area Aint whose boundary is set by the condition n ( x , y , t ) > 1 , which represents the interior region of the tumor ( corresponding to r < 6 mm for a tumor of radius 10 mm ) ., Average IFP is calculated as, \u222b\u222b_A P ( x , y , t ) dx dy \/ \u222b\u222b_A dx dy , ( 12 ), over the area A whose boundary is set by the condition n ( x , y , t ) > 0 .
1 , which represents the value over the whole tumor ., When we evaluate average pressure over the entire area of the tumor , we observe a synergistic effect in reducing pressure arising from the combined application of the antiangiogenic agent and chemotherapy , which can be seen in Fig 4a , especially for case-4 ., This synergistic effect also exhibits itself in tumor cell density , in a less pronounced manner , as can be observed from Fig 4b ., This indicates improved combination treatment efficacy as an indirect result of decreasing IFP ., According to our results , drug extravasation from vessels in the interior region of the tumor is nearly doubled for the combination cases ( Fig 5a , case-3 and case-4 compared to case-2 ) ., However , this improvement is not directly reflected in drug exposure , due to the vessel density reduced by antiangiogenic agents ., Total drug exposure per unit area in the tumor during treatment improves only by approximately 20\u201325% ., IFP during the applications of the chemotherapy drug was the lowest for concurrent therapy ( case-4 ) ., However , regarding tumor regression , adjuvant therapy ( case-3 ) performed better , agreeing with the results of Kohandel et al .
46 ., Even though decrease in vessel density and leakiness cuts off the supply of drugs , the decrease in IFP appearing for the same reasons seems to compensate in the interior region of tumor , resulting in better drug extravasation ., When two drugs are given closer temporally , the resulting IFP decrease is maximized ., This enables the convective extravasation of nanoparticles deep into tumors to places that are not exposed to drugs without combination therapy ., In order to evaluate the effect of chemotherapy drugs that target tumor cell proliferation , we modified Eq 1 such that the chemotherapy drugs would directly act on tumor growth ., The terms responsible for tumor growth ( 2nd and 3rd terms in the right-hand side of Eq 1 ) are multiplied by ( 1 \u2212 d ( x , t ) \/dmax ) where dmax is maximum drug concentration that extravasated inside the tumor ., In this scenario , small changes are seen in tumor cell densities between combination therapy and chemotherapy alone ., However , we observe that in this form , extravasation of drugs is also increased in the central region as seen in Fig 6 , implying that normalization is also beneficial in this scenario ., Using a mathematical model , we assess whether antiangiogenic therapy could increase liposome delivery due to normalization of tumor vessels ., In order to do that , we first created a dynamic vessel structure that exhibits properties of tumor vessels created by angiogenesis as well as inherent vessels in the tissue ., As the tumor grows , vessels in the central region begin to disappear due to increased tumor cell density in that region ., Angiogenesis occurs in the tumor creating additional leaky vessels ., The emergent vessel density is consistent with that observed in 59 , with decreasing density towards the tumor center along with randomly appearing clusters of vessels ., IFP is found to be elevated throughout the tumor up to the levels of MVP and decreases sharply around the tumor rim as it is 
observed in various studies in the literature 1 , 5 , 6 ., We apply antiangiogenic agents in various regimens combined with chemotherapy and focus on large drugs ( liposomes ) whose delivery mainly depends on convection ., As a result of the decrease in vessel density and leakiness due to antiangiogenic activity , we expect a decrease in pressure , which brings about a higher pressure difference between the tumor and the vessels ., Transvascular convection depends on this pressure difference , the hydraulic conductivity and the density of vessels per unit area ., Since antiangiogenic agents decrease hydraulic conductivity ( i . e . , leakiness ) and vessel density , thereby cutting the supply of drugs , the resulting increase in pressure difference should compensate for these effects , restoring extravasation in the remaining vessels ., In all simulations , liposome extravasation predominantly occurs in the tumor periphery due to low IFP levels there , hence drugs preferentially accumulate in this area ., Our result is confirmed by experimental studies of drug distribution using large drugs such as micelles 60 , 61 , nanoprobes 62 and liposomes 59 , 63\u201366 , in which peripheral accumulation is observed ., As the application time between antiangiogenic agents and liposomes becomes shorter , the resulting decrease in IFP is maximized ., This enables the convective extravasation of nanoparticles deep into tumors , to places that could not be exposed to drugs before , and liposome extravasation begins to appear in the central region ., However , that does not consistently bring about maximum accumulation of liposomes at all times ., There is a trade-off between total drug accumulation and how deeply the drug can penetrate inside the tumor ., In our study , we find a balance between these two situations ., It also shows us that IFP and drug accumulation are not always correlated; rather , the maximum accumulation is achieved through the complex interplay between IFP , vessel density and
leakiness ., Current research by 63 also supports this view; in their mouse study , they point out that IFP is correlated with perfusion , perfusion is correlated with accumulation and the relationship between IFP and liposome accumulation is limited ., In another significant study , tumor-bearing animals are subjected to combination therapy with liposomes and the antiangiogenic agent pazopanib in order to evaluate the effect of normalization via imaging drug distribution 65 ., As a result of the decrease in MVP , they also observed a resulting decrease in IFP ., Similar to our results , IFP is not the determinant of drug accumulation in their work ., They have found that decreased leakiness of vessels inhibits delivery even though there is an IFP decrease as a result of antiangiogenic therapy ., They have collected data for a single time point and observed a decrease in doxil penetration in combination therapy ., They also point out that functional measures of normalization may not occur simultaneously which is also the case for our study ., Throughout the combination therapy , we also observe periods where drug extravasation is limited and others where drug extravasation is improved ., They have found the vessel permeability as a limiting factor in their study , however MVD 67 and tumor blood flow and blood volume 68 are also determinants of large drug accumulation ., This shows that these measures of normalization are tumor type dependent and even within the same tumor they are dynamic which leads to variation in drug distribution ., Among many different schedules , most of our trials did not show improvement in drug accumulation ., We see that the dose of antiangiogenic agents should be carefully determined to ensure any delivery benefit ., As stated by 30 , when we apply a large dose of antiangiogenic agents , significant IFP decrease is observed but the decrease in vessel permeability and the lack of vessel density lead to impaired liposome extravasation ., 
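This dose trade-off can be reproduced in a toy 1-D reduction of the IFP equation ( Eq 6 with no intratumoral lymphatics and the osmotic term dropped ), for which the steady state has a closed form. All numbers below, and the exponential dose-response of vessel density, are illustrative assumptions rather than the paper's fitted values.

```python
import math

# Toy 1-D steady state of Eq (6) with lam_l = 0 inside the tumor and the
# osmotic term dropped: -K P'' = lam_b * m * (Pv - P) on [-L/2, L/2] with
# P = 0 at the tumor boundary, which gives
# P(x) = Pv * (1 - cosh(k x) / cosh(k L / 2)), where k = sqrt(lam_b * m / K).

K = 2.5e-7     # interstitial hydraulic conductivity (assumed, mm^2/s/mmHg)
Pv = 15.0      # microvascular pressure (assumed, mmHg)
L = 10.0       # tumor diameter (assumed, mm)
lam_b = 1e-5   # vessel-wall conductivity per unit vessel density (assumed)

def central_flux(m):
    """Convective fluid source lam_b * m * (Pv - P) at the tumor center."""
    k = math.sqrt(lam_b * m / K)
    return lam_b * m * Pv / math.cosh(k * L / 2.0)

def m_eff(dose, m0=2.0, gamma=0.7):
    # Assumed dose-response: antiangiogenic dose removes vessels exponentially.
    return m0 * math.exp(-gamma * dose)

doses = [0.25 * i for i in range(41)]            # dose grid 0 .. 10
fluxes = [central_flux(m_eff(d)) for d in doses]
best_dose = doses[fluxes.index(max(fluxes))]
# Little antiangiogenic agent leaves IFP near Pv (no transvascular gradient);
# too much removes the vessels themselves: central extravasation peaks at an
# intermediate dose.
```

The same non-monotonicity is what makes the dosing and scheduling in the full 2-D model so sensitive: both under- and over-dosing suppress extravasation in the tumor center.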
At the other extreme , when we give small amounts of anti-angiogenic agents , it is seen that IFP decrease is not enough to make a significant improvement to liposome extravasation ., In this model , intravascular flow is approximated as uniform to focus on the effects of transvascular delivery benefit of normalization ., Due to abnormal vasculature , tumors are known to have impaired blood perfusion 69 due to the simultaneous presence of functional and non-functional vessels ., In this work , we simulate structural normalization of vessels without considering functional normalization which is associated with intravascular flow and results in increased perfusion 30 ., Vessels within the tumor in this model have uniform functionality in terms of supplying blood flow ., Hence , by decreasing vessel density in the microenvironment due to antiangiogenic activity , we are decreasing blood perfusion ., However , on the contrary , normalization is expected to enhance intravascular flow by decreasing pore size which restores intravascular pressure gradients and pruning non-functional vessels that interrupt circulation ., Therefore , normalization brings about improved blood perfusion whereas here we decrease perfusion and improve the delivery only through improved convective extravasation by decreased IFP ., In our simulations , the delivery benefit is underestimated since we decrease blood perfusion as a part of antiangiogenic activity ., In 65 , they observed that MVD decrease did not change liposome accumulation because the eliminated vessels are the ones that are thought to be nonfunctional ., In our previous study , we constructed a spherical tumor with uniform vessel density to investigate the benefit from normalization therapy and the results showed increased delivery in the interior regions of tumors of certain sizes 70 ., In animal studies , it has been shown that the bulk accumulation of liposomes is not representative of efficacy since it is not informative about the 
drug accumulation within specific regions of tumors 67 , 71 and heterogeneous drug accumulation may result in tumor repopulation 72 ., Therefore , it is important to understand the factors that yield heterogeneous accumulation and strive to avoid them to generate effective treatments ., According to our results , it is plausible that administering targeted therapies using large drugs , normalization should be more useful since it can provide a simultaneous access to both tumor rim and center ., The dose of chemotherapy should be increased in order to ensure similar drug exposure despite the sparser vessel density caused by antiangiogenic activity ., This is the reason why targeted therapies are more suitable to seize the benefits from normalization , as they can be applied in greater doses without harming healthy tissue ., When convective extravasation is restored in the central region , drugs can immediately reach to tumor center and increase the probability of treatment success and tumor eradication .","headings":"Introduction, Methods, Results, Discussion","abstract":"Tumor-induced angiogenesis leads to the development of leaky tumor vessels devoid of structural and morphological integrity ., Due to angiogenesis , elevated interstitial fluid pressure ( IFP ) and low blood perfusion emerge as common properties of the tumor microenvironment that act as barriers for drug delivery ., In order to overcome these barriers , normalization of vasculature is considered to be a viable option ., However , insight is needed into the phenomenon of normalization and in which conditions it can realize its promise ., In order to explore the effect of microenvironmental conditions and drug scheduling on normalization benefit , we build a mathematical model that incorporates tumor growth , angiogenesis and IFP ., We administer various theoretical combinations of antiangiogenic agents and cytotoxic nanoparticles through heterogeneous vasculature that displays a similar morphology 
to tumor vasculature ., We observe differences in drug extravasation that depend on the scheduling of combined therapy; for concurrent therapy , total drug extravasation is increased but in adjuvant therapy , drugs can penetrate into deeper regions of tumor .","summary":"Tumor vessels being very different from their normal counterparts are leaky and lack organization that sustains blood circulation ., As a result , insufficient blood supply and high fluid pressure begin to appear inside the tumor which leads to a reduced delivery of drugs within the tumor , especially in tumor center ., A treatment strategy that utilizes anti-vascular drugs is observed to revert these alterations in tumor vessels , making them more normal ., This approach is suggested to improve drug delivery by enhancing physical transport of drugs ., In this paper , we build a mathematical model to simulate tumor and vessel growth as well as fluid pressure inside the tumor ., This framework enables us to simulate drug treatment scenarios on tumors ., We use this model to find whether the delivery of the chemotherapy drugs is enhanced by application of anti-vascular drugs by making vessels more normal ., Our simulations show that anti-vascular drug not only enhances the amount of drugs that is released into tumor tissue , but also enhances drug distribution enabling drug release in the central regions of tumor .","keywords":"medicine and health sciences, vesicles, cardiovascular physiology, engineering and technology, cancer treatment, clinical oncology, drugs, chemotherapeutic agents, oncology, angiogenesis, developmental biology, clinical medicine, pharmaceutics, nanoparticles, nanotechnology, oncology agents, pharmacology, cellular structures and organelles, cancer chemotherapy, liposomes, drug delivery, chemotherapy, cell biology, physiology, biology and life sciences, drug therapy, combination chemotherapy","toc":null} +{"Unnamed: 
0":1292,"id":"journal.pcbi.1000929","year":2010,"title":"Instantaneous Non-Linear Processing by Pulse-Coupled Threshold Units","sections":"Understanding the dynamics of single neurons , recurrent networks of neurons , and spike-timing dependent synaptic plasticity requires the quantification of how a single neuron transfers synaptic input into outgoing spiking activity ., If the incoming activity has a slowly varying or constant rate , the membrane potential distribution of the neuron is quasi stationary and its steady state properties characterize how the input is mapped to the output rate ., For fast transients in the input , time-dependent neural dynamics gains importance ., The integrate-and-fire neuron model 1 can efficiently be simulated 2 , 3 and well approximates the properties of mammalian neurons 4\u20136 and more detailed models 7 ., It captures the gross features of neural dynamics: The membrane potential is driven by synaptic impulses , each of which causes a small deflection that in the absence of further input relaxes back to a resting level ., If the potential reaches a threshold , the neuron emits an action potential and the membrane potential is reset , mimicking the after-hyperpolarization ., The analytical treatment of the threshold process is hampered by the pulsed nature of the input ., A frequently applied approximation treats synaptic inputs in the diffusion limit , in which postsynaptic potentials are vanishingly small while their rate of arrival is high ., In this limit , the summed input can be replaced by a Gaussian white noise current , which enables the application of Fokker-Planck theory 8 , 9 ., For this approximation the stationary membrane potential distribution and the firing rate are known exactly 8 , 10 , 11 ., The important effect of synaptic filtering has been studied in this limit as well; modelling synaptic currents as low-pass filtered Gaussian white noise with non-vanishing temporal correlations 12\u201315 ., Again , these 
results are strictly valid only if the synaptic amplitudes tend to zero and their rate of arrival goes to infinity ., For finite incoming synaptic events which are excitatory only , the steady state solution can still be obtained analytically 16 , 17 and also the transient solution can efficiently be obtained by numerical solution of a population equation 18 ., A different approach takes into account non-zero synaptic amplitudes to first calculate the free membrane potential distribution and then obtain the firing rate by solving the first passage time problem numerically 19 ., This approach may be extendable to conductance based synapses 20 ., Exact results for the steady state have so far only been presented for the case of exponentially distributed synaptic amplitudes 21 ., The spike threshold renders the model an extremely non-linear unit ., However , if the synaptic input signal under consideration is small compared to the total synaptic barrage , a linear approximation captures the main characteristics of the evoked response ., In this scenario all remaining inputs to the neuron are treated as background noise ( see Figure 1A ) ., Calculations of the linear response kernel in the diffusion limit suggested that the integrate-and-fire model acts as a low-pass filter 22 ., Here spectrum and amplitude of the synaptic background input are decisive for the transient properties of the integrate-and-fire model: in contrast to white noise , low-pass filtered synaptic noise leads to a fast response in the conserved linear term 12 ., Linear response theory predicts an optimal level of noise that promotes the response 23 ., In the framework of spike-response models , an immediate response depending on the temporal derivative of the postsynaptic potential has been demonstrated in the regime of low background noise 24 ., The maximization of the input-output correlation at a finite amplitude of additional noise is called stochastic resonance and has been found 
experimentally in mechanoreceptors of crayfish 25 , in the cercal sensory system of crickets 26 , and in human muscle spindles 27 ., The relevance and diversity of stochastic resonance in neurobiology was recently highlighted in a review article 28 ., Linear response theory enables the characterization of the recurrent dynamics in random networks by a phase diagram 22 , 29 ., It also yields approximations for the transmission of correlated activity by pairs of neurons in feed-forward networks 30 , 31 ., Furthermore , spike-timing dependent synaptic plasticity is sensitive to correlations between the incoming synaptic spike train and the firing of the neuron ( see Figure 1 ) , captured up to first order by the linear response kernel 32\u201338 ., For neuron models with non-linear membrane potential dynamics , the linear response properties 39 , 40 and the time-dependent dynamics can be obtained numerically 41 ., Afferent synchronized activity , as it occurs e . g . in primary sensory cortex 42 , easily drives a neuron beyond the range of validity of the linear response ., In order to understand transmission of correlated activity , the response of a neuron to fast transients with a multiple of a single synaptic amplitude 43 hence needs to be quantified ., In simulations of neuron models with realistic amplitudes for the postsynaptic potentials , we observed a systematic deviation of the output spike rate and the membrane potential distribution from the predictions by the Fokker-Planck theory modeling synaptic currents by Gaussian white noise ., We excluded any artifacts of the numerics by employing a dedicated high accuracy integration algorithm 44 , 45 ., The novel theory developed here explains these observations and led us to the discovery of a new early component in the response of the neuron model which linear response theory fails to predict ., In order to quantify our observations , we extend the existing Fokker-Planck theory 46 and hereby obtain the mean 
time at which the membrane potential first reaches the threshold; the mean first-passage time ., The advantage of the Fokker-Planck approach over alternative techniques has been demonstrated 47 ., For non-Gaussian noise , however , the treatment of appropriate boundary conditions for the membrane potential distribution is of utmost importance 48 ., In the results section we develop the Fokker-Planck formalism to treat an absorbing boundary ( the spiking threshold ) in the presence of non-zero jumps ( postsynaptic potentials ) ., For the special case of simulated systems propagated in time steps , an analog theory has recently been published by the same authors 49 , which allows one to assess artifacts introduced by time-discretization ., Our theory applied to the integrate-and-fire model with small but finite synaptic amplitudes 1 , introduced in section \u201cThe leaky integrate-and-fire model\u201d , quantitatively explains the deviations of the classical theory for Gaussian white noise input ., After reviewing the diffusion approximation of a general first order stochastic differential equation we derive a novel boundary condition in section \u201cDiffusion with finite increments and absorbing boundary\u201d ., We then demonstrate in section \u201cApplication to the leaky integrate-and-fire neuron\u201d how the steady state properties of the model are influenced: the density just below threshold is increased and the firing rate is reduced , correcting the preexisting mean first-passage time solution 10 for the case of finite jumps ., Turning to the dynamic properties , in section \u201cResponse to fast transients\u201d we investigate the consequences for transient responses of the firing rate to a synaptic impulse ., We find an instantaneous , non-linear response that is not captured by linear perturbation theory in the diffusion limit and that displays marked stochastic resonance ., On the network level , we demonstrate in section \u201cDominance of the non-linear 
component on the network level\u201d that the non-linear fast response becomes the most important component in case of feed-forward inhibition ., In the discussion we consider the limitations of our approach , mention possible extensions and speculate about implications for neural processing and learning ., Consider a leaky integrate-and-fire model 1 with membrane time constant and resistance receiving excitatory and inhibitory synaptic inputs , as they occur in balanced neural networks 50 ., We aim to obtain the mean firing rate and the steady state membrane potential distribution ., The input current is modeled by point events , drawn from homogeneous Poisson processes with rates and , respectively ., The membrane potential is governed by the differential equation ., An excitatory spike causes a jump of the membrane potential by , an inhibitory spike by , so , where is a constant background current ., Whenever reaches the threshold , the neuron emits a spike and the membrane potential is reset to , where it remains clamped for the absolute refractory time ., The approach we take is to modify the existing Fokker-Planck theory in order to capture the major effects of the finite jumps ., To this end , we derive a novel boundary condition at the firing threshold for the steady state membrane potential distribution of the neuron ., We then solve the Fokker-Planck equation obtained from the standard diffusion approximation 8 , 10 , 11 , 22 , 23 given this new condition ., The membrane potential of the model neuron follows a first order stochastic differential equation ., Therefore , in this section we consider a general first order stochastic differential equation driven by point events ., In order to distinguish the dimensionless quantities in this section from their counterparts in the leaky integrate-and-fire model , we denote the rates of the two incoming Poisson processes by ( excitation ) and ( inhibition ) ., Each incoming event causes a finite jump ( the 
excitatory synaptic weight ) for an increasing event and ( the inhibitory synaptic weight ) for a decreasing event ., The stochastic differential equation takes the form ( 1 ) where captures the deterministic time evolution of the system ( with for the leaky integrate-and-fire neuron ) ., We follow the notation in 46 and employ the Kramers-Moyal expansion with the infinitesimal moments ., The first and second infinitesimal moment evaluate to and , where we introduced the shorthand and ., The time evolution of the probability density is then governed by the Kramers-Moyal expansion , which we truncate after the second term to obtain the Fokker-Planck equation ( 2 ) where denotes the probability flux operator ., In the presence of an absorbing boundary at , we need to determine the resulting boundary condition for the stationary solution of ( 2 ) ., Without loss of generality , we assume the absorbing boundary at to be the right end of the domain ., A stationary solution exists , if the probability flux exiting at the absorbing boundary is reinserted into the system ., For the example of an integrate-and-fire neuron , reinsertion takes place due to resetting the neuron to the same potential after each threshold crossing ., This implies a constant flux through the system between the point of insertion and threshold ., Rescaling the density by this flux as results in the stationary Fokker-Planck equation , which is a linear inhomogeneous differential equation of first order ( 3 ) with ., First we consider the diffusion limit , in which the rate of incoming events diverges , while the amplitude of jumps goes to zero , such that mean and fluctuations remain constant ., In this limit , the Kramers-Moyal expansion truncated after the second term becomes exact 51 ., This route has been taken before by several authors 8 , 22 , 23 , here we review these results to consistently present our extension of the theory ., In the above limit equation ( 3 ) needs to be solved with the 
boundary conditionsMoreover , a finite probability flux demands the density to be a continuous function , because of the derivative in the flux operator ., In particular , the solution must be continuous at the point of flux insertion ( however , the first derivative is non-continuous at due to the step function in the right hand side of ( 3 ) ) ., Continuity especially implies a vanishing density at threshold ., Once the solution of ( 3 ) is found , the normalization condition determines the stationary flux ., Now we return to the problem of finite jumps ., We proceed along the same lines as in the diffusion limit , seeking the stationary solution of the Fokker-Planck equation ( 2 ) ., We keep the boundary conditions at and at as well as the normalization condition as before , but we need to find a new self-consistent condition at threshold , because the density does not necessarily have to vanish if the rate of incoming jumps is finite ., The main assumption of our work is that the steady state solution satisfies the stationary Fokker-Planck equation ( 3 ) based on the diffusion approximation within the interval , but not necessarily at the absorbing boundary , where the solution might be non-continuous ., To obtain the boundary condition , we note that the flux over the threshold has two contributions , the deterministic drift and the positive stochastic jumps crossing the boundary ( 4 ) ( 5 ) with ., To evaluate the integral in ( 5 ) , for small we expand into a Taylor series around ., This is where our main assumption enters: we assume that the stationary Fokker-Planck equation ( 3 ) for is a sufficiently accurate characterization of the jump diffusion process ., We solve this equation for It is easy to see by induction , that the function and all its higher derivatives , can be written in the form , whose coefficients for obey the recurrence relation ( 6 ) with the additional values and , as denotes the function itself ., Inserting the Taylor series into ( 5 
) and performing the integration results in ( 7 ) which is the probability mass moved across threshold by a perturbation of size and hence also quantifies the instantaneous response of the system ., After dividing ( 4 ) by we solve for to obtain the Dirichlet boundary condition ( 8 ) If is small compared to the length scale on which the probability density function varies , the probability density near the threshold is well approximated by a Taylor polynomial of low degree; throughout this work , we truncate ( 7 ) and ( 12 ) at ., The boundary condition ( 8 ) is consistent with in the diffusion limit , in which the rate of incoming jumps diverges , while their amplitude goes to zero , such that the first ( ) and second moment ( ) stay finite ., This can be seen by scaling , , with such that the mean is kept constant 51 ., Inserting this limit in ( 8 ) , we find ( 9 ) since , and vanishes for , is bounded and ., The general solution of the stationary Fokker-Planck equation ( 3 ) is a sum of a homogeneous solution that satisfies and a particular solution with ., The homogeneous solution is , where we fixed the integration constant by choosing ., The particular solution can be obtained by variation of constants and we chose it to vanish at the threshold as ., The complete solution is a linear combination , where the prefactor is determined by the boundary condition ( 8 ) in the case of finite jumps , or by for Gaussian white noise The normalization condition determines the as yet unknown constant probability flux through the system ., We now apply the theory developed in the previous section to the leaky integrate-and-fire neuron with finite postsynaptic potentials ., Due to synaptic impulses , the membrane potential drifts towards and fluctuates with the diffusion constant ., This suggests choosing the natural units for the time and for the voltage to obtain the simple expressions for the drift- and for the diffusion-term in the Fokker-Planck operator ( 2 ) ., The 
probability flux operator ( 2 ) is then given as ., In the same units the stationary probability density scaled by the flux reads where is the flux corresponding to the firing rate in units of ., As is already scaled by the flux , application of the flux operator yields unity between reset and threshold and zero outside ( 10 ) The steady state solution of this stationary Fokker-Planck equation ( 11 ) is a linear superposition of the homogeneous solution and the particular solution ., The latter is chosen to be continuous at and to vanish at ., Using the recurrence ( 6 ) for the coefficients of the Taylor expansion of the membrane potential density , we obtain and , where starts from ., The first important result of this section is the boundary value of the density at the threshold following from ( 8 ) as ( 12 ) The constant in ( 11 ) follows from ., The second result is the steady state firing rate of the neuron ., With being the fraction of neurons which are currently refractory , we obtain the rate from the normalization condition of the density as ( 13 ) The normalized steady state solution Figure 2A therefore has the complete form ( 14 ) Figure 2B , D shows the steady state solution near the threshold obtained by direct simulation to agree much better with our analytical approximation than with the theory for Gaussian white noise input ., Even for synaptic amplitudes ( here ) which are considerably smaller than the noise fluctuations ( here ) , the effect is still well visible ., The oscillatory deviations with periodicity close to reset observable in Figure 2A are due to the higher occupation probability of voltages that are integer multiples of a synaptic jump away from reset ., The modulation washes out due to coupling of adjacent voltages by the deterministic drift as one moves away from reset ., The oscillations at lower frequencies apparent in Figure 2A are due to aliasing caused by the finite bin width of the histogram ( ) ., The synaptic weight is 
typically small compared to the length scale on which the probability density function varies ., So the probability density near the threshold is well approximated by a Taylor polynomial of low degree; throughout this work , we truncate the series in ( 12 ) at ., A comparison of this approximation to the full solution is shown in Figure 2E ., For small synaptic amplitudes ( shown ) , below threshold and outside the reset region ( Figure 2A , C ) the approximation agrees with the simulation within its fluctuation ., At the threshold ( Figure 2B , D ) our analytical solution assumes a finite value whereas the direct simulation only drops to zero on a very short voltage scale on the order of the synaptic amplitude ., For larger synaptic weights ( , see Figure 2F ) , the density obtained from direct simulation exhibits a modulation on the corresponding scale ., The reason is the rectifying nature of the absorbing boundary: A positive fluctuation easily leads to a threshold crossing and absorption of the state in contrast to negative fluctuations ., Effectively , this results in a net drift to lower voltages within the width of the jump distribution caused by synaptic input , visible as the depletion of density directly below the threshold and an accumulation further away , as observed in Figure 2F ., The second term ( proportional to ) appearing in ( 13 ) is a correction to the well known firing rate equation of the integrate-and-fire model driven by Gaussian white noise 10 ., Figure 3 compares the firing rate predicted by the new theory to direct simulation and to the classical theory ., The classical theory consistently overestimates the firing rate , while our theory yields better accuracy ., Our correction resulting from the new boundary condition becomes visible at moderate firing rates when the density slightly below threshold is sufficiently high ., At low mean firing rates , the truncation of the Kramers-Moyal expansion employed in the Fokker-Planck description 
may contribute comparably to the error ., Our approximation captures the dependence on the synaptic amplitude correctly for synaptic amplitudes of up to ( Figure 3B ) ., The insets in Figure 3C , D show the relative error of the firing rate as a function of the noise amplitude ., As expected , the error increases with the ratio of the, synaptic effect compared to the amplitude of the noise fluctuations ., For low noise , our theory reduces the relative error by a factor of compared to the classical diffusion approximation ., We now proceed to obtain the response of the firing rate to an additional -shaped input current ., Such a current can be due to a single synaptic event or due to the synchronized arrival of several synaptic pulses ., In the latter case , the effective amplitude of the summed inputs can easily exceed that of a single synapse ., The fast current transient causes a jump of the membrane potential at and ( 2 ) suggests to treat the incident as a time dependent perturbation of the mean input ., First , we are interested in the integral response of the excess firing rate ., Since the perturbation has a flat spectrum , up to linear order in the spectrum of the excess rate is , where is the linear transfer function with respect to perturbing at Laplace frequency ., In particular , ., As is the DC susceptibility of the system , we can express it up to linear order as ., Hence , ( 15 ) We also take into account the dependence of on to calculate from ( 13 ) and obtain ( 16 ) Figure 4D shows the integral response to be in good agreement with the linear approximation ., This expression is consistent with the result in the diffusion limit : Here the last term becomes , where we used , following from ( 10 ) with ., This results in , which can equivalently be obtained directly as the derivative of ( 13 ) with respect to setting ., Taking the limit , however , does not change significantly the integral response compared to the case of finite synaptic amplitudes 
( Figure 4D , Figure 5A ) ., The instantaneous response of the firing rate to an impulse-like perturbation can be quantified without further approximation ., The perturbation shifts the probability density by so that neurons with immediately fire ., This results in the finite firing probability of the single neuron within infinitesimal time ( 5 ) , which is zero for ., This instantaneous response has several interesting properties: For small it can be approximated in terms of the value and the slope of the membrane potential distribution below the threshold ( using ( 7 ) for ) , so it has a linear and a quadratic contribution in ., Figure 4A shows a typical response of the firing rate to a perturbation ., The peak value for a positive perturbation agrees well with the analytical approximation ( 7 ) ( Figure 4C ) ., Even in the diffusion limit , replacing the background input by Gaussian white noise , the instantaneous response persists ., Using the boundary condition our theory is applicable to this case as well ., Since the density just below threshold is reduced , ( 5 ) yields a smaller instantaneous response ( Figure 4C , Figure 5B ) which for positive still exhibits a quadratic , but no linear , dependence ., The increasing and convex dependence of the response probability on the amplitude of the perturbation is a generic feature of neurons with subthreshold mean input that also persists in the case of finite synaptic rise time ., In this regime , the membrane potential distribution has a mono-modal shape centered around the mean input , which is inherited from the underlying superposition of a large number of small synaptic impulses ., The decay of the density towards the threshold is further enhanced by the probability flux over the threshold: a positive synaptic fluctuation easily leads to the emission of a spike and therefore to the absorption of the state at the threshold , depleting the density there ., Consequently , the response probability of the 
neuron is increasing and convex as long as the peak amplitude of the postsynaptic potential is smaller than the distance of the peak of the density to the threshold ., It is increasing and concave beyond this point ., At present the integrate-and-fire model is the simplest analytically tractable model with this feature ., The integral response ( 15 ) as well as the instantaneous response ( 5 ) both exhibit stochastic resonance; an optimal level of synaptic background noise enhances the transient ., Figure 5A shows this noise level to be at about for the integral response ., The responses to positive and negative perturbations are symmetric and the maximum is relatively broad ., The instantaneous response in Figure 5B displays a pronounced peak at a similar value of ., This non-linear response only exists for positive perturbations; the response is zero for negative ones ., Though the amplitude is reduced in the case of Gaussian white noise background , the behavior is qualitatively the same as for noise with finite jumps ., Stochastic resonance has been reported for the linear response to sinusoidal periodic stimulation 23 ., Also for non-periodic signals that are slow compared to the neurons dynamics an adiabatic approximation reveals stochastic resonance 52 ., In contrast to the latter study , the rate transient observed in our work is the instantaneous response to a fast ( Dirac ) synaptic current ., Due to the convex nature of the instantaneous response ( Figure 4C ) its relative contribution to the integral response increases with ., For realistic synaptic weights the contribution reaches percent ., An example network in which the linear non-instantaneous response cancels completely and the instantaneous response becomes dominant is shown in Figure 6A ., At two populations of neurons simultaneously receive a perturbation of size and respectively ., This activity may , for example , originate from a third pool of synchronous excitatory and inhibitory neurons ., 
It may thus be interpreted as feed-forward inhibition ., The linear contributions to the pooled firing rate response of the former two populations hence are zero ., The instantaneous response , however , causes a very brief overshoot at ( Figure 6B ) ., Figure 6C reveals that the response returns to baseline within ., Figure 6D shows that the dependence of peak height on still exhibits the supra-linearity ., The quite exact cancellation of the response for originates from the symmetry of the response functions for positive and negative perturbations in this interval ( shown in Figure 4A , B ) ., The pooled firing rate of the network is the sum of the full responses: the instantaneous response at does not share the symmetry and hence does not cancel ., This demonstrates that the result of linear perturbation theory is a good approximation for and that the instantaneous response at the single time point completes the characterization of the neuronal response ., In this work we investigate the effect of small , but non-zero synaptic impulses on the steady state and response properties of the integrate-and-fire neuron model ., We obtain a more accurate description of the firing rate and the membrane potential distribution in the steady state than provided by the classical approximation of Gaussian white noise input currents 10 ., Technically this is achieved by a novel hybrid approach combining a diffusive description of the membrane potential dynamics far away from the spiking threshold with an explicit treatment of threshold crossings by synaptic transients ., This allows us to obtain a boundary condition for the membrane potential density at threshold that captures the observed elevation of density ., Our work demonstrates that in addition to synaptic filtering , the granularity of the noise due to finite non-zero amplitudes does affect the steady state and the transient response properties of the neuron ., Here , we study the effect of granularity using the example 
of a simple neuron model with only one dynamic variable ., The quantitatively similar increase of the density close to threshold that is observed if low-pass filtered Gaussian white noise is used as a model for the synaptic current has a different origin ., It is due to the absence of a diffusion term in the dynamics of the membrane potential 12 , 13 , 15 ., The analytical treatment of finite synaptic amplitudes further allows us to characterize the probability of spike emission in response to synaptic inputs for neuron models with a single dynamical variable and renewal ., Alternatively , this response can be obtained numerically from population descriptions 18 , 39\u201341 or , for models with one or more dynamic variables and gradually changing inputs , in the framework of the refractory density approximation 15 ., Here , we find that the response can be decomposed into a fast , non-linear and a slow , linear contribution , as observed experimentally about a quarter of a century ago 53 in motor neurons of cat cortex in the presence of background noise ., The existence of a fast contribution proportional to the temporal change of the membrane potential was predicted theoretically 54 ., In the framework of the refractory density approach 15 , the effective hazard function of an integrate-and-fire neuron also exhibits contributions to spike emission due to two distinct causes: the diffusive flow through the threshold and the movement of density towards the threshold ., The latter contribution is proportional to the temporal change of the membrane potential and corresponds to the instantaneous response reported here , but for the case of a gradually increasing membrane potential ., Contemporary theory of recurrent networks has so far neglected the transient non-linear component of the neural response , an experimentally observed feature 53 that is generic to threshold units in the presence of noise ., The infinitely fast rise of the postsynaptic potential in the
integrate-and-fire model leads to the immediate emission of a spike with finite probability ., For excitatory inputs , this probability depends supra-linearly on the amplitude of the synaptic impulse , and it is zero for inhibitory impulses ., The supra-linear increase for small positive impulse amplitudes relates to the fact that the membrane potential density decreases towards threshold: the probability to instantaneously emit a spike equals the integral of the density shifted over the threshold ., The detailed shape of the density below threshold therefore determines the response properties ., For Gaussian white noise synaptic background , the model still displays an instantaneous response ., However , since in this case the density vanishes at threshold , the response probability to lowest order grows quadratically in the amplitude of a synaptic impulse ., This explains why previous work based on linear response theory did not report an instantaneous component when modulating the mean input and instead characterized the nerve cell as a low-pass filter in this case 22 , 23 ., Modulation of the noise amplitude , however , has been shown to cause an instantaneous response in linear approximation in the diffusion limit 23 , confirmed experimentally in real neurons 55 ., While linear response theory has proven extremely useful to understand recurrent neural networks 29 , the categorization of the integrate-and-fire neuron's response kernel as a low-pass filter is misleading , because it suggests the absence of an immediate response ., Furthermore , we find that in addition to the nature of the background noise , response properties also depend on its amplitude: a certain level of noise optimally promotes the spiking response ., Hence noise facilitates the transmission of the input to the output of the neuron ., This is stochastic resonance in the general sense of the term , as recently suggested 28 ., As noted in the introduction , stochastic resonance
of the linear response kernel has previously been demonstrated for sinusoidal input currents and Gaussian white background noise 23 ., Furthermore , also slow aperiodic transients are facilitated by stochastic resonance in the integrate-and-fire neuron 52 ., We extend the known results in two respects ., Firstly , we show that the linear response shows aperiodic stochastic resonance also for fast transients ., Secondly , we demonstrate tha","headings":"Introduction, Model, Results, Discussion","abstract":"Contemporary theory of spiking neuronal networks is based on the linear response of the integrate-and-fire neuron model derived in the diffusion limit ., We find that for non-zero synaptic weights , the response to transient inputs differs qualitatively from this approximation ., The response is instantaneous rather than exhibiting low-pass characteristics , non-linearly dependent on the input amplitude , asymmetric for excitation and inhibition , and is promoted by a characteristic level of synaptic background noise ., We show that at threshold the probability density of the potential drops to zero within the range of one synaptic weight and explain how this shapes the response ., The novel mechanism is exhibited on the network level and is a generic property of pulse-coupled networks of threshold units .","summary":"Our work demonstrates a fast-firing response of nerve cells that remained unconsidered in network analysis , because it is inaccessible by the otherwise successful linear response theory ., For the sake of analytic tractability , this theory assumes infinitesimally weak synaptic coupling ., However , realistic synaptic impulses cause a measurable deflection of the membrane potential ., Here we quantify the effect of this pulse-coupling on the firing rate and the membrane-potential distribution ., We demonstrate how the postsynaptic potentials give rise to a fast , non-linear rate transient present for excitatory , but not for inhibitory , inputs ., 
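The mechanism summarized here, an immediate spike emitted whenever a synaptic impulse shifts part of the membrane-potential density across the threshold, can be sketched numerically. This is a minimal illustration; the density shapes, the grid, and the function name are toy assumptions of ours, not quantities from the study:

```python
import numpy as np

def instant_spike_prob(density, v_grid, theta, w):
    # Probability of an immediate spike for an excitatory impulse of
    # amplitude w: the probability mass of the membrane-potential density
    # that the jump shifts across threshold, i.e. the integral of p(V)
    # over (theta - w, theta), approximated here by a Riemann sum.
    dv = v_grid[1] - v_grid[0]
    mask = (v_grid >= theta - w) & (v_grid <= theta)
    return float(density[mask].sum() * dv)

theta = 1.0                          # spike threshold (arbitrary units)
v = np.linspace(0.0, theta, 10_001)

# Finite-jump case: density decreases towards threshold but stays non-zero there.
p_jump = 2.0 - 1.5 * v
p_jump /= p_jump.sum() * (v[1] - v[0])   # normalize to unit probability mass

# Diffusion (Gaussian white noise) case: density vanishes at threshold.
p_diff = 2.0 * (theta - v)
p_diff /= p_diff.sum() * (v[1] - v[0])

for w in (0.01, 0.02, 0.04):
    print(w, instant_spike_prob(p_jump, v, theta, w),
          instant_spike_prob(p_diff, v, theta, w))
```

With the finite-jump density the probability grows supra-linearly in w (roughly 0.4 w + 0.6 w^2 for this toy density), while the diffusion-limit density gives purely quadratic growth (w^2), mirroring the excitatory/inhibitory asymmetry and the quadratic lowest-order response discussed in the text.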
It is particularly pronounced in the presence of a characteristic level of synaptic background noise ., We show that feed-forward inhibition enhances the fast response on the network level ., This enables a mode of information processing based on short-lived activity transients ., Moreover , the non-linear neural response appears on a time scale that critically interacts with spike-timing dependent synaptic plasticity rules ., Our results are derived for biologically realistic synaptic amplitudes , but also extend earlier work based on Gaussian white noise ., The novel theoretical framework is generically applicable to any threshold unit governed by a stochastic differential equation driven by finite jumps ., Therefore , our results are relevant for a wide range of biological , physical , and technical systems .","keywords":"biophysics\/theory and simulation, neuroscience\/theoretical neuroscience, computational biology\/computational neuroscience","toc":null} +{"Unnamed: 0":2010,"id":"journal.pcbi.1000776","year":2010,"title":"Assimilating Seizure Dynamics","sections":"A universal dilemma in understanding the brain is that it is complex , multiscale , nonlinear in space and time , and we never have more than partial experimental access to its dynamics ., To better understand its function one not only needs to encompass the complexity and nonlinearity , but also to estimate the unmeasured variables and parameters of brain dynamics ., A parallel can be drawn with weather forecasting 1 , although atmospheric dynamics are arguably less complex and less nonlinear ., Fortunately , the meteorological community has overcome some of these issues by using model-based predictor-controller frameworks whose development derived from the computational robotics requirements of aerospace programs in the 1960s 2 , 3 ., A predictor-controller system employs a computational model to observe a dynamical system ( e . g .
weather ) , assimilate data through what may be relatively sparse sensors , and reconstruct and estimate the remainder of the unmeasured variables and parameters in light of available data ., Future measurements of the system dynamics are then compared with the model-predicted outcome , the expected errors within the model are updated and corrected , and the process repeats iteratively ., For this recursive initial value problem to be meaningful , one needs computational models of high fidelity to the dynamics of the natural systems , and explicit modeling of the uncertainties within the model and measurements 3\u20135 ., The most prominent of the model-based predictor-controller strategies is the Kalman filter ( KF ) 2 ., In its original form , the KF solves a linear system ., In situations of mild nonlinearity , extended forms of the KF were used where the system equations could be linearized without losing too much of the qualitative nature of the system ., Such linearization schemes are not suitable for neuronal systems with nonlinearities on the scale of action potential spike generation ., With the advent of efficient nonlinear techniques in the 1990s such as the ensemble Kalman filter 6 , 7 and the unscented Kalman filter ( UKF ) 8 , 9 , along with improved computational models for the dynamics of neuronal systems ( incorporating synaptic inputs , cell types , and the dynamic microenvironment ) 10 , the prospects for biophysically based ensemble filtering from neuronal systems are now strong ., The general framework of the UKF differs from the extended KF in that it integrates the fundamental nonlinear models directly , along with iterating the error and noise expectations through these nonlinear equations ., Instead of linearizing the system equations , the UKF performs the prediction and update steps on an ensemble of potential system states ., This ensemble gives a finite sampling representation of the probability distribution function of the system state 3 ,
11\u201315 ., Our hypothesis is that seizures arise from a complex nonlinear interaction between specific excitatory and inhibitory neuronal sub-types 16 ., The dynamics and excitability of such networks are further complicated by the fact that a variety of metabolic processes ( such as potassium concentration ( ) gradients and local oxygen availability ) govern their excitability , and these metabolic variables are not directly measurable using electrical potential measurements ., Indeed , it is becoming increasingly apparent that electricity is not enough to describe a wide variety of neuronal phenomena ., Several seizure prediction algorithms , based only on EEG signals , have achieved reasonable accuracy when applied to static time-series 17\u201319 ., However , many techniques are hindered by high false positive rates , which render them unsuitable for clinical use ., We presume that there are aspects of the dynamics of seizure onset and pre-seizure states that are not captured in current models when applied in real-time ., In light of the dynamic nature of epilepsy , an approach that incorporates the time evolution of the underlying system is required for seizure prediction ., Just as one cannot see much of an anticipatory signature in EEG dynamics prior to seizures , the same can be said of a variety of oscillatory transient phenomena in the nervous system , ranging from up states 20 , spinal cord burst firing 21 , and cortical oscillatory waves 22 , to animal 23 and human 24 epileptic seizures ., All of these phenomena share the properties that they are episodic , oscillatory , and have apparent refractory periods following which small stimuli can both start and stop such events ., It has recently been shown that the interrelated dynamics of and sodium concentration ( ) affect the excitability of neurons , help determine the occurrence of seizures , and affect the stability of persistent states of neuronal activity 10 , 25 .,
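The dependence of excitability on extracellular potassium invoked here follows from the Nernst relation for the potassium reversal potential. A minimal sketch; the 140 mM intracellular concentration is a textbook assumption rather than a number from this study, the helper name is ours, and the 3 mM and 7 mM values mirror the seizure range reported later in this text:

```python
import math

def nernst_potential_mV(c_out_mM, c_in_mM, z=1, T=310.0):
    # Nernst reversal potential E = (R*T)/(z*F) * ln([ion]_out / [ion]_in),
    # returned in millivolts for an ion of valence z at temperature T (K).
    R, F = 8.314, 96485.0  # gas constant J/(mol*K), Faraday constant C/mol
    return 1000.0 * R * T / (z * F) * math.log(c_out_mM / c_in_mM)

# Raising extracellular K+ from a 3 mM baseline toward 7 mM depolarizes
# the K+ reversal potential by roughly 20 mV, increasing excitability.
print(nernst_potential_mV(3.0, 140.0))  # about -103 mV
print(nernst_potential_mV(7.0, 140.0))  # about -80 mV
```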
Competition between intrinsic neuronal ion currents , sodium-potassium pumps , glia , and diffusion can produce slow and large-amplitude oscillations in ion concentrations similar to what is observed physiologically in seizures 26 , 27 ., Brain dynamics emerge from within a system of apparently unique complexity among the natural systems we observe ., Even as multivariable sensing technology steadily improves , the near infinite dimensionality of the complex spatial extent of brain networks will require reconstruction through modeling ., Since at present , our technical capabilities restrict us to only one or two variables at a restricted number of sites ( such as voltage or calcium ) , computational models become the \u201clens\u201d through which we must consider viewing all brain measurements 28 ., In what follows , we will show the potential power of fusing physiological measurements with computational models ., We will use reconstruction to account for unmeasured parts of the neuronal system , relating micro-domain metabolic processes to cellular excitability , and validating cellular dynamical reconstruction against actual measurements ., Model inadequacy is an issue of intense research in the data assimilation community \u2013 no model does exactly what nature does ., To deal with inadequate models , researchers in areas such as meteorology have developed various strategies to account for the inaccuracies in the models for weather forecasting 4 , 5 , 29 ., In complex systems such as neuronal networks , the need to account for model inadequacy is critical ., To demonstrate that UKF can track neuronal dynamics in the face of moderate inadequacy , we impaired our model by setting the sodium current rate constant instead of using the actual complex function of , ( see equation ( 2 ) for the functional form of ) , and tracked it as a parameter ( Figure 3 ) ., That is , we deleted the relevant function for from the model and allowed UKF to update it as a parameter 
., The model with fixed is by itself unable to spike , but when it is allowed to float while voltage is assimilated through UKF using the data from hippocampal pyramidal cells ( PCs ) , it is capable of tracking the dynamics of the cell reasonably well ., The tracked by the filter is sufficiently close to its functional form values ( within 25% ) so that spiking dynamics can be reconstructed ( Figure 3C and 3D ) ., This occurs because Kalman filtering constantly estimates the trade-off between model accuracy and measurements , expressed in the filter gain function 2 , 3 ., This is an excellent demonstration of the robustness of this framework ., Looking at the estimated values of , it also becomes clear that in fact should be assigned the functional form rather than a constant value ., Despite decades of effort , neuroscientists lack a unifying dynamical principle for epilepsy ., An incomplete knowledge of the neural interactions during seizures makes the quest for unifying principles especially difficult 30 ., Here we show that UKF can be employed to track experimentally inaccessible neuronal dynamics during seizures ., Specifically , we used UKF to assimilate data from pairs of simultaneously impaled pyramidal cells and oriens-lacunosum moleculare ( OLM ) interneurons ( INs ) in the CA1 area of the hippocampus 23 ., We then used biophysical ionic models to estimate extra- and intracellular potassium , sodium , and calcium ion concentrations and various parameters controlling their dynamics during seizures ( Figure 4 ) ., In Figure 4A we show an intracellular recording from a pyramidal cell during seizures , and plot the estimated extracellular potassium concentration ( ) in Figure 4B ., As is clear from the figure , the extracellular potassium concentration oscillates as the cell goes into and out of seizures ., The potassium concentration begins to rise as the cell enters seizures and peaks with the maximal firing frequency , followed by decreasing potassium
concentration as the firing rate decreases and the seizure terminates ., Higher makes the PC more excitable by raising the reversal potential for currents ( equation 7 ) ., The increased reversal potential causes the cell to burst-fire spontaneously ., Whether the increased causes the cells to seize or is the result of seizures is an old question 31 whose resolution will likely come from a better understanding of the coupled dynamics ., For present purposes , it is known that increased in experiments can support the generation of seizures , and increase their frequency and propagation velocity 32 , 33 ., Changes in the concentration of intracellular sodium ions , , are closely coupled with the changes of ( Figure 4C ) ., As shown in panels ( 4D\u2013F ) , we reconstructed the parameters controlling the microenvironment of the cell ., These parameters included the diffusion constant of in the extracellular space , the buffering strength of glia , and the concentration in the reservoir of the perfusing solution in vitro ( or in the vasculature in vivo ) during seizures ., Note that the ionic concentration in the distant reservoir is different from the more rapid dynamics within the smaller connecting extracellular space near a single cell where excitability is determined ., We were also able to track other variables and parameters such as extracellular calcium concentration and ion channel conductances ., In Figure 5 , we show an expanded view of a single cell response during a single seizure from Figure 4 ., Extracellular potassium concentration increases several fold above baseline values during seizures 31 ., During a single seizure , starts rising from a baseline value of 3 .
0mM as the seizure begins and peaks at 7mM in the middle of the seizure ( Figure 5 ) ., Interestingly , the estimated by UKF closely matches the measured in in vitro studies 34 ., Considering the slow time scale of seizure evolution ( a period of more than 100 seconds in our experiments ) , we tested the importance of slow variables such as ion concentrations for seizure tracking ., As shown in Figure 6 , we found that including the dynamic intra- and extracellular ion concentrations in the model is necessary for accurate tracking of seizures ., Using Hodgkin-Huxley type ionic currents with fixed intra- and extracellular concentrations of and ions fails to track seizure dynamics in pyramidal cells ( Figure 6C ) ., We used physiologically normal concentrations of 4mM and 18mM for extracellular and intracellular respectively for these simulations ., The conclusion remains the same when higher and are used ., A similar tracking failure is found while tracking the dynamics of OLM interneurons during seizures ( not shown ) ., To further emphasize the importance of ion concentration dynamics for tracking seizures , we calculate Akaike's information criterion ( AIC ) for the two models used in Figure 6 , i . e .
the model with and without ion concentration dynamics ., AIC is a measure of the goodness of fit of a model and offers a measure of the information lost when a given model is used to describe experimental observations ., Loosely speaking , it describes the tradeoff between precision and complexity of the model 35 ., We used equation ( 29 ) for the AIC measure ., The AIC measure for the model without ion concentration dynamics is ., The model with ion concentration dynamics on the other hand has AIC value equal to , indicating the importance of ion concentration dynamics for tracking seizures ., Pyramidal cells and interneurons in the hippocampus reside in different layers with different cell densities ., To investigate whether there exist significant differences in the microenvironment surrounding these two cell types we assimilated membrane potential data from OLM interneurons in the hippocampus and reconstructed and ion concentrations inside and outside the cells ., As shown in Figure 7 , both the baseline level and peak near the interneurons must be very high as compared to that seen for the pyramidal cells ( cf . 
Figure 4B ) ., This is an important prediction in light of the recently observed interplay between pyramidal cells and interneurons during in vitro seizures 23; in these experiments pyramidal cells were silent when the interneurons were intensively firing ., Following intense firing the interneurons entered a state of depolarization block simultaneously with the emergence of intense epileptiform firing in pyramidal cells ., Such a novel pattern of interleaving neuronal activity is proposed to be a possible mechanism for the sudden drop in inhibition during seizures \u2013 it may be permissive of runaway excitatory activity ., The mechanism leading to such interplay , specifically the reasons for differential firing patterns in pyramidal cells and interneurons are unknown ., Our results here indicate the potential role of the neuronal microenvironment in producing such interplay ., Our findings suggest that the buffering mechanism in the OLM layer is weaker as compared with the pyramidal layer , thus causing higher in the OLM layer ., The higher surrounding the interneurons causes increased excitability of the cell by raising the reversal potential for currents ( higher than the pyramidal cells , see equation 7 ) ., The higher reversal potential for currents causes the interneuron to spontaneously burst fire at higher frequency and eventually drives the interneuron to transition into depolarization block when firing is peaked ., As the INs enter the depolarized state , the inhibitory synaptic input from the INs to the PCs drops substantially , releasing PCs to generate the intense excitatory activity of seizures ( equation 8 , Figure S3 ) ., The collapse of inhibition due to the entrance of INs into a depolarized state also helps explain the sudden decrease in inhibition at seizure onset in neocortex described by Trevelyan , et al . 
36 as the loss of inhibitory veto ., As shown in Figure S1 , we also tracked the remaining variables for the INs ., Since the interaction of neurons determines network patterns of activity , it is within such interactions that we seek unifying principles for epilepsy ., To demonstrate that the UKF framework can be utilized to study cellular interactions , we reconstructed the dynamics of one cell type by assimilating the measured data from another cell type in the network ., In Figure 8 we only show the estimated membrane potentials , but we also reconstructed the remaining variables and parameters of both cells ( Figures S2 and S3 ) ., We first assimilated the membrane potential of the PC to estimate the dynamics of the same cell and also the dynamics of a coupled IN ( Figure 8A\u2013D ) ., Conversely , we estimate the dynamics of PC from the simultaneously measured membrane potential measurements of the IN ( Figure 8D\u2013F ) ., As is evident from Figure 8 the filter framework is successful at reciprocally reconstructing and tracking the dynamics of these different cells within this network ., In Figure S2 , we show intracellular concentration and gating variables of and channels in PCs for simulation in Figure 8A\u2013D ., The variables modeling the synaptic inputs for both INs and PCs in Figure 8A\u2013D are shown in Figure S3 ., As clear from Figure S3 ( D ) , the variable ( equation 8 ) reaches very high values when the INs lock into depolarization block , shutting off the inhibitory inputs from INs to PCs ., In conclusion , we have demonstrated the feasibility for data assimilation within neuronal networks using detailed biophysical models ., In particular , we demonstrated that estimating the neuronal microenvironment and neuronal interactions can be performed by embedding our improving biophysical neuronal models within a model based state estimation framework ., This approach can provide a more complete understanding of otherwise incompletely observed 
neuronal dynamics during normal and pathological brain function ., We used two-compartmental models for the pyramidal cells and interneurons: a cellular compartment and the surrounding extracellular microenvironment ., The membrane potentials of both cells were modeled by Hodgkin-Huxley equations containing sodium , potassium , calcium-gated potassium ( after-hyperpolarization ) , and leak currents ., For the network model , the two cell types are coupled synaptically and through diffusion of potassium ions in the extracellular space ., A schematic of the model is shown in Figure 9 ., To estimate and track the dynamics of the neuronal networks , we applied a nonlinear ensemble version of the Kalman filter , the unscented Kalman filter ( UKF ) 8 , 9 ., The UKF uses known nonlinear dynamical equations and observation functions along with noisy , partially observed data to continuously update a Gaussian approximation for the neuronal state and its uncertainty ., At each integration step , perturbed system states that are consistent with the current state uncertainty , sigma points , are chosen ., The UKF consists of integrating the system from the sigma points , estimating mean state values , and then updating the covariance matrix that approximates the state uncertainty ., The Kalman gain matrix updates the new most likely state of the system based on the estimated measurements and the actual partially measured state ., The estimated states ( filtered states ) are used to estimate the experimentally inaccessible parameters and variables by synchronizing the model equations to the estimated states ., To estimate the system parameters from data , we introduced the unknown parameters as extra state variables with trivial dynamics ., The UKF with random initial conditions for the parameters will converge to an optimal set of parameters , or in the case of varying parameters , will track them along with the state variables 11\u201313 ., Given a function describing the 
dynamics of the system ( equations 1\u201310 in our case ) , and an observation function contaminated by uncertainty characterized in the covariance matrix , for a -dimensional state vector with mean the UKF generates the sigma points , \u2026 , so that their sample mean and sample covariance are and ., The sigma points are the rows of the matrix ( 11 ) The index on the left-hand side corresponds to the row taken from the matrix in the parenthesis on right-hand side ., The square root sign denotes the matrix square root and indicates transpose of the matrix ., Sigma points can be envisioned as sample points at the boundaries of a covariance ellipsoid ., In what follows , superscript tilde ( ) represents the a priori values of variables and parameter , i . e . the values at a given time-step when observation up to time-step are available , while hat ( ) represents the a posteriori quantities , i . e . the values at time-step when observations up to time-step are available ., Applying one step of the dynamics to the sigma points and calling the results , and denoting the observations of the new states by , we define the means ( 12 ) where and are the a priori state and measurement estimates , respectively ., Now define the a priori covariances ( 13 ) of the ensemble members ., The Kalman filter estimates of the new state and uncertainty are given by the a posteriori quantities ( 14 ) and ( 15 ) where is the Kalman gain matrix and is the actual observation 3 , 8 , 9 , 11\u201313 ., Thus and are the updated estimated state and covariance for the next step ., The a posteriori estimate of the observation is recovered by ., Thus by augmenting the observed state variables with unobserved state variables and system parameters , UKF can estimate and track both unobserved variables and system parameters .","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Observability of a dynamical system requires an understanding of its state\u2014the 
collective values of its variables ., However , existing techniques are too limited to measure all but a small fraction of the physical variables and parameters of neuronal networks ., We constructed models of the biophysical properties of neuronal membrane , synaptic , and microenvironment dynamics , and incorporated them into a model-based predictor-controller framework from modern control theory ., We demonstrate that it is now possible to meaningfully estimate the dynamics of small neuronal networks using as few as a single measured variable ., Specifically , we assimilate noisy membrane potential measurements from individual hippocampal neurons to reconstruct the dynamics of networks of these cells , their extracellular microenvironment , and the activities of different neuronal types during seizures ., We use reconstruction to account for unmeasured parts of the neuronal system , relating micro-domain metabolic processes to cellular excitability , and validate the reconstruction of cellular dynamical interactions against actual measurements ., Data assimilation , the fusing of measurement with computational models , has significant potential to improve the way we observe and understand brain dynamics .","summary":"To understand a complex system such as the weather or the brain , one needs an exhaustive detailing of the system variables and parameters ., But such systems are vastly undersampled from existing technology ., The alternative is to employ realistic computational models of the system dynamics to reconstruct the unobserved features ., This model based state estimation is referred to as data assimilation ., Modern robotics use data assimilation as the recursive predictive strategy that underlies the autonomous control performance of aerospace and terrestrial applications ., We here adapt such data assimilation techniques to a computational model of the interplay of excitatory and inhibitory neurons during epileptic seizures ., We show that 
incorporating lower scale metabolic models of potassium dynamics is essential for accuracy ., We apply our strategy using data from simultaneous dual intracellular impalements of inhibitory and excitatory neurons ., Our findings are , to our knowledge , the first validation of such data assimilation in neuronal dynamics .","keywords":"neuroscience\/theoretical neuroscience, computational biology\/computational neuroscience, neurological disorders\/epilepsy","toc":null} +{"Unnamed: 0":516,"id":"journal.pcbi.1000352","year":2009,"title":"Statistical Methods for Detecting Differentially Abundant Features in Clinical Metagenomic Samples","sections":"The increasing availability of high-throughput , inexpensive sequencing technologies has led to the birth of a new scientific field , metagenomics , encompassing large-scale analyses of microbial communities ., Broad sequencing of bacterial populations allows us a first glimpse at the many microbes that cannot be analyzed through traditional means ( only \u223c1% of all bacteria can be isolated and independently cultured with current methods 1 ) ., Studies of environmental samples initially focused on targeted sequencing of individual genes , in particular the 16S subunit of ribosomal RNA 2\u20135 , though more recent studies take advantage of high-throughput shotgun sequencing methods to assess not only the taxonomic composition , but also the functional capacity of a microbial community 6\u20138 ., Several software tools have been developed in recent years for comparing different environments on the basis of sequence data ., DOTUR 9 , Libshuff 10 , \u222b-libshuff 11 , SONs 12 , MEGAN 13 , UniFrac 14 , and TreeClimber 15 all focus on different aspects of such an analysis ., DOTUR clusters sequences into operational taxonomic units ( OTUs ) and provides estimates of the diversity of a microbial population thereby providing a coarse measure for comparing different communities ., SONs extends DOTUR with a statistic for 
estimating the similarity between two environments , specifically , the fraction of OTUs shared between two communities ., Libshuff and \u222b-libshuff provide a hypothesis test ( the Cram\u00e9r-von Mises statistic ) for deciding whether two communities are different , and TreeClimber and UniFrac frame this question in a phylogenetic context ., Note that these methods aim to assess whether , rather than how , two communities differ ., The latter question is particularly important as we begin to analyze the contribution of the microbiome to human health ., Metagenomic analysis in clinical trials will require information at individual taxonomic levels to guide future experiments and treatments ., For example , we would like to identify bacteria whose presence or absence contributes to human disease and develop antibiotic or probiotic treatments ., This question was first addressed by Rodriguez-Brito et al . 16 , who use bootstrapping to estimate the p-value associated with differences between the abundance of biological subsystems ., More recently , the software MEGAN of Huson et al . 13 provides a graphical interface that allows users to compare the taxonomic composition of different environments ., Note that MEGAN is the only one among the programs mentioned above that can be applied to data other than that obtained from 16S rRNA surveys ., These tools share one common limitation \u2014 they are all designed for comparing exactly two samples \u2014 and therefore have limited applicability in a clinical setting where the goal is to compare two ( or more ) treatment populations , each comprising multiple samples ., In this paper , we describe a rigorous statistical approach for detecting differentially abundant features ( taxa , pathways , subsystems , etc .
) between clinical metagenomic datasets ., This method is applicable to both high-throughput metagenomic data and to 16S rRNA surveys ., Our approach extends statistical methods originally developed for microarray analysis ., Specifically , we adapt these methods to discrete count data and correct for sparse counts ., Our research was motivated by the increasing focus of metagenomic projects on clinical applications ( e . g . Human Microbiome Project 17 ) ., Note that a similar problem has been addressed in the context of digital gene expression studies ( e . g . SAGE 18 ) ., Lu et al . 19 employ an overdispersed log-linear model and Robinson and Smyth 20 use a negative binomial distribution in the analysis of multiple SAGE libraries ., Both approaches can be applied to metagenomic datasets ., We compare our tool to these prior methodologies through comprehensive simulations , and demonstrate the performance of our approach by analyzing publicly available datasets , including 16S surveys of human gut microbiota and random sequencing-based functional surveys of infant and mature gut microbiomes and microbial and viral metagenomes ., The methods described in this paper have been implemented as a web server and are also available as free source-code ( in R ) from http:\/\/metastats . cbcb . umd . 
edu ., To account for different levels of sampling across multiple individuals , we convert the raw abundance measure to a fraction representing the relative contribution of each feature to each of the individuals ., This results in a normalized version of the matrix described above , where the cell in the ith row and the jth column ( which we shall denote fij ) is the proportion of taxon i observed in individual j ., We chose this simple normalization procedure because it provides a natural representation of the count data as a relative abundance measure ; however , other normalization approaches can be used to ensure observed counts are comparable across samples , and we are currently evaluating several such approaches ., For each feature i , we compare its abundance across the two treatment populations by computing a two-sample t statistic ., Specifically , we calculate the mean proportion \u03bcit\\u200a=\\u200a( 1\/nt ) \u03a3j fij and variance s2it\\u200a=\\u200a\u03a3j ( fij \u2212 \u03bcit ) 2 \/ ( nt \u2212 1 ) of each treatment t from which nt subjects ( columns in the matrix ) were sampled ., We then compute the two-sample t statistic ti\\u200a=\\u200a( \u03bci1 \u2212 \u03bci2 ) \/ \u221a( s2i1\/n1 + s2i2\/n2 ) ., Features whose t statistics exceed a specified threshold can be inferred to be differentially abundant across the two treatments ( two-sided t-test ) ., The threshold for the t statistic is chosen so as to minimize the number of false positives ( features incorrectly determined to be differentially abundant ) ., Specifically , we try to control the p-value\u2014the likelihood of observing a given t statistic by chance ., Traditional analyses compute the p-value using the t distribution with an appropriate number of degrees of freedom ., However , an implicit assumption of this procedure is that the underlying distribution is normal ., We do not make this assumption , but rather estimate the null distribution of ti non-parametrically using a permutation method as described in Storey and Tibshirani 21 ., This procedure , also known as the nonparametric t-test , has been shown to provide accurate estimates of significance when the underlying
distributions are non-normal 22 , 23 ., Specifically , we randomly permute the treatment labels of the columns of the abundance matrix and recalculate the t statistics ., Note that the permutation maintains that there are n1 replicates for treatment 1 and n2 replicates for treatment 2 ., Repeating this procedure for B trials , we obtain B sets of t statistics: t10b , \u2026 , tM0b , b\\u200a=\\u200a1 , \u2026 , B , where M is the number of rows in the matrix ., For each row ( feature ) , the p-value associated with the observed t statistic is calculated as the fraction of permuted tests with a t statistic greater than or equal to the observed ti: pi\\u200a=\\u200a#{ b : |ti0b| \u2265 |ti| } \/ B ., This approach is inadequate for small sample sizes in which there are a limited number of possible permutations of all columns ., As a heuristic , if fewer than 8 subjects are used in either treatment , we pool all permuted t statistics together into one null distribution and estimate p-values as pi\\u200a=\\u200a#{ ( j , b ) : |tj0b| \u2265 |ti| } \/ ( B\u00b7M ) ., Note that the choice of 8 for the cutoff is simply a heuristic based on experiments during the implementation of our method ., Our approach is specifically targeted at datasets comprising multiple subjects \u2014 for small data-sets , approaches such as that proposed by Rodriguez-Brito et al .
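The nonparametric t-test described above can be sketched in a few lines of Python ( an illustrative re-implementation , not the Metastats source , which is distributed in R ; the example feature proportions are made up ) :

```python
import numpy as np

def two_sample_t(x, y):
    # unequal-variance two-sample t statistic on one feature's proportions
    return (x.mean() - y.mean()) / np.sqrt(
        x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))

def permutation_pvalue(x, y, B=1000, seed=None):
    # fraction of B random label permutations whose |t| reaches the
    # observed |t|; each permutation keeps the group sizes n1 and n2
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    t_obs, n1 = abs(two_sample_t(x, y)), len(x)
    hits = sum(
        abs(two_sample_t(perm[:n1], perm[n1:])) >= t_obs
        for perm in (rng.permutation(pooled) for _ in range(B)))
    return hits / B

rng = np.random.default_rng(0)
x = rng.normal(0.30, 0.02, size=10)  # hypothetical proportions, treatment 1
y = rng.normal(0.20, 0.02, size=10)  # treatment 2, clearly shifted
p = permutation_pvalue(x, y, B=1000, seed=1)
```

With a shift this large relative to the spread , almost no relabeling reproduces the observed statistic , so the estimated p-value is at or near the 1\/B floor discussed below .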
16 might be more appropriate ., Unless explicitly stated otherwise , all experiments described below used 1000 permutations ., In general , the number of permutations should be chosen as a function of the significance threshold used in the experiment ., Specifically , a permutation test with B permutations can only estimate p-values as low as 1\/B ( in our case 10\u22123 ) ., In datasets containing many features , larger numbers of permutations are necessary to account for multiple hypothesis testing issues ( further corrections for this case are discussed below ) ., Precision of the p-value calculations is obviously improved by increasing the number of permutations used to approximate the null distribution , at a cost , however , of increased computational time ., For certain distributions , small p-values can be efficiently estimated using a technique called importance sampling ., Specifically , the permutation test is targeted to the tail of the distribution being estimated , reducing the number of permutations necessary by up to 95% 24 , 25 ., We intend to implement such an approach in future versions of our software ., For complex environments ( many features\/taxa\/subsystems ) , the direct application of the t statistic as described can lead to large numbers of false positives ., For example , choosing a p-value threshold of 0 .
05 would result in 50 false positives in a dataset comprising 1000 organisms ., An intuitive correction involves decreasing the p-value cutoff proportional to the number of tests performed ( a Bonferroni correction ) , thereby reducing the number of false positives ., This approach , however , can be too conservative when a large number of tests are performed 21 ., An alternative approach aims to control the false discovery rate ( FDR ) , which is defined as the proportion of false positives within the set of predictions 26 , in contrast to the false positive rate defined as the proportion of false positives within the entire set of tests ., In this context , the significance of a test is measured by a q-value , an individual measure of the FDR for each test ., We compute the q-values using the following algorithm , based on Storey and Tibshirani 21 ., This method assumes that the p-values of truly null tests are uniformly distributed , an assumption that holds for the methods used in Metastats ., Given an ordered list of p-values , p ( 1 ) \u2264p ( 2 ) \u2264\u2026\u2264p ( m ) ( where m is the total number of features ) , and a range of values \u03bb\\u200a=\\u200a0 , 0 . 01 , 0 . 02 , \u2026 , 0 . 90 , we compute \u03c00 ( \u03bb ) \\u200a=\\u200a#{ p ( i ) >\u03bb } \/ ( m ( 1\u2212\u03bb ) ) ., Next , we fit \u03c00 ( \u03bb ) with a cubic spline with 3 degrees of freedom , which we denote f , and let \u03c00\\u200a=\\u200af ( 0 . 90 ) ., Finally , we estimate the q-value corresponding to each ordered p-value ., First , q ( m ) \\u200a=\\u200a\u03c00 p ( m ) ., Then for i\\u200a=\\u200am-1 , m-2 , \u2026 , 1 , q ( i ) \\u200a=\\u200amin ( \u03c00 m p ( i ) \/ i , q ( i+1 ) ) ., Thus , the hypothesis test with p-value p ( i ) has a corresponding q-value of q ( i ) ., Note that this method yields conservative estimates of the true q-values , i . e ., q ( i ) \u2265 qtrue ( i ) ., Our software provides users with the option to use either p-value or q-value thresholds , irrespective of the complexity of the data ., For low frequency features , e . g .
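The step-down q-value computation just described can be sketched as follows ( an illustrative Python re-implementation ; the spline-based \u03c00 estimate is simplified here to a fixed pi0 argument , whereas Metastats itself fits \u03c00 with a cubic spline ) :

```python
import numpy as np

def q_values(pvals, pi0=1.0):
    # Storey-Tibshirani-style step-down: q(m) = pi0 * p(m), then
    # q(i) = min(pi0 * m * p(i) / i, q(i+1)) walking down the sorted list.
    # pi0 = 1 is the conservative limit; Metastats estimates pi0 by spline.
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)            # indices giving p(1) <= ... <= p(m)
    q = np.empty(m)
    q[order[-1]] = pi0 * p[order[-1]]
    for i in range(m - 2, -1, -1):   # ranks m-1 down to 1 (1-based: i + 1)
        q[order[i]] = min(q[order[i + 1]], pi0 * m * p[order[i]] / (i + 1))
    return q

q = q_values([0.001, 0.01, 0.02, 0.5, 0.9])
```

With pi0 fixed at 1 this reduces to the familiar Benjamini-Hochberg adjustment ; thresholding the returned q-values at 0 . 05 then bounds the FDR at 5% .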
low abundance taxa , the nonparametric t-test described above is not accurate 27 ., We performed several simulations ( data not shown ) to determine the limitations of the nonparametric t-test for sparsely-sampled features ., Correspondingly , our software only applies the test if the total number of observations of a feature in either population is greater than the total number of subjects in the population ( i . e . the average across subjects of the number of observations for a given feature is greater than one ) ., We compare the differential abundance of sparsely-sampled ( rare ) features using Fisher's exact test ., Fisher's exact test models the sampling process according to a hypergeometric distribution ( sampling without replacement ) ., The frequencies of sparse features within the abundance matrix are pooled to create a 2\u00d72 contingency table ( Figure 2 ) , which acts as input for a two-tailed test ., Using the notation from Figure 2 , the null hypergeometric probability of observing a 2\u00d72 contingency table is P\\u200a=\\u200a( R1 ! R2 ! C1 ! C2 ! ) \/ ( n ! f11 ! f12 ! f21 ! f22 ! ) , where fij are the cell counts , Ri and Cj the row and column totals , and n the grand total ., By calculating this probability for a given table , and all tables more extreme than that observed , one can calculate the exact probability of obtaining the original table by chance assuming that the null hypothesis ( i . e .
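The two-tailed test can be computed directly from the hypergeometric probability , as in this sketch ( an illustrative implementation , not the Metastats source ; the pooled counts are hypothetical , and "more extreme" is taken in the usual sense of tables no more probable than the observed one ) :

```python
from math import comb

def table_prob(a, b, c, d):
    # null hypergeometric probability of the 2x2 table [[a, b], [c, d]]
    # with all row and column totals held fixed
    n = a + b + c + d
    return comb(a + b, a) * comb(c + d, c) / comb(n, a + c)

def fisher_two_sided(a, b, c, d):
    # sum the probabilities of every table with the same margins that is
    # as extreme as (no more probable than) the observed table
    p_obs = table_prob(a, b, c, d)
    n, row1, col1 = a + b + c + d, a + b, a + c
    p_total = 0.0
    for x in range(max(0, row1 + col1 - n), min(row1, col1) + 1):
        px = table_prob(x, row1 - x, col1 - x, n - row1 - col1 + x)
        if px <= p_obs * (1 + 1e-9):
            p_total += px
    return min(p_total, 1.0)

# hypothetical pooled counts: the rare feature appears 1 time among 1000
# reads in population 1 and 14 times among 1000 reads in population 2
p = fisher_two_sided(1, 999, 14, 986)
```

Swapping the two populations leaves the p-value unchanged , as expected for a two-tailed test on a table with fixed margins .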
no differential abundance ) is true 27 ., Note that an alternative approach to handling sparse features is proposed in microarray literature ., The Significance Analysis of Microarrays ( SAM ) method 28 addresses low levels of expression using a modified t statistic ., We chose to use Fisher's exact test due to the discrete nature of our data , and because prior studies performed in the context of digital gene expression indicate Fisher's test to be effective for detection of differential abundance 29 ., The input to our method , the Feature Abundance Matrix , can be easily constructed from both 16S rRNA and random shotgun data using available software packages ., Specifically for 16S taxonomic analysis , tools such as the RDP Bayesian classifier 30 and Greengenes SimRank 31 output easily-parseable information regarding the abundance of each taxonomic unit present in a sample ., As a complementary , unsupervised approach , 16S sequences can be clustered with DOTUR 9 into operational taxonomic units ( OTUs ) ., Abundance data can be easily extracted from the \u201c* . list\u201d file detailing which sequences are members of the same OTU ., Shotgun data can be functionally or taxonomically classified using MEGAN 13 , CARMA 32 , or MG-RAST 33 ., MEGAN and CARMA are both capable of outputting lists of sequences assigned to a taxonomy or functional group ., MG-RAST provides similar information for metabolic subsystems that can be downloaded as a tab-delimited file ., All data-types described above can be easily converted into a Feature Abundance Matrix suitable as input to our method ., In the future we also plan to provide converters for data generated by commonly-used analysis tools ., Human gut 16S rRNA sequences were prepared as described in Eckburg et al . and Ley et al .
( 2006 ) and are available in GenBank , accession numbers: DQ793220-DQ802819 , DQ803048 , DQ803139-DQ810181 , DQ823640-DQ825343 , AY974810-AY986384 ., In our experiments we assigned all 16S sequences to taxa using a na\u00efve Bayesian classifier currently employed by the Ribosomal Database Project II ( RDP ) 30 ., COG profiles of 13 human gut microbiomes were obtained from the supplementary material of Kurokawa et al . 34 ., We acquired metabolic functional profiles of 85 metagenomes from the online supplementary materials of Dinsdale et al . ( 2008 ) ( http:\/\/www . theseed . org\/DinsdaleSupplementalMaterial\/ ) ., As outlined in the introduction , statistical packages developed for the analysis of SAGE data are also applicable to metagenomic datasets ., In order to validate our method , we first designed simulations and compared the results of Metastats to Student's t-test ( with pooled variances ) and two methods used for SAGE data: a log-linear model ( Log-t ) by Lu et al . 19 , and a negative binomial ( NB ) model developed by Robinson and Smyth 20 ., We designed a metagenomic simulation study in which ten subjects are drawn from two groups ; the sampling depth of each subject was determined by random sampling from a uniform distribution between 200 and 1000 ( these depths are reasonable for metagenomic studies ) ., Given a population mean proportion p and a dispersion value \u03c6 , we sample sequences from a beta-binomial distribution \u0392 ( \u03b1 , \u03b2 ) , where \u03b1\\u200a=\\u200ap ( 1\/\u03c6\u22121 ) and \u03b2\\u200a= ( 1\u2212p ) ( 1\/\u03c6\u22121 ) ., Note that data from this sampling procedure fits the assumptions for Lu et al . as well as Robinson and Smyth and therefore we expect them to do well under these conditions ., Lu et al .
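The beta-binomial sampling scheme just described can be sketched as follows ( illustrative Python , not the simulation code used in the study ; the parameter values are hypothetical ) :

```python
import numpy as np

def simulate_feature_counts(p, phi, n_subjects=10, seed=None):
    # per-subject sampling depths drawn uniformly from [200, 1000], then
    # beta-binomial counts: success probabilities come from Beta(alpha, beta)
    # with alpha = p * (1/phi - 1) and beta = (1 - p) * (1/phi - 1)
    rng = np.random.default_rng(seed)
    depths = rng.integers(200, 1001, size=n_subjects)
    shape = 1.0 / phi - 1.0
    props = rng.beta(p * shape, (1.0 - p) * shape, size=n_subjects)
    return rng.binomial(depths, props), depths

counts, depths = simulate_feature_counts(p=0.1, phi=0.001, seed=42)
```

A differentially abundant feature is simulated by drawing the second population with mean a*p for some effect size a , as in the text ; the dispersion \u03c6 controls how far each subject's proportion strays from p .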
designed a similar study for SAGE data , however , for each simulation , a fixed dispersion was used for both populations and the dispersion estimates were remarkably small ( \u03c6\\u200a=\\u200a0 , 8e-06 , 2e-05 , 4 . 3e-05 ) ., Though these values may be reasonable for SAGE data , we found that they do not accurately model metagenomic data ., Figure 3 displays estimated dispersions within each population for all features of the metagenomic datasets examined below ., Dispersion estimates range from 1e-07 to 0 . 17 , and rarely do the two populations share a common dispersion ., Thus we designed our simulation so that \u03c6 is chosen for each population randomly from a uniform distribution between 1e-08 and 0 . 05 , allowing for potential significant differences between population distributions ., For each set of parameters , we simulated 1000 feature counts , 500 of which are generated under p1\\u200a=\\u200ap2 , the remainder are differentially abundant where a*p1\\u200a=\\u200ap2 , and compared the performance of each method using receiver-operating-characteristic ( ROC ) curves ., Figure 4 displays the ROC results for a range of values for p and a ., For each set of parameters , Metastats was run using 5000 permutations to compute p-values ., Metastats performs as well as other methods , and in some cases is preferable ., We also found that in most cases our method was more sensitive than the negative binomial model , which performed poorly for high abundance features ., Our next simulation sought to examine the accuracy of each method under extreme sparse sampling ., As shown in the datasets below , it is often the case that a feature may not have any observations in one population , and so it is essential to employ a statistical method that can address this frequent characteristic of metagenomic data ., Under the same assumptions as the simulation above , we tested a\\u200a=\\u200a0 and 0 . 
01 , thereby significantly reducing observations of a feature in one of the populations ., The ROC curves presented in Figure 5 reveal that Metastats outperforms other statistical methods in the face of extreme sparseness ., Holding the false positive rate ( x-axis ) constant , Metastats shows increased sensitivity over all other methods ., The poor performance of Log-t is noteworthy given that it is designed for SAGE data that is also potentially sparse ., Further investigation revealed that the Log-t method results in a highly inflated dispersion value if there are no observations in one population , thereby reducing the estimated significance of the test ., Finally , we selected a subset of the Dinsdale et al . 6 metagenomic subsystem data ( described below ) , and randomly assigned each subject to one of two populations ( 20 subjects per population ) ., All subjects were actually from the same population ( microbial metagenomes ) , thus the null hypothesis is true for each feature tested ( no feature is differentially abundant ) ., We ran each methodology on this data , recording computed p-values for each feature ., Repeating this procedure 200 times , we simulated tests of 5200 null features ., Table 1 displays the number of false positives incurred by each methodology given different p-value thresholds ., The results indicate that the negative binomial model results in an exceptionally high number of false positives relative to the other methodologies ., Student's t-test and Metastats perform equally well in estimating the significance of these null features , while Log-t performs slightly better ., These studies show that Metastats consistently performs as well as all other applicable methodologies for deeply-sampled features , and outperforms these methodologies on sparse data ., Below we further evaluate the performance of Metastats on several real metagenomic datasets ., In a recent study , Ley et al .
35 identified gut microbes associated with obesity in humans and concluded that obesity has a microbial element , specifically that Firmicutes and Bacteroidetes are bacterial divisions differentially abundant between lean and obese humans ., Obese subjects had a significantly higher relative abundance of Firmicutes and a lower relative abundance of Bacteroidetes than the lean subjects ., Furthermore , obese subjects were placed on a calorie-restricted diet for one year , after which the subjects' gut microbiota more closely resembled that of the lean individuals ., We obtained the 20 , 609 16S rRNA genes sequenced in Ley et al . and assigned them to taxa at different levels of resolution ( note that 2 , 261 of the 16S sequences came from a previous study 36 ) ., We initially sought to re-establish the primary result from this paper using our methodology ., Table 2 illustrates that our method agreed with the results of the original study: Firmicutes are significantly more abundant in obese subjects ( P\\u200a=\\u200a0 . 003 ) and Bacteroidetes are significantly more abundant in the lean population ( P<0 . 001 ) ., Furthermore , our method also detected Actinobacteria to be differentially abundant , a result not reported by the original study ., Approximately 5% of the sample was composed of Actinobacteria in obese subjects and was significantly less frequent in lean subjects ( P\\u200a=\\u200a0 . 004 ) ., Collinsella and Eggerthella were the most prevalent Actinobacterial genera observed , both of which were overabundant in obese subjects ., These organisms are known to ferment sugars into various fatty acids 37 , further strengthening a possible connection to obesity ., Note that the original study used Student's t-test , leading to a p-value for the observed difference within Actinobacteria of 0 .
037 , 9 times larger than our calculation ., This highlights the sensitivity of our method and explains why this difference was not originally detected ., To explore whether we could refine the broad conclusions of the initial study , we re-analyzed the data at more detailed taxonomic levels ., We identified three classes of organisms that were differentially abundant: Clostridia ( P\\u200a=\\u200a0 . 005 ) , Bacteroidetes ( P<0 . 001 ) , and Actinobacteria ( P\\u200a=\\u200a0 . 003 ) ., These three were the dominant members of the corresponding phyla ( Firmicutes , Bacteroides , Actinobacteria , respectively ) and followed the same distribution as observed at a coarser level ., Metastats also detected nine differentially abundant genera accounting for more than 25% of the 16S sequences sampled in both populations ( P\u22640 . 01 ) ., Syntrophococcus , Ruminococcus , and Collinsella were all enriched in obese subjects , while Bacteroides on average were eight times more abundant in lean subjects ., For taxa with several observations in each subject , we found good concordance between our results ( p-value estimates ) and those obtained with most of the other methods ( Table 2 ) ., Surprisingly , we found that the negative binomial model of Robinson and Smyth failed to detect several strongly differentially abundant features in these datasets ( e . g . the hypothesis test for Firmicutes results in a p-value of 0 . 87 ) ., This may be due in part to difficulties in estimating the parameters of their model for our datasets and further strengthens the case for the design of methods specifically tuned to the characteristics of metagenomic data ., For cases where a particular taxon had no observations in one population ( e . g .
Terasakiella ) , the methods proposed for SAGE data seem to perform poorly ., Targeted sequencing of the 16S rRNA can only provide an overview of the diversity within a microbial community but cannot provide any information about the functional roles of members of this community ., Random shotgun sequencing of environments can provide a glimpse at the functional complexity encoded in the genes of organisms within the environment ., One method for defining the functional capacity of an environment is to map shotgun sequences to homologous sequences with known function ., This strategy was used by Kurokawa et al . 34 to identify clusters of orthologous groups ( COGs ) in the gut microbiomes of 13 individuals , including four unweaned infants ., We examined the COGs determined by this study across all subjects and used Metastats to discover differentially abundant COGs between infants and mature ( >1 year old ) gut microbiomes ., This is the first direct comparison of these two populations , as the original study only compared each population to a reference database to find enriched gene sets ., Due to the high number of features ( 3868 COGs ) tested for this dataset and the limited number of infant subjects available , our method used the pooling option to compute p-values ( we chose 100 permutations ) , and subsequently computed q-values for each feature ., Using a threshold of Q\u22640 . 05 ( controlling the false discovery rate to 5% ) , we detected 192 COGs that were differentially abundant between these two populations ( see Table 3 for a listing of the most abundant COGs in both mature and infant microbiomes .
Full results are presented as supplementary material in Table S1 ) ., The most abundant enriched COGs in mature subjects included signal transduction histidine kinase ( COG0642 ) , outer membrane receptor proteins , such as Fe transport ( COG1629 ) , and Beta-galactosidase\/beta-glucuronidase ( COG3250 ) ., These COGs were also quite abundant in infants , but depleted relative to mature subjects ., Infants maintained enriched COGs related to sugar transport systems ( COG1129 ) and transcriptional regulation ( COG1475 ) ., This over-abundance of sugar transport functions was also found in the original study , strengthening the hypothesis that the unweaned infant gut microbiome is specifically designed for the digestion of simple sugars found in breast milk ., Similarly , the depletion of Fe transport proteins in infants may be associated with the low concentration of iron in breast milk relative to cow's milk 38 ., Despite this low concentration , infant absorption of iron from breast milk is remarkably high , and becomes poorer when infants are weaned , indicating an alternative mechanism for uptake of this mineral ., The potential for a different mechanism is supported by the detection of a Ferredoxin-like protein ( COG2440 ) that was 11 times more abundant in infants than in mature subjects , while Ferredoxin ( COG1145 ) was significantly enriched in mature subjects ., A recent study by Dinsdale et al . profiled 87 different metagenomic shotgun samples ( \u223c15 million sequences ) using the SEED platform ( http:\/\/www . theseed . org ) 6 to see if biogeochemical conditions correlate with metagenome characteristics ., We obtained functional profiles from 45 microbial and 40 viral metagenomes analyzed in this study ., Within the 26 subsystems ( abstract functional roles ) analyzed in the Dinsdale et al . study , we found 13 to be significantly different ( P\u22640 .
05 ) between the microbial and viral samples ( Table 4 ) ., Subsystems for RNA and DNA metabolism were significantly more abundant in viral metagenomes , while nitrogen metabolism , membrane transport , and carbohydrates were all enriched in microbial communities ., The high levels of RNA and DNA metabolism in viral metagenomes illustrate their need for a self-sufficient source of nucleotides ., Though the differences described by the original study did not include estimates of significance , our results largely agreed with the authors' qualitative conclusions ., However , due to the continuously updated annotations in the SEED database since the initial publication , we found several differences between our results and those originally reported ., In particular we found virulence subsystems to be less abundant overall than previously reported , and could not find any significant differences in their abundance between the microbial and viral metagenomes ., We have presented a statistical method for handling frequency data to detect differentially abundant features between two populations ., This method can be applied to the analysis of any count data generated through molecular methods , including random shotgun sequencing of environmental samples , targeted sequencing of specific genes in a metagenomic sample , digital gene expression surveys ( e . g . SAGE 29 ) , or even whole-genome shotgun data ( e . g .
comparing the depth of sequencing coverage across assembled genes ) ., Comparisons on both simulated and real datasets indicate that the performance of our software is comparable to other statistical approaches when applied to well-sampled datasets , and outperforms these methods on sparse data ., Our method can also be generalized to experiments with more than two populations by substituting the t-test with a one-way ANOVA test ., Furthermore , if only a single sample from each treatment is available , a chi-squared test can be used instead of the t-test 27 ., In the coming years metagenomic studies will increasingly be applied in a clinical setting , requiring new algorithms and software tools to be developed that can exploit data from hundreds to thousands of patients ., The methods described above represent an initial step in this direction by providing a robust and rigorous statistical method for identifying organisms and other features whose differential abundance correlates with disease ., These methods , associated source code , and a web interface to our tools are freely available at http:\/\/metastats . cbcb . umd . edu .","headings":"Introduction, Materials and Methods, Results, Discussion","abstract":"Numerous studies are currently underway to characterize the microbial communities inhabiting our world ., These studies aim to dramatically expand our understanding of the microbial biosphere and , more importantly , hope to reveal the secrets of the complex symbiotic relationship between us and our commensal bacterial microflora ., An important prerequisite for such discoveries are computational tools that are able to rapidly and accurately compare large datasets generated from complex bacterial communities to identify features that distinguish them ., We present a statistical method for comparing clinical metagenomic samples from two treatment populations on the basis of count data ( e . g .
as obtained through sequencing ) to detect differentially abundant features ., Our method , Metastats , employs the false discovery rate to improve specificity in high-complexity environments , and separately handles sparsely-sampled features using Fisher's exact test ., Under a variety of simulations , we show that Metastats performs well compared to previously used methods , and significantly outperforms other methods for features with sparse counts ., We demonstrate the utility of our method on several datasets including a 16S rRNA survey of obese and lean human gut microbiomes , COG functional profiles of infant and mature gut microbiomes , and bacterial and viral metabolic subsystem data inferred from random sequencing of 85 metagenomes ., The application of our method to the obesity dataset reveals differences between obese and lean subjects not reported in the original study ., For the COG and subsystem datasets , we provide the first statistically rigorous assessment of the differences between these populations ., The methods described in this paper are the first to address clinical metagenomic datasets comprising samples from multiple subjects ., Our methods are robust across datasets of varied complexity and sampling level ., While designed for metagenomic applications , our software can also be applied to digital gene expression studies ( e . g . SAGE ) ., A web server implementation of our methods and freely available source code can be found at http:\/\/metastats . cbcb . umd . edu\/ .","summary":"The emerging field of metagenomics aims to understand the structure and function of microbial communities solely through DNA analysis ., Current metagenomics studies comparing communities resemble large-scale clinical trials with multiple subjects from two general populations ( e . g .
sick and healthy ) ., To improve analyses of this type of experimental data , we developed a statistical methodology for detecting differentially abundant features between microbial communities , that is , features that are enriched or depleted in one population versus another ., We show our methods are applicable to various metagenomic data ranging from taxonomic information to functional annotations ., We also provide an assessment of taxonomic differences in gut microbiota between lean and obese humans , as well as differences between the functional capacities of mature and infant gut microbiomes , and those of microbial and viral metagenomes ., Our methods are the first to statistically address differential abundance in comparative metagenomics studies with multiple subjects , and we hope they will give researchers a more complete picture of how exactly two environments differ .","keywords":"computational biology\/metagenomics","toc":null} +{"Unnamed: 0":2388,"id":"journal.pcbi.1003549","year":2014,"title":"Within-Host Bacterial Diversity Hinders Accurate Reconstruction of Transmission Networks from Genomic Distance Data","sections":"A bacterial population of size N , which is initially genetically homogeneous , diversifies over time due to the random introduction of mutations at rate \u03bc per genome per generation ., While there are many measures of diversity , we consider the expected pairwise genetic distance ( eg ., number of single nucleotide polymorphisms ( SNPs ) ) observed when sampling two random isolates from the population , D\\u200a=\\u200a\u03a3i , j pi pj dij , where dij is the genetic distance between variants i and j , whose respective frequencies are pi and pj ., Under neutral assumptions , the expected pairwise SNP distance at equilibrium is 2Ne\u03bc 9 , where Ne is the effective population size , and \u03bc is the mutation rate ., However , equilibrium dynamics cannot typically be assumed for within-host carriage of a bacterial pathogen ., An initially clonal population takes a considerable amount of time to reach equilibrium
levels of diversity ( Figure S1 ) ., Evidence has recently emerged that in some pathogens within-host genetic diversity is common ., In principle , an individual may harbor a diverse pathogen population due to one or more of the following: infection with a diverse inoculum , diversification of the population due to mutation or other genetic change during infection , and multiple infections from different sources ., Studies of Staphylococcus aureus have revealed carriage of multiple sequence types , likely caused by independent transmission events 10 , 11 , as well as diversification over time in long-term carriers 12 , 13 , and the coexistence of several genotypes , differing by several SNPs 14 , 15 ., Streptococcus pneumoniae populations in an individual may harbor genetically divergent lineages , as has long been appreciated 16 ., Within-host diversity of other bacterial pathogens has been studied less frequently , although there is some evidence for heterogeneous carriage of Helicobacter pylori 17 , Pseudomonas aeruginosa 18 , Burkholderia dolosa 19 and Klebsiella pneumoniae 20 ., A transmission event involves passing a sample ( inoculum ) of bacteria from a carrier to a susceptible individual ., This is an example of a population bottleneck , as a small fraction of the original population is allowed to independently grow and mutate in a new environment ., Assuming the inoculum is a random sample of size greater than 1 , it can be shown that the expected sample diversity is equal to that of the original population regardless of the size of the bottleneck ( see Supporting Information ) ., However , the variance of the expected diversity is inversely proportional to the size of the bottleneck ( Figure S2 ) , demonstrating that small bottlenecks may generate considerably different levels of diversity in the recipient due to stochastic effects ., Estimating the bottleneck size associated with transmission is challenging , not least because estimates of pathogen 
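The two bottleneck properties stated above , that the expected diversity of a random inoculum matches the source population regardless of bottleneck size while its variance grows as the bottleneck narrows , can be checked with a toy Monte Carlo sketch ( all variant counts , frequencies , and distances here are made up ) :

```python
import numpy as np

rng = np.random.default_rng(0)

# toy within-host population: 10,000 cells carrying one of 8 variants,
# with a made-up symmetric SNP-distance matrix between variants
n_variants = 8
d = np.triu(rng.integers(1, 20, size=(n_variants, n_variants)), 1)
d = d + d.T                              # symmetric, zero diagonal
population = rng.integers(0, n_variants, size=10_000)

def diversity(sample):
    # mean genetic distance over distinct ordered pairs in the sample;
    # self-pairs lie on the zero diagonal so they add nothing to the sum
    sub = d[np.ix_(sample, sample)]
    n = len(sample)
    return sub.sum() / (n * (n - 1))

def inoculum_diversity(bottleneck, trials=2000):
    # draw many inocula of the given size and summarize their diversity
    vals = [diversity(rng.choice(population, size=bottleneck, replace=False))
            for _ in range(trials)]
    return float(np.mean(vals)), float(np.var(vals))

mean_narrow, var_narrow = inoculum_diversity(bottleneck=5)
mean_wide, var_wide = inoculum_diversity(bottleneck=100)
```

The two means agree to within Monte Carlo error , while the narrow bottleneck produces a much larger variance in recipient diversity , mirroring the stochastic effects described in the text .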
diversity pre- and post-bottleneck will be based on a finite sample , and will themselves be uncertain ., A wide bottleneck has previously been implicated in the transmission of equine 21 and avian 22 influenza , while inoculum size for bacterial pathogens may vary dramatically 23 ., There have been several studies aiming to reconstruct transmission links using genetic data ., Many have relied on a phylogenetic reconstruction of available isolates , under the assumption that the transmission network will be topologically similar to the estimated phylogeny 5 , 8 , 24 , 25 , 26 ., However , the phylogenetic tree will not generally correspond to the transmission network based on samples collected during an outbreak 27 , 28 , 29 ., Furthermore , within-host diversity and heterogeneous transmission \u2013 the transmission of a genetically heterogeneous inoculum to a new host \u2013 will typically complicate such an approach , as isolates from one individual may potentially be interspersed within the same clade as those from other carriers ., Under certain assumptions , the molecular clock can be used to dictate the plausibility of a transmission event ., As the estimated time to the most recent common ancestor ( TMRCA ) between isolates sampled from two carriers gets further from the estimated time of infection , the probability of direct transmission falls , and a cutoff can be specified , beyond which transmission is deemed impossible ( eg . 
30 ) ., This approach requires homogeneous transmission and a robust estimate of the mutation rate ., Other network reconstruction approaches have used weighted graph optimization 4 , as well as Markov chain Monte Carlo ( MCMC ) algorithms to sample over all possible transmission links 6 , 7 ., Several variables may affect the outcome of such analyses ., Firstly , the method and frequency of sampling are of great importance ., Taking one sample per case ignores within-host diversity and could lead to poor estimates of the genetic distance between cases ., Asymptomatic infections may not be detected , or may only be detected long after the time of infection \u2013 this can lead to greater uncertainty in the estimated network ., Secondly , the bottleneck size plays a crucial role in the amount of diversity established in the newly infected host ., Thirdly , the infectious period affects the degree of diversity that may accumulate within-host and is therefore passed on to susceptible individuals ., Using phylogenetic reconstruction as a means to estimate transmission is often inappropriate 29 , and even when combined with additional analytical methods designed to infer transmission , produces highly uncertain networks 31 ., Furthermore , such methodology typically cannot account for diverse founding populations ., We instead used a genetic distance-based approach to determine how informative genomic data can be when used to estimate routes of transmission ., Many methods aiming to reconstruct either phylogenetic trees or transmission networks are based on a function of a pairwise genetic distance matrix ., These include graph optimization 4 , the MCMC sampling approaches 6 , 7 , and various tree reconstruction methods ( eg ., neighbor joining , unweighted pair group method with arithmetic mean ( UPGMA ) , minimum spanning tree ) ., As such , we used a generalized weighting function based on genetic distance to reconstruct networks , in order to provide a framework
flexible enough to be similar ( or , in some cases , equivalent ) to these methods ., We investigated how accurately transmission networks could be recovered , and how accuracy was affected by factors such as bottleneck size , transmission rate and mutation rate ., We simulated disease outbreaks under a variety of scenarios , reflecting various sampling strategies ., Our approach could accommodate within-host diversity and variable bottleneck sizes , in order to investigate their effect on network reconstruction ., Full details are given in Materials and Methods ., We first simulated diversification within a single host , using S . aureus as an example , and compared our findings with estimates of diversity based on published samples ., The expected genetic pairwise distance for S . aureus carriage has been estimated at 4 . 12 SNPs 15 ., S . aureus has a mutation rate of approximately 5\u00d710\u22124 per genome per bacterial generation ( given a rate of 3\u00d710\u22126 per nucleotide per year 1 , 12 and a generation time of 30 minutes 32 , 33 , 34 ) ., Nasal carriage of S . aureus has been estimated to have an effective population size in the range 50\u20134000 12 , 15 ., Figure 1 shows the accumulation of diversity over time under these parameters ., Our simulations indicate that if we assume a host acquires an initially homogeneous infection , the expected colonization period required for previously observed levels of diversity to emerge under neutral evolution is typically long ( \u223c1 year ) ., While S .
aureus may be carried for a number of years 35 , observing high diversity from recently infected individuals suggests that alternative explanations may be more realistic ., First , repeated exposure to infection may result in the introduction of new strains to a host , potentially resulting in rapid establishment of diversity ., Second , the transmitted inoculum may not be a single genotype , but rather a sample of genotypes from the source ., This was investigated in detail in the subsequent simulation experiments ., We assessed the effect of bottleneck size in a disease outbreak by firstly considering a simple transmission chain , where each infected individual transmits to exactly one susceptible individual ., We considered an initial bacterial population of 10 genotypes with an expected pairwise distance of 5 SNPs , which could represent a long-term carrier , or the recipient of a diverse infection ., We then simulated a transmission event by selecting an inoculum of a given size ., We allowed the new founding population to reach equilibrium population size and imposed another bottleneck after 1000 generations ., We repeated this process for 25 transmission events ., Figure S3 shows six realizations of our simulations under different bottleneck sizes ., Clearly , while diversity rapidly drops away for small bottlenecks , larger sizes ( >10 cells ) allow diversity to persist for several bottlenecks ., With sufficient mutation between transmission events , diversity can be maintained ( Figure S4 ) ., If bacterial specimens taken from disease carriers in an outbreak are sequenced , we can attempt to estimate the routes of transmission based on the genetic similarity of the isolates ., There are a number of additional factors that may inform our estimate of the transmission route , such as location , contact patterns and exposure time , but we examined the information to be gained from sequence data alone ., More than one isolate may be taken from a carrier , sampled either
simultaneously or at various time points during infection , necessitating a choice of how to describe the genetic distance between populations of isolates from two cases ., We considered both the mean pairwise distance and the centroid distance to summarize the genetic distance between groups of isolates , but found that both resulted in very similar network reconstructions ., Network edges were given a weighting which we assume is inversely proportional to the genetic distance ( see Materials and Methods for detailed specification of weighting functions ) ., The single transmission chain provides an idealized scenario to reconstruct transmission links ., Furthermore , we assumed that the order of infection is known ., As such , the potential source for each individual can only be one of the preceding generations , which , intuitively at least , should become more genetically distant as one goes farther back in time ., Transmission events occur every 1000 bacterial generations , and one cell is selected randomly from each individual's bacterial carriage at regular intervals ( possibly more frequently than the transmission process ) for sequencing ., Figure 2 shows reconstructed networks for a range of scenarios ., We repeated this for several simulations under each scenario , and plotted receiver-operating characteristic ( ROC ) curves to assess the accuracy of the reconstructed network ( Figure 3 ) ., We observed that there was an optimal bottleneck size in this setting which allows the network to be resolved with a relatively high level of accuracy; for the scenario considered here , networks reconstructed using a bottleneck size of 10 clearly outperform those constructed using both larger and smaller inoculum sizes ., In this setting , larger bottlenecks allow a very similar bacterial population to be established within each new infective , while smaller bottlenecks rapidly result in a single dominant strain being carried and transmitted by the infected population
., The optimal bottleneck size depends on the outbreak size , as well as the expected change in pathogen diversity within-host between time of infection and onward transmission ., We found that infrequent sampling ( eg . one sample per infected individual ) can lead to a reconstruction that is no better than selecting sources at random , and sometimes worse ., We next considered a more general susceptible-infectious-removed ( SIR ) epidemic , in order to determine how network accuracy is affected by transmission and mutation rate , and sampling strategy ., We again estimated the transmission network based upon observed sequence data alone under the assumption that the order of infection was known ., Both the centroid and pairwise distance metrics were used , but we found that the performance of both was very similar ., For this reason , all results shown here have been derived using the pairwise distance measure ., We simulated epidemics under a variety of scenarios and found that , generally , for larger outbreaks in which several infective individuals were present at any one time , the power to determine the routes of transmission was low ., We supposed that we did not know the infection or removal times , only observing the correct order of infection ., Table S1 gives area under the ROC curve values for estimated networks based on a selection of simulated datasets ., In many cases , particularly for higher rates of infection and removal , we found that the ROC curve indicated no improvement over guessing transmission sources at random ., However , we saw that distinct groups of individuals , representing large branches of the transmission network , may be distinguished from one another , indicating that gross features of the transmission network may be determined ., Figure 4A shows a simulated epidemic in which nodes are colored according to their observed mean distance from the origin ., Clearly , later infections can be discriminated from cases further in the past
, but a great deal of uncertainty exists among contemporary cases ., Network reconstruction was more successful in scenarios where higher diversity could be established between host and recipient ., As such , network reconstruction improved for long carriage times , low transmission rates , and high mutation rates ( Table S1 ) ., Network entropy may be used to evaluate the uncertainty arising under the network reconstruction approach ( see Materials and Methods ) ., As the outbreak progresses , the entropy of most nodes increases and is only modestly lower than that obtained from assigning an even probability to all preceding cases ( Figure 4B ) ., However , certain nodes are markedly less uncertain than the surrounding ones , indicating that for them , incorporating genetic distance considerably reduces the uncertainty of who infected them ., In this outbreak , for example , the entropy distribution is bimodal , with 99 of the 112 nodes having entropy within one bit of random guessing ., In Figure 4 , the infector of each node was identified with probability proportional to the inverse of the genetic distance between the populations , guaranteeing that some positive probability is assigned to the true infector ., Entropy may be reduced ( possibly at the expense of lowering the estimated probability of infection by the true infector ) by increasing the relative probability of infection from nodes that are genetically close ., The importance of similar nodes can be increased up to the point at which the closest node is selected with certainty and the maximum directed spanning tree is obtained ( equivalent to the SeqTrack method of network reconstruction 4 ) , resulting in zero entropy ., Figure 5 shows the same network estimated with a varying importance factor ., While some correct edges are estimated with a higher probability , several false connections are also estimated with little uncertainty ., Precision is often increased at the expense of accuracy , and
indeed increasing the importance factor for this network reduces the area under the ROC curve ., Table S2 gives values for the area under the ROC curve for estimated networks under a particular simulated dataset , showing how accuracy declines as closer nodes are weighted more heavily ., The true parent of a node has no guarantee of being the closest node , but is likely to belong to a group of genetically similar potential sources ., Sampling strategies play an important role in the accuracy of the estimated network \u2013 while it is unsurprising that more frequent sampling results in reduced uncertainty , it is notable that even with perfect sampling , the uncertainty typically remains much too large to identify individual transmission routes ., Figure 6 shows the same simulated outbreak , colored according to two different sampling strategies; firstly sequencing one isolate from each individual every 1000 bacterial generations , and secondly sequencing ten isolates at each time point ., In each plot , an arbitrarily chosen reference node is marked , to which each other node is compared ., The second plot shows that the \u2018neighborhood\u2019 , to which the reference node and its true source belong , may be discerned , genetically distinct from the rest of the outbreak ., Increasing sampling frequency beyond this level does not considerably improve discrimination ., Selecting a single isolate per individual typically leads to a poor estimation of the transmission network ., We found that the initial genotype often persisted throughout an epidemic , and remained the dominant genotype for a large number of infected individuals ., Selecting a single isolate from each infective would result in a large number of individuals with an apparently genetically identical infection , providing little information about transmission ., Multiple samples can reveal minor genotypes , which may be more informative ., We found that in most reasonable settings , the
reconstructed network based on single isolates was uncertain and inaccurate , sometimes worse than a random network ., Our work suggests that under a range of plausible scenarios considered here , it is not possible to determine transmission routes based upon sampled bacterial genetic distance data alone ., For every infected individual in a large outbreak , there are several other individuals harboring a similar pathogen population who may be the true source of infection ., Existing distance-based methods typically assume that a single isolate is obtained from each host , in which case the distance between hosts is simply the number of SNPs separating the two isolates ., Sampling only one isolate per case can lead to poor estimates of genetic distance between individuals , and therefore inaccurate identification of transmission routes , often little better than assigning links at random ., Increasing the sampling to obtain more than one sample per host may partially alleviate this problem; in this case , the genetic distance between two hosts may be estimated as the mean distance between isolates from one host and isolates from the other ., The amount of sampling required depends on what one hopes to gain from the sequence data ., Single isolates may be sufficient to rule out infection sources for individuals , based on large observed genetic distances ., Repeated sampling may be used to identify clusters of infected individuals who host very similar bacterial populations , and therefore are likely to be close neighbors in the transmission network ., This allows us to investigate more general trends in the progression of the outbreak , eg ., spread between communities or countries , while individual events remain obscure ., A considerable degree of diversity is transmitted with even a small inoculum from the source , under the assumption that the inoculum is sampled randomly from the pathogen population infecting the source ., We believe that this highlights the 
importance of establishing the degree of within-host diversity through multiple samples before attempting to infer transmission routes ., Such sampling will also further our understanding of the transmission bottleneck for bacterial pathogens , as well as the effective population size ., Many of the parameters in our simulations are difficult to estimate for bacteria in vivo , and as such , few estimates exist ., Moreover , population structure within a host may lead to divergence between the census and effective population sizes in each host 36 ., To obtain results that would be widely applicable in spite of these uncertainties , we simulated transmission and carriage under a wide range of plausible parameter values for bacterial pathogens ., Bottleneck size is a key factor in the onward transmission of diversity and network recovery \u2013 too small and resulting infections are homogeneous , too large and recipients share the same genotype distribution as the source ., In our inference of transmission routes , we have measured the average genetic distance between individuals across the span of the infectious period ., If the removal rate is sufficiently low relative to the mutation rate , the genetic makeup of the pathogen population in an individual will vary considerably over time ., As such , while a source and recipient may be genetically similar at the time of infection , the mean distance between observed samples may be higher ., It may be possible to either restrict or weight the range of samples used in order to gauge the distribution of genotypes at a particular time; however , this comes at the expense of excluding potentially useful data ., Using the mean genetic distance is not unreasonable if the length of carriage is small compared to the time required to accumulate significant diversity ., We have considered different sampling strategies , but have supposed that a large coverage of the infected population can be achieved ., This may be reasonable 
for an outbreak in a small community , but inevitably , there may still be some missing links , especially when asymptomatic carriage could go undetected ., Furthermore , we assumed that the order of infections is known ., We have demonstrated that the reconstructed network accuracy is typically poor , even in the best-case scenario of near perfect observation ., We did not consider the possibility of repeated infectious contact , leading to infection from multiple sources ., This could serve to increase the diversity within-host , further complicating the inference of transmission routes ., In many settings , it is reasonable to assume that infectious individuals may come into contact with each other , and potentially transmit ., In the case of vector-borne diseases , the vector ( eg . a healthcare worker in nosocomial S . aureus transmission ) may transiently carry multiple strains collected from one or more carriers , and pass this diversity on to recipients ., If \u03bd is the rate at which a novel SNP is introduced via reinfection , then the equilibrium level of diversity is increased to 2N ( \u03bc + \u03bd ) , where N is the effective population size and \u03bc is the within-host mutation rate ., If the type ( s ) introduced upon reinfection are sufficiently dissimilar to the existing population , it may be possible to infer reinfection events ., However , if the rate of infectious contact is high , most bacterial populations may contain artifacts from several disparate sources , preventing any kind of transmission analysis ., The ability to reconstruct transmission networks is dependent on both data and methodological limitations ., While we cannot rule out the possibility of alternative methods using genetic distance data to provide superior network reconstructions , the framework we use here is flexible enough to investigate a range of relationships between genetic distance and transmission , under the widely used assumption that individuals hosting genetically similar pathogens are more likely to have been involved in a transmission event than those infected by more
distantly related organisms ., In this study , we have made a number of assumptions ., Firstly , we have used a discrete model of bacterial growth in which cells simultaneously divide and die at generational intervals ., We have specified that a cell must divide or die at each generation , such that persisting without reproduction is not possible ., Under this model , the effective population size is equal to the actual population size - incorporating cell survival without reproduction would only serve to reduce the effective population size , and therefore , the accumulation of diversity ., Secondly , we have assumed neutral evolution; that is , there is no fitness advantage or cost associated with any mutation ., Selection is likely to decrease the amount of instantaneous diversity within a population ., The emergence of fitter mutations is likely to reduce the expected diversity , since fitter strains are more likely to tend towards fixation , eliminating weaker variants and their associated diversity ., However , the effect of selective sweeps over time could increase the observed diversity in a longitudinal sample ., Thirdly , we have assumed that an inoculum is composed of a random sample of bacteria from the entire colony ., If the inoculum is not a random sample , the degree of diversity that is transmitted upon infectious contact may be much smaller ., The suitability of this assumption may vary depending on the mode of transmission ., However , we could consider the bottleneck size used here to represent the effective population size of the inoculum , rather than the true size ., Finally , we have ignored the possibility of recombination ., Further work would be required to explore the effect of each of these aspects in detail ., The observation of rare variants in cross-sectional samples from individual hosts may offer an alternative approach to identifying the transmission network ., Each observation of a particular genotype must arise from a shared 
ancestor , assuming homoplasy is not possible ., With perfect sampling , a genotype carried by only two individuals under these conditions indicates a transmission event between the pair ., However , many isolates would need to be sequenced to detect such variation which is by definition rare ., Such sampling is typically infeasible via standard genome sequencing , although deep sequencing may reveal uncommon SNPs , suggesting transmission between carriers ., Metagenomic sampling may potentially be of great use in such an approach ., Furthermore , such sampling may provide significant practical and financial advantages over collection and sequencing several individual samples ., Future work may be conducted to investigate the performance of such an approach under a variety of scenarios , for viral as well as bacterial pathogens ., It may be possible to develop a genetic distance threshold such that any observed pair of isolates exceeding this value are deemed , to a given level of confidence , not to have arisen from directly linked cases ., Such a threshold will depend on the bottleneck size , effective population size and mutation rate ., As yet , no such limit has been justified theoretically , and appropriate data to investigate this are lacking ., This work highlights the need to better understand bacterial carriage and transmission at a cellular and molecular level ., As yet , few studies have sequenced repeated samples from infected people , so the scale of within-host diversity is still unclear ., Furthermore , key parameters such as effective population size and inoculum size are either highly uncertain or unknown for bacterial pathogens ., If feasible , we recommend multiple isolates be sequenced per individual when collecting data to assess transmission routes ., While our work casts some doubt on the use of bacterial sequence data to identify individual transmission routes , there is certainly still much scope for its use in the analysis of disease 
transmission dynamics ., Uncovering clusters of genetically similar isolates can be greatly informative for the spread of a disease between various subpopulations , such as households , schools and hospitals ., By combining genomic data with additional information , such as estimated infection and removal times , contact patterns , social groups and geographic location , it may be possible to narrow the pool of potential sources down considerably ., Genomic data and traditional \u2018shoe-leather epidemiology\u2019 methods may complement each other , each eliminating links that the other cannot rule out ., Our simulation studies were based around a discrete-time bacterial fission model ., We supposed that bacterial cells died at random with a probability determined by the population size in the previous generation relative to the equilibrium population size ., The remaining cells divided , creating a mutant daughter cell with the per-genome mutation probability \u03bc , otherwise creating a genetically identical copy of the parent cell ., Mutations introduced one nucleotide substitution at a random position in the genome , such that the genetic distance from parent to mutant was always one SNP ., Neutral evolution is assumed ., Under this model , the effective population size is equal to the size of the population; that is , N_e = N 37 ., In the event of an infectious contact , an inoculum of a given size was separated from the original population , and allowed to grow and diversify independently ., The inoculum was assumed to be a random sample from the original population ., In the epidemic simulations , we used a standard SIR model , in which each susceptible individual is exposed to an infection rate of \u03b2 I ( t ) at time t , where I ( t ) is the proportion of infected individuals at time t ., Infected individuals are then removed ( through recovery or death ) at a rate \u03b3 ., As we operated in a discrete-time framework , we used Poisson approximations to generate times of infection ., For generation t , a given susceptible individual avoids
infection with probability exp ( \u2212\u03b2 I ( t ) ) , where \u03b2 is the infection rate and I ( t ) is the proportion of infected individuals at time t ., An individual infected in generation t may transmit to another individual from generation t + 1 onwards ., The source of a new infection is chosen uniformly at random from the pool of current infectives ., We assumed that the order of infection was known , and that all infective individuals were observed ., Failure to identify routes of infection under these optimal conditions would provide little confidence that this could be achieved in a real world setting , where such information is rarely available ., The relationships between isolates may be considered either directly from the sequence data , or from a matrix of observed genetic distances ., The former category encompasses methods explicitly considering the evolutionary process , such as maximum likelihood and parsimony tree construction ., Neighbor joining , UPGMA , minimum spanning tree construction and SeqTrack all belong to the latter ., In this study , we were primarily interested in the relationship between individuals , rather than between bacterial specimens , and as such , did not adopt a phylogenetic approach ., We instead weighted network edges according to the genetic distance matrix , supposing that the likelihood of direct infection having occurred was inversely related to the genetic distance ., Given the infective population is fully observed , a function may be defined to provide weight to each potential network edge ., We assume this weight is inversely related to the genetic distance between the two nodes ., This distance may be specified in various ways \u2013 here , we consider the mean genetic pairwise distance and the distance between the centroid of each group ., Let S_i denote the set of sequences observed from individual i ; then the mean genetic distance between individuals i and j can be given as d_mean ( i , j ) = ( 1 \/ ( |S_i| |S_j| ) ) \u2211_{a \u2208 S_i} \u2211_{b \u2208 S_j} d ( a , b ) ., Alternatively , let f_i ( x , l ) be the proportion of samples in S_i with nucleotide x at locus l ., The distance between the centroids of i and j can then be defined as d_cent ( i , j ) = \u2211_{l=1}^{L} \u2211_x | f_i ( x , l ) \u2212 f_j ( x , l ) | , where L is the genome length , and | \u00b7 | returns the
absolute value ., Unlike the pairwise distance , the centroid distance has the desirable property that the distance between an individual and itself is always zero ; however , the converse is not true ., We calculate the relative probability that a particular transmission event occurred by considering the inverse of the chosen distance function ; then we can define our weighting function in terms of a constant that determines the relative probability of a connection between individuals with identical genotype distributions , and a proximity factor by which the importance of close connections may be","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"The prospect of using whole genome sequence data to investigate bacterial disease outbreaks has been keenly anticipated in many quarters , and the large-scale collection and sequencing of isolates from cases is becoming increasingly feasible ., While sequence data can provide many important insights into disease spread and pathogen adaptation , it remains unclear how successfully they may be used to estimate individual routes of transmission ., Several studies have attempted to reconstruct transmission routes using genomic data; however , these have typically relied upon restrictive assumptions , such as a shared topology of the phylogenetic tree and a lack of within-host diversity ., In this study , we investigated the potential for bacterial genomic data to inform transmission network reconstruction ., We used simulation models to investigate the origins , persistence and onward transmission of genetic diversity , and examined the impact of such diversity on our estimation of the epidemiological relationship between carriers ., We used a flexible distance-based metric to provide a weighted transmission network , and used receiver-operating characteristic ( ROC ) curves and network entropy to assess the accuracy and uncertainty of the inferred structure ., Our results suggest that sequencing a single isolate from each case is inadequate in
the presence of within-host diversity , and is likely to result in misleading interpretations of transmission dynamics \u2013 under many plausible conditions , this may be little better than selecting transmission links at random ., Sampling more frequently improves accuracy , but much uncertainty remains , even if all genotypes are observed ., While it is possible to discriminate between clusters of carriers , individual transmission routes cannot be resolved by sequence data alone ., Our study demonstrates that bacterial genomic distance data alone provide only limited information on person-to-person transmission dynamics .","summary":"With the advent of affordable large-scale genome sequencing for bacterial pathogens , there is much interest in using such data to identify who infected whom in a disease outbreak ., Many methods exist to reconstruct the phylogeny of sampled bacteria , but the resulting tree does not necessarily share the same structure as the transmission tree linking infected persons ., We explored the potential of sampled genomic data to inform the transmission tree , measuring the accuracy and precision of estimated networks based on simulated data ., We demonstrated that failing to account for within-host diversity can lead to poor network reconstructions - even with repeated sampling of each carrier , there is still much uncertainty in the estimated structure ., While it may be possible to identify clusters of potential sources , identifying individual transmission links is not possible using bacterial sequence data alone ., This work highlights potential limitations of genomic data to investigate transmission dynamics , lending support to methods unifying all available data sources .","keywords":"ecological metrics, mutation, population size, ecology, effective population size, population modeling, evolutionary modeling, genetics, biology and life sciences, population genetics, infectious disease modeling, computational biology, evolutionary 
biology","toc":null} +{"Unnamed: 0":2005,"id":"journal.pcbi.1003495","year":2014,"title":"Bidirectional Control of Absence Seizures by the Basal Ganglia: A Computational Evidence","sections":"Absence epilepsy is a generalized non-convulsive seizure disorder of the brain , mainly occurring in the childhood years 1 ., A typical attack of absence seizures is characterized by a brief loss of consciousness that starts and terminates abruptly , and meanwhile an electrophysiological hallmark , i . e . the bilaterally synchronous spike and wave discharges ( SWDs ) with a slow frequency at approximately 2\u20134 Hz , can be observed on the electroencephalogram ( EEG ) of patients 1 , 2 ., There is a broad consensus that the generation of SWDs during absence seizures is due to the abnormal interactions between cerebral cortex and thalamus , which together form the so-called corticothalamic system ., The direct evidence in support of this view is based on simultaneous recordings of cortex and thalamus from both rodent animal models and clinical patients 3\u20135 ., Recent computational modelling studies on this prominent brain disorder also supported the above viewpoint and provided deeper insights into the possible generation mechanism of SWDs in the corticothalamic system 6\u201313 ., The basal ganglia comprise a group of interconnected subcortical nuclei and , as a whole , represent one fundamental processing unit of the brain ., It has been reported that the basal ganglia are highly associated with a variety of brain functions and diseases , such as cognitive 14 , emotional functions 15 , motor control 16 , Parkinson's disease 17 , 18 , and epilepsy 19 , 20 ., Anatomically , the basal ganglia receive multiple projections from both the cerebral cortex and thalamus , and in turn send both direct and indirect output projections to the thalamus ., These connections enable the activities of the basal ganglia to influence the dynamics of the corticothalamic system ., 
Therefore , it is naturally expected that the basal ganglia may play an active role in mediating between seizure and non-seizure states for absence epileptic patients ., This hypothesis has been confirmed by both previous animal experiments 19 , 21\u201323 and recent human neuroimaging data 20 , 24 , 25 ., Nevertheless , due to the complicated interactions between basal ganglia and thalamus , the underlying neural mechanisms of how the basal ganglia control absence seizure activities still remain unclear ., From the anatomical perspective , the substantia nigra pars reticulata ( SNr ) is one of the major output nuclei of the basal ganglia to thalamus ., Previous experimental studies using various rodent animal models have demonstrated that suitable changes in the firing of SNr neurons can modulate the occurrence of absence seizures 21\u201323 , 26 ., Specifically , it has been found that pharmacological inactivation of the SNr by injecting -aminobutyric acids ( GABA ) agonists or glutamate antagonists suppresses absence seizures 21 , 22 ., This antiepileptic effect was presumed to be attributable to the overall inhibitory effect of the indirect pathway from the SNr to thalamic reticular nucleus ( TRN ) relaying at superior colliculus 21 , 22 ., In addition to this indirect inhibitory pathway , it is known that the SNr also contains GABAergic neurons directly projecting to the TRN and specific relay nuclei ( SRN ) of thalamus 27 , 28 ., Theoretically , changing the activation level of SNr may also significantly impact the firing activities of SRN and TRN neurons 28 , 29 ., This contribution might further interrupt the occurrence of SWDs in the corticothalamic system , thus providing an alternative mechanism to regulate typical absence seizure activities ., To our knowledge , however , so far the precise roles of these direct basal ganglia-thalamic pathways in controlling absence seizures are not completely established ., To address this question , we develop 
a realistic mean-field model for the basal ganglia-corticothalamic ( BGCT ) network in the present study ., Using various dynamic analysis techniques , we show that the absence seizures are controlled and modulated either by the isolated SNr-TRN pathway or the isolated SNr-SRN pathway ., Under suitable conditions , these two types of modulations are observed to coexist in the same network ., Importantly , in this coexistence region , both low and high activation levels of SNr neurons can suppress the occurrence of SWDs due to the competition between these two direct inhibitory basal ganglia-thalamic pathways ., These findings clearly outline a bidirectional control of absence seizures by the basal ganglia , which is a novel phenomenon that has not previously been identified in either experimental or modelling studies ., Our results , on the one hand , further improve the understanding of the significant role of basal ganglia in controlling absence seizure activities , and on the other hand , provide testable hypotheses for future experimental studies ., We build a biophysically based model that describes the population dynamics of the BGCT network to investigate the possible roles of basal ganglia in the control of absence seizures ., The network framework of this model is inspired by recent modelling studies on Parkinson's disease 30 , 31 , which is shown schematically in Fig . 
1 ., The network includes nine neural populations in total , which are indicated as follows: excitatory pyramidal neurons ( EPN ) ; inhibitory interneurons ( IIN ) ; TRN; SRN; striatal D1 neurons; striatal D2 neurons; SNr; globus pallidus external ( GPe ) segment; subthalamic nucleus ( STN ) ., Similar to other modelling studies 30\u201332 , we do not model the globus pallidus internal ( GPi ) segment independently but consider SNr and GPi as a single structure in the present study , because they are reported to have closely related inputs and outputs , as well as similarities in cytology and function ., Three types of neural projections are contained in the BGCT network ., For the sake of clarity , we employ different line types and heads to distinguish them ( see Fig . 1 ) ., The red lines with arrow heads denote the excitatory projections mediated by glutamate , whereas the blue solid and dashed lines with round heads represent the inhibitory projections mediated by and , respectively ., It should be noted that in the present study the connections among different neural populations are mainly inspired by previous modelling studies 30 , 31 ., Additionally , we also add the connection from SNr to TRN in our model , because recent anatomical findings have provided evidence that the SNr also contains GABAergic neurons directly projecting to the TRN 27\u201329 ., The dynamics of neural populations are characterized by the mean-field model 9 , 11 , 33\u201335 , which was proposed to study the macroscopic dynamics of neural populations in a simple yet efficient way ., The first component of the mean-field model describes the average response of populations of neurons to changes in cell body potential ., For each neural population , the relationship between the mean firing rate and its corresponding mean membrane potential satisfies an increasing sigmoid function , given by ( 1 ) where indicate different neural populations , denotes the maximum firing rate , r 
represents the spatial position , is the mean firing threshold , and is the threshold variability of firing rate ., If exceeds the threshold , the neural population fires action potentials with an average firing rate ., It should be noted that the sigmoid shape of is physiologically crucial for this model , ensuring that the average firing rate cannot exceed the maximum firing rate ., The changes of the average membrane potential at the position r , under incoming postsynaptic potentials from other neurons , are modeled as 9 , 33\u201336 ( 2 ) ( 3 ) where is a differential operator representing the dendritic filtering of incoming signals ., and are the decay and rise times of cell-body response to incoming signals , respectively ., is the coupling strength between neural populations of type and type ., is the incoming pulse rate from the neural population of type to type ., For simplicity , we do not consider the transmission delay among most neural populations in the present work ., However , since the functions via second messenger processes , a delay parameter is introduced to its incoming pulse rate ( i . e . 
, ) to mimic its slow synaptic kinetics ., This results in a delay differential equation in the final mathematical description of the BGCT model ., Note that a similar modelling method has also been used in several previous studies 13 , 37 ., In our system , each neural population gives rise to a field of pulses , which travels to other neural populations at a mean conduction velocity ., In the continuum limit , this type of propagation can be well-approximated by a damped wave equation 9 , 33\u201335 , 38: ( 4 ) Here is the Laplacian operator ( the second spatial derivative ) , is the characteristic range of axons of type , and governs the temporal damping rate of pulses ., In our model , only the axons of cortical excitatory pyramidal neurons are assumed to be sufficiently long to yield a significant propagation effect ., For other neural populations , their axons are too short to support wave propagation on the relevant scales ., This gives ( ) ., Moreover , as one of the typical generalized seizures , the dynamical activities of absence seizures are believed to occur simultaneously throughout the brain ., A reasonable simplification is therefore to assume that the spatial activities are uniform in our model , which has been shown to be the least stable mode in models of this class 33 , 34 , 36 ., To this end , we ignore the spatial derivative and set in Eq ., ( 4 ) ., Accordingly , the propagation effect of the cortical excitatory axonal field is finally given by 33 , 34 , 36: ( 5 ) where ., For the population of cortical inhibitory interneurons , the BGCT model can be further reduced by using and , which is based on the assumption that intracortical connectivities are proportional to the numbers of synapses involved 9 , 13 , 33\u201336 ., It has been demonstrated that by making the above reductions , the developed BGCT model becomes computationally more tractable without significantly deteriorating the precision of numerical results ., We then rewrite the above equations in 
the first-order form for all neural populations ., Following the above assumptions , we use Eqs ., ( 1 ) \u2013 ( 3 ) and ( 5 ) for modelling the dynamics of excitatory pyramidal neurons , and Eqs ., ( 1 ) \u2013 ( 3 ) for modelling the dynamics of other neural populations ., This yields the final mathematical description of the BGCT model given as follows: ( 6 ) ( 7 ) ( 8 ) ( 9 ) where ( 10 ) ( 11 ) ( 12 ) In Eq ., ( 10 ) , the superscript T denotes transposition ., The detailed expression of for different neural populations is represented by , and , given by ( 13 ) with ( 14 ) ( 15 ) ( 16 ) Here the variable in Eq ., ( 15 ) denotes the delay and the parameter in Eq ., ( 16 ) represents the constant nonspecific subthalamic input onto SRN ., The parameters used in our BGCT model are compatible with physiological experiments and their values are adapted from previous studies 9 , 11 , 13 , 30 , 31 , 36 ., Unless otherwise noted , we use the default parameter values listed in Table 1 for numerical simulations ., Most of the default values of these parameters given in Table 1 are based on either their nominal values or parameter ranges reported in the above literature ., A small number of parameters associated with the basal ganglia ( i . e . 
, , and ) are adjusted slightly , but still within their normal physiological ranges , to ensure that our developed model can generate the stable 2\u20134 Hz SWDs under certain conditions ., Note that due to a lack of quantitative data , the coupling strength of the SNr-TRN pathway needs to be estimated ., Considering that the SNr sends GABAergic projections both to SRN and TRN and that both of these nuclei are part of the thalamus , it is reasonable to infer that the coupling strengths of these two pathways are comparable ., For simplicity , here we chose by default ., In the following studies , we also change ( decrease or increase ) the value of several folds by employing a scale factor ( see below ) to examine how the inhibition from the SNr-TRN pathway regulates absence seizures ., Additionally , during this study , several other critical parameters ( i . e . , , and ) are also varied within certain ranges to obtain different dynamical states and investigate their possible effects on the modulation of absence seizures ., In the present study , several data analysis methods are employed to quantitatively evaluate the dynamical states as well as the properties of SWDs generated by the model ., To reveal critical transitions between different dynamical states , we perform the bifurcation analysis for several key parameters of the model ., For one specific parameter , the bifurcation diagram is simply obtained by plotting the \u201cstable\u201d local minimum and maximum values of cortical excitatory axonal fields ( i . e . 
, ) over changes in this parameter 11 , 39 ., To this end , all simulations are executed for a sufficiently long time ( 10 seconds of simulation time , after the system reaches its stable time series ) , and only the local minimum and maximum values obtained from the latter stable time series are used ., Using the above bifurcation analysis , we can also easily distinguish different dynamical states for combined parameters ., This analysis technique allows us to further identify different dynamical state regions in the two-parameter space ( for example , see Fig . 2D ) ., On the other hand , the power spectral analysis is used to estimate the dominant frequency of neural oscillations ., To do this , the power spectral density is obtained from the time series ( over a period of 10 seconds ) by using the fast Fourier transform ., Then , the maximum peak frequency is defined as the dominant frequency of neural oscillations ., It should be noted that , by combining the results of both the state and frequency analysis , we can outline the SWD oscillation region that falls into the 2\u20134 Hz frequency range in the two-parameter space ( for example , see the asterisk region in Fig . 
2E ) ., Moreover , we calculate the mean firing rates ( MFRs ) for several key neural populations in some figures ., To compute the MFRs , all corresponding simulations are performed up to 25 seconds and the data from 5 to 25 seconds are used for statistical analysis ., To obtain convincing results , we carry out 20 independent simulations with different random seeds for each experimental setting , and report the averaged result as the final result ., Finally , in some cases , we also compute the low and high triggering mean firing rates ( TMFRs ) for SNr neurons ., In the following simulations , we find that the mean firing rate of SNr neurons increases with the growth of the excitatory coupling strength , which serves as a control parameter to modulate the activation level of SNr in our work ( see the Results section ) ., Based on this property , the low and high TMFRs can be determined by the mean firing rates of SNr neurons occurring at the boundaries of the typical region of 2\u20134 Hz SWDs ( for example , see the black dashed lines in Fig . 3B ) ., All network simulations are implemented and performed in the MATLAB environment ., The aforementioned dynamical equations are integrated by using the standard fourth-order Runge-Kutta method , with a fixed temporal resolution of 40 ., In additional simulations , it turns out that the chosen integration step is sufficiently small to ensure the numerical accuracy of our developed BGCT model ., The computer codes used in the present study are available on ModelDB ( https:\/\/senselab . med . yale . edu\/ModelDB\/showmodel . asp ? 
model=152113 ) ., The fundamental implementation of the BGCT model is provided as supplementary information to this paper ( we also provide an XPPAUT code for comparison 41; see Text S1 and S2 ) ., Previous studies have suggested that the slow kinetics of receptors in TRN are a candidate pathological factor contributing to the generation of absence seizures both in animal experiments and biophysical models of the corticothalamic network 7 , 13 , 42 , 43 ., To explore whether this mechanism also applies to the developed BGCT model , we perform one-dimensional bifurcation analysis for the inhibitory coupling strength and the delay parameter , respectively ., The corresponding bifurcation diagrams and typical time series of are depicted in Figs ., 2A\u20132C , which reveal that different dynamical states emerge in our system for different values of and ., When the coupling strength is too weak , the inhibition from TRN cannot effectively suppress the firing of SRN ., In this case , due to the strong excitation from pyramidal neurons , the firing of SRN rapidly reaches a high level after the beginning of the simulation ., This high activation level of SRN in turn drives the firing of cortical neurons to their saturation states within one or two oscillation periods ( region I ) ., As the coupling strength grows , the inhibition from TRN starts to affect the firing of SRN ., For sufficiently long , this causes our model to successively undergo two different oscillation patterns ., The first one is the SWD oscillation pattern , in which multiple pairs of maximum and minimum values are found within each periodic complex ( region II ) ., Note that this oscillation pattern has been extensively observed on the EEG recordings of real patients during absence seizures 1 ., The other one is the simple oscillation pattern , in which only one pair of maximum and minimum values appears within each periodic complex ( region III ) ., However , if the coupling strength is too strong , the 
firing of SRN is almost completely inhibited by TRN ., In this situation , the model is kicked into the low firing region and no oscillation behavior can be observed anymore ( region IV ) ., Additionally , we also find that the model dynamics are significantly influenced by the delay , and only sufficiently long can ensure the generation of SWDs in the developed model ( see Fig . 2B ) ., To check whether our results can be generalized within a certain range of parameters , we further carry out the two-dimensional state analysis in the ( ) panel ., As shown in Fig . 2D , the whole ( ) panel is divided into four state regions , corresponding to those regions identified above ., Unsurprisingly , we find that the BGCT model can generate the SWD oscillation pattern only for appropriately intermediate and sufficiently long ., This observation is consistent with our above finding , demonstrating the generalizability of our above results ., To estimate the frequency characteristics of different oscillation patterns , we compute the dominant frequency based on the spectral analysis in the ( ) panel ., For both the simple and SWD oscillation patterns , the dominant frequency is influenced by and , and increasing their values can both reduce the dominant frequency of neural oscillations ( Fig . 2E ) ., However , compared to , our results indicate that the delay may have a more significant effect on the dominant oscillation frequency ( Fig . 
2E ) ., By combining the results in Figs ., 2D and 2E , we roughly outline the SWD oscillation region that falls into the 2\u20134 Hz frequency range ( asterisk region ) ., It is found that most , but not all , of the SWD oscillation region is contained in this specific region ., Here we emphasize the importance of this specific region , because the SWDs within this typical frequency range are commonly observed during the paroxysm of absence epilepsy in human patients 1 , 2 ., Why can the slow kinetics of receptors in TRN induce absence seizure activities ?, Anatomically , the SRN neurons receive the TRN signals from the inhibitory pathway mediated by both and receptors ., Under suitable conditions , the double suppression caused by these two types of GABA receptors occurring at different time instants may provide an effective mechanism to create multiple firing peaks for the SRN neurons ( see below ) ., This firing pattern of SRN in turn impacts the dynamics of cortical neurons , thus leading to the generation of SWDs ., It should be noted that , during the above processes , both and play critical roles ., In each oscillation period , after the -induced inhibition starts to suppress the firing of SRN neurons , these neurons need a certain recovery time to restore their mean firing rate to the rising state ., Theoretically , if this recovery time is shorter than the delay , another firing peak can be introduced to SRN neurons due to the latter -induced inhibition ., The above analysis implies that our model requires a sufficiently long delay to ensure the occurrence of SWDs ., However , as described above , too long is also a potential factor which may push the dominant frequency of SWDs beyond the typical frequency range ., For a stronger , the inhibition caused by is also strong ., In this situation , it is obvious that the SRN neurons need a longer time to restore their firing rate ., As a consequence , a relatively longer is required for the BGCT model to ensure the 
occurrence of SWDs for stronger ( see Fig . 2D ) ., These findings provide consistent evidence that our developed BGCT model can replicate the typical absence seizure activities utilizing a previously verified pathological mechanism ., Because we do not change the normal parameter values for basal ganglia during the above studies , our results may also indicate that , even though the basal ganglia operate in the normal state , the abnormal alteration within the corticothalamic system may also trigger the onset of absence epilepsy ., Throughout the following studies , we set for all simulations ., For this choice , the delay parameter is within the physiological range and modest , allowing the generation of the SWD oscillation pattern while preserving its dominant frequency around 3 Hz in most considered parameter regions ., It should be noted that , in additional simulations , we have shown that by slightly tuning the values of several parameters our developed BGCT model is also able to reproduce many other typical patterns of time series , such as the alpha and beta rhythms ( see Fig . S1 ) , which to a certain extent can be comparable with real physiological EEG signals 9 , 36 ., Using the developed BGCT model , we now investigate the possible roles of basal ganglia in controlling absence seizure activities ., Here we mainly concentrate on how the activation level of SNr influences the dynamics generated by the model ., This is because , on the one hand , the SNr is one of the chief output nuclei of the basal ganglia to thalamus , and on the other hand , its firing activity has been found to be highly associated with the regulation of absence seizures 21 , 22 ., To this end , the excitatory coupling strength is employed to control the activation level of SNr and a three-step strategy is pursued in the present work ., In this and the next subsections , we assess the individual roles of two different pathways emanating from SNr to thalamus ( i . e . 
, the SNr-TRN and SNr-SRN pathways ) in the control of absence seizures and discuss their corresponding biophysical mechanisms , respectively ., In the final two subsections , we further analyze the combined effects of these two pathways on absence seizure control and extend our results to more general cases ., To explore the individual role of the SNr-TRN pathway , we estimate both the state regions and frequency characteristics in the ( ) panel ., Note that during these investigations the SNr-SRN pathway is artificially blocked ( i . e . , ) ., With this \u201cnaive\u201d method , the modulation of absence seizure activities by the SNr-SRN pathway is removed and the effect caused by the SNr-TRN pathway is theoretically amplified to the extreme ., Similar to previous results , we find that the whole ( ) panel can also be divided into four different regions ( Fig . 3A ) ., These regions are the same as those defined above ., For weak inhibitory coupling strength , increasing the excitatory coupling strength moves the model dynamics from the SWD oscillation state to the saturation state ., Here we note that the saturation state is a non-physiological brain state even though it does not belong to typical seizure activities ., In strong region , the suppression of SWDs is observed by decreasing the excitatory coupling strength , suggesting that inactivation of SNr neurons may result in seizure termination through the SNr-TRN pathway ( Fig . 3A , right side ) ., For strong enough inhibitory coupling strength , this suppression effect is so remarkable that sufficiently low activation of SNr can even kick the network dynamics into the low firing region ( compare the results in Figs . 3C and 3D ) ., The SNr-TRN pathway induced SWD suppression is complicated and its biophysical mechanism is presumably due to competition-induced collision ., On the one side , the decrease of excitatory coupling strength inactivates the SNr ( Fig . 
3E , top panel ) , which should potentially enhance the firing of TRN neurons ., On the other side , however , increasing the activation level of TRN tends to suppress the firing of SRN , which significantly reduces the firing of cortical neurons and in turn inactivates the TRN neurons ., Furthermore , the inactivation of cortical neurons also tends to reduce the firing level of TRN neurons ., As the excitatory coupling strength is decreased , the collision caused by such complicated competition and information interactions finally leads to the inactivation of all the TRN , SRN , and cortical neurons ( Fig . 3E , bottom panel ) , which potentially provides an effective mechanism to destabilize the original pathological balance within the corticothalamic system , thus causing the suppression of SWDs ., Indeed , we find that not only the dynamical state but also the oscillation frequency is greatly impacted by the activation level of SNr , through the SNr-TRN pathway ., For both the simple and SWD oscillation patterns , increasing the excitatory strength can enhance their dominant frequencies ., The combined results of Figs ., 3A and 3B reveal that , for a fixed , whether the model can generate the SWDs within the typical 2\u20134 Hz range is determined by at least one and often two critical values of ( Fig . 3B , asterisk region ) ., Because the activation level of SNr is increased with the growth of , this finding further indicates that , due to the effect of the SNr-TRN pathway , the model might exhibit corresponding low and high triggering mean firing rates ( TMFRs ) for SNr neurons ( Fig . 3E , dashed lines ) ., If the long-term mean firing rate of SNr neurons falls into the region between these two TMFRs , the model is highly likely to generate typical 2\u20134 Hz SWDs like those observed on the EEG recordings of absence epileptic patients ., In Fig . 
3F , we plot both the low and high TMFRs as a function of the inhibitory coupling strength ., With the increasing of , the high TMFR grows rapidly at first and then reaches a plateau region , whereas the low TMFR increases almost linearly during this process ., Consequently , it can be seen that these two critical TMFRs approach each other as the inhibitory coupling strength is increased until they almost reach an identical value ( Fig . 3F ) ., The above findings indicate that the SNr-TRN pathway may play a vital role in controlling absence seizures , and that appropriately reducing the activation level of SNr neurons can suppress the typical 2\u20134 Hz SWDs ., A similar antiepileptic effect induced by inactivating the SNr has been widely reported in previous electrophysiological experiments based on both genetic absence epilepsy rats and tottering mice 21\u201323 , 26 ., Note that , however , in the literature this antiepileptic effect by reducing the activation of SNr is presumed to be accomplished through the indirect SNr-TRN pathway relaying at superior colliculus 21 , 22 ., Our computational results are the first to suggest that this antiepileptic process can also be triggered by the direct SNr-TRN GABAergic projections ., Combining these results , we postulate that for real absence epileptic patients both of these two pathways might work synergistically and together provide a stable mechanism to terminate the onset of absence epilepsy ., We next turn on the SNr-SRN pathway and investigate whether this pathway is also effective in the control of absence seizures ., Similar to the previous method , we artificially block the SNr-TRN pathway ( i . e . , ) to enlarge the effect of the SNr-SRN pathway to the extreme ., Fig . 4A shows the two-dimensional state analysis in the ( ) panel , and again the whole panel is divided into four different state regions ., Compared to the results in Fig . 
3A , the suppression of SWDs appears in a relatively weaker region by increasing the excitatory coupling strength ., This finding suggests that the increase in the activation of SNr can also terminate the SWDs , but through the SNr-SRN pathway ., For relatively weak within the suppression region , the SNr-SRN pathway induced suppression is somewhat strong ., In this case , the high activation level of SNr directly kicks the network dynamics into the low firing region , without undergoing the simple oscillation state ( Fig . 4C2 and compare with Fig . 4C3 ) ., Note that this type of state transition is a novel one which has not been observed in the SWD suppression caused by the SNr-TRN pathway ., For relatively strong within the suppression region , the double peak characteristic of SWDs generated by our model is weak ., In this situation , as the inhibitory coupling strength is increased , we observe that the network dynamics first transition from the SWD oscillation state to the simple oscillation state , and then to the low firing state ( Fig . 4C3 ) ., To understand how the SNr-SRN pathway induced SWD suppression arises , we present the mean firing rates of several key neural populations within the corticothalamic system , as shown in Fig . 4D ., It can be seen that increasing the strength significantly elevates the activation level of SNr ( Fig . 4D , top panel ) , which in turn reduces the firing of SRN neurons ( Fig . 4D , bottom panel ) ., The inactivation of SRN neurons further suppresses the mean firing rates for both cortical and TRN neurons ( Fig . 
4D , bottom panel ) ., These chain reactions lead to the overall inhibition of firing activities in the corticothalamic system , which weakens the double peak shaping effect due to the slow kinetics of receptors in TRN ., For strong , this weakening effect is considerable , thus causing the suppression of SWDs ., Our results provide the computational evidence that high activation of SNr can also effectively terminate absence seizure activities through the strong inhibitory effect of the SNr-SRN pathway ., Compared to the SWD suppression induced by the SNr-TRN pathway , it is obvious that the corresponding biophysical mechanism caused by the SNr-SRN pathway is simpler and more direct ., Moreover , our two-dimensional frequency analysis indicates that the dominant frequency of neural oscillations depends on the excitatory coupling strength ( see Fig . 4B ) ., For a constant , progressive increase of reduces the dominant frequency , but not in a very significant fashion ., Thus , we find that almost all the SWD oscillation region identified in Fig . 4A falls into the typical 2\u20134 Hz frequency range ( Fig . 4B , asterisk region ) ., Unlike the corresponding results presented in the previous subsection , the combined results of Figs ., 4A and 4B demonstrate that the BGCT model modulated by the isolated SNr-SRN pathway only exhibits one TMFR for SNr neurons ., For a suitably fixed strength , the generation of SWDs is highly likely to be triggered when the mean firing rate of SNr neurons is lower than this critical firing rate ( Fig . 
4D , dashed line ) ., With the increasing of , we observe that this TMFR rapidly reduces from a hi","headings":"Introduction, Materials and Methods, Results, Discussion","abstract":"Absence epilepsy is believed to be associated with the abnormal interactions between the cerebral cortex and thalamus ., Besides the direct coupling , anatomical evidence indicates that the cerebral cortex and thalamus also communicate indirectly through an important intermediate bridge\u2013basal ganglia ., It has been thus postulated that the basal ganglia might play key roles in the modulation of absence seizures , but the relevant biophysical mechanisms are still not completely established ., Using a biophysically based model , we demonstrate here that the typical absence seizure activities can be controlled and modulated by the direct GABAergic projections from the substantia nigra pars reticulata ( SNr ) to either the thalamic reticular nucleus ( TRN ) or the specific relay nuclei ( SRN ) of thalamus , through different biophysical mechanisms ., Under certain conditions , these two types of seizure control are observed to coexist in the same network ., More importantly , due to the competition between the inhibitory SNr-TRN and SNr-SRN pathways , we find that both decreasing and increasing the activation of SNr neurons from the normal level may considerably suppress the generation of spike-and-slow wave discharges in the coexistence region ., Overall , these results highlight the bidirectional functional roles of basal ganglia in controlling and modulating absence seizures , and might provide novel insights into the therapeutic treatments of this brain disorder .","summary":"Epilepsy is a general term for conditions with recurring seizures ., Absence seizures are one of several kinds of seizures , which are characterized by typical 2\u20134 Hz spike-and-slow wave discharges ( SWDs ) ., There is accumulating evidence that absence seizures are due to abnormal interactions between 
cerebral cortex and thalamus , and the basal ganglia may take part in controlling this brain disease via the indirect basal ganglia-thalamic pathway relaying at superior colliculus ., Actually , the basal ganglia not only send indirect signals to thalamus , but also communicate with several key nuclei of thalamus through multiple direct GABAergic projections ., Nevertheless , whether and how these direct pathways regulate absence seizure activities still remains unknown ., By computational modelling , we predicted that two direct inhibitory basal ganglia-thalamic pathways emitting from the substantia nigra pars reticulata may also participate in the control of absence seizures ., Furthermore , we showed that these two types of seizure control can coexist in the same network , and depending on the instant network state , both lowering and increasing the activation of SNr neurons may inhibit the SWDs due to the existence of competition ., Our findings emphasize the bidirectional modulation effects of basal ganglia on absence seizures , and might have physiological implications for the treatment of absence epilepsy .","keywords":"theoretical biology, biology","toc":null}
+{"Unnamed: 0":891,"id":"journal.pbio.1001125","year":2011,"title":"Combining Genome-Wide Association Mapping and Transcriptional Networks to Identify Novel Genes Controlling Glucosinolates in Arabidopsis thaliana","sections":"Biologists across fields possess a common need to identify the genetic variation causing natural phenotypic variation ., Genome-wide association ( GWA ) studies are a promising route to associate phenotypes with genotypes , at a genome-wide level , using \u201cunrelated\u201d individuals 1 ., In contrast to the traditional use of structured mapping populations derived from two parent genomes , GWA studies allow a wide sampling of the genotypes present within a species , potentially identifying a greater proportion of the variable loci contributing to polygenic traits ., However
, the uneven distribution of this increased genotypic diversity across populations ( population structure ) , as well as the sheer number of statistical tests performed in a genome-wide scan , can cause detection of a high rate of \u201cfalse-positive\u201d genotype-phenotype associations that may make it difficult to distinguish loci that truly affect the tested phenotype 1\u20135 ., Epistasis and natural selection can also lead to a high false-negative rate , wherein loci with experimentally validated effects on the focal trait are not detected by GWA tests 4\u20135 ., Repeated detection of a genotype-phenotype association across populations or experiments has been proposed to increase support for the biological reality of that association , and has even been proposed as a requirement for validation of trait-phenotype associations 2 ., However , replication across populations or experiments is not solely dependent upon genotypes , but also upon differences in environment and development that significantly influence quantitative traits 5\u20138 ., Thus , validation of a significant association through replication , while at face value providing a stringent criterion for significance , may bias studies against detection of causal associations that show significant Genotype\u00d7Environment interactions 9 ., In this study we employed replicated genotypes to test the conditionality of GWA results upon the environment or developmental stage within which the phenotype was measured ., Integrating GWA mapping results with additional forms of genome-scale data , such as transcript profiling or proteomics datasets , has also been proposed to strengthen support for detected gene-trait associations and reduce the incidence of false-positive associations 10 ., To date , network approaches have largely focused upon comparing GWA results with natural variation in gene expression across genotypes in transcriptomic datasets ( i . e . 
, expression quantitative trait loci ( eQTLs ) ) 11\u201313 ., This requires that candidate genes show natural variation in transcript accumulation , which is not always the functional level at which biologically relevant variation occurs 14 ., Another network approach maps GWA results onto previously generated interaction networks within a single genotype , such as a protein-protein interaction network , enhancing support for associations that cluster within the network 15 ., This network filtering approach has yet to be tested with GWA data where the environment or tissue is varied ., To evaluate the influence of environmentally or developmentally conditional genetics on GWA mapping and the utility of network filtering in identifying candidate causal genes , we focused on defense metabolism within the plant Arabidopsis thaliana ., A . thaliana has become a key model for advancing genetic technologies and analytical approaches for studying complex quantitative genetics in wild species 16 ., These advances include experiments testing the ability of genome resequencing and transcript profiling to elucidate the genetics of complex expression traits 17\u201319 and querying the complexity of genetic epistasis in laboratory and natural populations 20\u201326 ., Additionally , A . thaliana has long provided a model system for applying concepts surrounding GWA mapping 3\u20135 , 27\u201330 ., As a model set of phenotypes , we used the products of two related A . thaliana secondary metabolite pathways , responsible for aliphatic and indolic glucosinolate ( GSL ) biosynthesis ., These pathways have become useful models for quantitative genetics and ecology ( Figure 1 ) 31 ., Aliphatic , or methionine-derived , GSL are critical determinants of fitness for A . 
thaliana and related cruciferous species via their ability to defend against insect herbivory and non-host pathogens 32\u201335 ., Indolic GSL , derived from tryptophan , play important roles in resistance to pathogens and aphids 36\u201340 ., A . thaliana accessions display significant natural genetic variation controlling the type and amount of both classes of GSL produced , with direct impacts on plant fitness in the field 33 , 41\u201347 ., Additionally , GSL display conditional genetic variation dependent upon both the environment and developmental stage of measurement 48\u201351 ., GSL thus provide an excellent model to explore the impact of conditional genetics upon GWA analysis ., While the evolutionary and ecological importance of GSL is firmly established , the nearly complete description of GSL biosynthetic pathways provides an additional practical advantage to studying these compounds 52\u201354 ., A large number of QTL and genes controlling GSL natural variation have been cloned from A . thaliana using a variety of network biology approaches similar to network filtering in GWA studies ( Figure 1 ) 55\u201359 ., These provide a set of positive control genes of known natural variability and importance to GSL phenotypes , enabling empirical assessment of the level of false-positive and false-negative associations ., Within this study , we measure GSL phenotypes in two developmental stages and stress conditions\/treatments using a collection of wild A . thaliana accessions to test the relative influence of these components upon GWA ., In agreement with previous analyses from structured mapping populations , we found that differences in development have more impact than treatment on conditioning genetic variation in A . 
thaliana GSL accumulation ., This is further supported by our observation that GWA-identified candidate genes show a non-random distribution across the three datasets with the GWA candidates from the two developmental stages analyzed overlapping less than expected ., The large list of candidate genes identified via GWA was refined with a network co-expression approach , identifying a number of potential networks ., A subset of loci from these networks was validated for effects on GSL phenotypes ., Even for adaptive traits like GSL accumulation , these analyses suggest the influence of numerous small-effect loci affecting the phenotype at levels that are potentially exposed to natural selection ., We measured GSL from leaves of 96 A . thaliana accessions at 35 d post-germination 27\u201328 using either untreated leaves or leaves treated with AgNO3 ( silver ) to mimic pathogen attack ., In addition , we measured seedling glucosinolates from the same accessions to provide a tissue comparison as well as a treatment comparison ., Seedlings were measured at 2 d post-germination at a stage where the GSL are largely representative of the GSL present within the mature seed 48 , 60 ., GSL from both foliar and seedling tissue grown under these conditions have been measured in multiple independent QTL experiments that used recombinant inbred line ( RIL ) populations generated from subsets of these 96 accessions , thus providing independent corroboration of observed GSL phenotypes 41 , 51 , 61 ., For the untreated leaves , this analysis detected 18 aliphatic GSL compounds and four indolic GSL compounds ., These , combined with an additional 21 synthetic variables that describe discrete components of the biochemical pathway , total 43 GSL traits for analysis 4 , 61\u201362 ., For the AgNO3-treated samples , we detected only 16 aliphatic GSL and four indolic GSL , but also were able to measure camalexin , which is related to indolic GSL ( Table S3 ) , and these in combination with 
derived measures provided us with 42 AgNO3-treated GSL traits 61 ., For the seedling GSL samples , we detected 19 aliphatic GSLs , two indolic , and three seedling-specific phenylalanine GSLs ( Table S4 ) , which in combination with derived descriptive variables gave us a total of 46 GSL traits 61 ., Population stratification has previously been noted in this set of A . thaliana accessions , where eight subpopulations were proposed to describe the accessions' genetic differences 27\u201328 ., Less explored is the joint effect of population structure and environmental factors , both external ( exogenous treatment ) and internal ( tissue comparison ) on GSL ., We used our three glucosinolate datasets to test for potential confounding effects of environmental variation , population structure , and their various interaction terms upon the GSL phenotypes ( Figure 2 ) ., On average , 36% ( silver versus control ) and 23% ( seedling versus control ) of phenotypic variance in GSL traits was solely attributable to accession ., An additional 7% ( silver versus control ) and 14% ( seedling versus control ) of phenotypic variance was attributable to an interaction between accession and treatment or tissue ., This suggests that , on average and given the statistical power of the experiments , 30%\u201350% of the detectable genetically controlled variance is stable across conditions , while at least 20% of the variance is conditional on treatment and\/or tissue ., In contrast , population structure by itself accounted for 10%\u201315% of total variance in GSL ( Figure 2 ) ., Interestingly , significantly less variance ( <5% ) could be attributed to interaction of treatment or tissue with population structure ., This suggests that for GSL , large-effect polymorphisms that may be linked with population structure are stable across treatment and tissue while the polymorphisms with conditional effects are less related to the species' demographic structure ( Figure 2 ) ., This is 
consistent with QTL studies using RIL that find greater repeatability of large-effect QTL across populations and conditions than of treatment-dependent loci 41 , 51 , 61 , 63 ., This is further supported by the fact that we utilized replication of defined genotypes across all conditions and tissues and as such have better power to detect these effects than in systems where it is not possible to replicate genotypes ., As such , controlling for population structure will reduce the number of false-positives detected but lead to an elevated false-negative rate , given this significant association between the measured phenotypes and population structure ., Interestingly , developmental effects ( average of 15% ) accounted for three times more of the variation in GSL than environmental effects ( average 5% ) ., In particular , only three GSL ( two indolic GSL , I3M and 4MOI3M , and total indolic GSL ) were affected more strongly by AgNO3-treatment than by accession ( Table S1 and Figure S1 ) , whereas 11 GSL traits were found to be influenced more by tissue type than accession ( Table S2 ) ., This agrees with these indolic GSL being regulated by the defense response 36 , 64 ., Similarly , twice as much GSL variation could be attributed to the interaction between accession and tissue type compared to the interaction between accession and AgNO3 treatment ., Thus , it appears that intraspecific genetic variation has greater impact on GSL in relation to development than in response to simulated pathogen attack ., Using 229 , 940 SNP available for this collection of 96 accessions , we conducted GWA-mapping for GSL traits in both the Seedling and Silver datasets using a maximum likelihood approach that accounts for genetic similarity ( EMMA ) 65 ., This identified a large number of significant SNPs and genes for both datasets ( Table 1 ) ., We tested the previously published criteria used to assess significance of candidate genes to ensure that different treatments or tissues did not 
bias the results produced under these criteria 4 ., These criteria required \u22651 SNP , \u22652 SNPs , or \u226520% of SNPs within a gene to show significant association with a specific GSL trait ., This test was independently repeated for all GSL traits in both datasets ( Tables S5 and S6 ) ., As previously found using the control leaf GSL data , the more stringent \u22652 SNPs\/gene criterion greatly decreased the overall number of significant genes identified while not overtly influencing the false-negative rate when using a set of GSL genes known to be naturally variable and causal within the 96 accessions ( Tables 2 and 3 ) ., Interestingly , including multiple treatments and tissues did not allow us to decrease the high empirical false-negative rate ( \u223c75% ) in identifying validated causal candidate genes ( Table 3 ) 4 , 31 ., Using the \u22652 SNPs\/gene criterion identified 898 genes for GSL accumulation in silver-treated leaves and 909 genes for the seedling GSL data ., As previously found , the majority of these candidate genes were specific to a subset of GSL phenotypes and no gene was linked to all GSL traits within any dataset ( Figure S2 ) 4 ., We estimated the variance explained by the candidate GWA genes identified in this study using a mixed polygenic model of inheritance for each phenotype within each dataset using the GenABEL package in R 66\u201367 ., This showed that , on average , the candidate genes explained 37% of the phenotypic variation with a range of 1% to 99% ( Table S10 ) ., Interestingly , if the phenotypes are separated into their rough biosynthetic classes of indolic , long-chain , or short-chain aliphatic 68 , there is evidence for different levels of explained phenotypic variation where indolic has the highest percent variance at 45% while short-chain has the lowest at 25% ( p\\u200a=\\u200a0 . 
001 ) ., This is not explainable by differential heritability as the short-chain aliphatic GSLs have the highest heritability in numerous studies including this one ( Tables S1 and S2 ) 4 , 41 , 61 ., This is instead likely due to the fact that short-chain aliphatic GSL show higher levels of multi-locus epistasis that complicates the ability to estimate the explained variance within GWA studies 31 , 41 , 61 ., Previous work with untreated GSL leaf samples showed that candidate genes clustered in hotspots , with the two predominant hotspots surrounding the previously cloned AOP and MAM loci 4 , where multiple polymorphisms surrounding the region of these two causal genes significantly associate with multiple GSL phenotypes ., We plotted GWA-identified candidate genes for GSL accumulation from the silver and seedling datasets to see if treatment or tissue altered this pattern ( Figure 3 ) ., Both datasets showed statistically significant ( p<0 . 05; Figure 3 ) hotspots of candidate genes that clustered predominantly around the AOP and MAM loci with some minor treatment- or tissue-specific hotspots containing fewer genes ., This phenomenon is observed across multiple GSL traits ( Figure 3 ) ., The AOP and MAM hotspots are known to be generated by local blocks of linkage disequilibrium ( LD ) wherein a large set of non-causal genes are physically linked with the causal AOP2\/3 and MAM1\/3 genes 4 ., Interestingly , while the silver and control leaf GWA datasets showed similar levels of clustering around the AOP and MAM loci , the hotspot at the MAM locus was much more pronounced than the AOP locus in the seedling GWA dataset ( Figure 3 ) , suggesting more seedling GSL traits are associated with the MAM locus ., This agrees with QTL-mapping results in structured RIL populations of A . 
thaliana that have shown that the MAM\/Elong locus has stronger effects upon seedling GSL phenotypes in comparison to leaves , whereas the effect of the AOP locus is stronger in leaves than seedlings 41 , 62\u201363 ., In addition , the relationship of GSL phenotypes across accessions is highly similar in the two leaf datasets , while the phenotypic relationships across accessions are shifted when comparing the seedling to the leaf ( Figure 4 ) ., Together , this suggests greater similarity in the genetic variation affecting GSL phenotypic variation between the two leaf datasets than between leaf and seedling datasets , suggesting that GSL variation is impacted more by development than by simulated pathogen attack ., This is further supported by the analysis of variance ( Figure 2 ) ., To further test if measuring the same phenotypes in different tissues or treatments will identify similar GWA mapping candidates , we investigated the overlap of GWA candidate genes identified across the three datasets ., For this analysis we excluded genes within the known AOP and MAM LD blocks as previous research has shown that all of these genes except the AOP and MAM genes are likely false-positives and would bias our overlap analysis 4 , 69\u201371 ., The remaining GWA mapping candidate genes showed more overlap between the two leaf datasets than between leaf and seedling datasets ( Figure 5 ) ., Interestingly , the overlap between GWA-identified candidate gene sets from seedling and leaf data was smaller than would be expected by chance ( \u03c72 p<0 . 
001 for all three sectors ) ( Figure 5 ) ., This suggests that outside of the AOP and MAM loci , distinct sets of genetic variants may contribute to the observed phenotypic diversity in GSL across these tissues , which agrees with QTL-mapping studies identifying distinct GSL QTL for seedling and leaf 41 , 62\u201363 ., As such , focusing simply on GWA mapping candidates independently identified in multiple treatments or tissues to call true significant associations will overlook genes whose genotype-to-phenotype association is conditional upon differences in the experiments ., Similarly , the amount of phenotypic variance explained by the candidates differed between the datasets , with the control and silver-treated datasets having the highest average explained variance , 39% and 41% , respectively ., In contrast , the seedling dataset had the lowest explained variance at 32% , similarly suggesting that altering the conditions of the experiments will change commonly reported summary variables such as explained variance ., GWA studies generally produce large lists of candidate genes , presumed to contain a significant fraction of false-positive associations ., One proposed strategy refines these results by searching for enrichment of candidate genes within pre-defined proteomic or transcriptomic networks 15 ., To test the applicability of this approach to our GWA study , we overlaid our list of 2 , 436 candidate genes ( excluding genes showing proximal LD to the causal AOP2\/3 and MAM1\/2\/3 genes 4 ) that associated with at least one GSL phenotype in at least one of the three datasets ( Figure 5 ) onto a previously published co-expression network 72 ., If the network filtering approach is valid and there are true causal genes within the candidate gene lists , then the candidate genes should show tighter network linkages to previously validated causal genes than the average gene ., Measuring the distances from all candidate genes to all known GSL causal genes within the 
co-expression network showed that , for all datasets , the GWA candidate genes were on average closer to known causal genes than non-candidates ( Figure S4 ) ., Interestingly , the GWA mapping candidate genes actually showed closer linkages to the cysteine , homocysteine , and glutathione biosynthetic pathways than to the core GSL biosynthetic pathways , suggesting that natural variation in these pathways may impact A . thaliana secondary metabolism ( Figure S4 and Dataset S1 ) ., The network proximity of GWA mapping candidates to known causal genes supports the utility of the network filtering approach in identifying true causal genes among the long list of GWA mapping candidate genes ., To determine if this network filtering approach finds whole co-expression networks or isolated genes , we extended the co-expression network to include known and predicted GSL causal genes ( Table S7 ) ., The largest network obtained from this analysis centered on the core-biosynthetic genes for the aliphatic and tryptophan-derived GSL as well as sulfur metabolism genes ( Figures 6 and S3 ) ., Interestingly , this large network linked to a defense signaling network represented by CAD1 , PEN2 , and EDS1 ( Figure 6 ) 73 ., The defense signaling pathway associated with PEN2 and , more recently , CAD2 and EDS1 had previously been linked to altered GSL accumulation via both signaling and biosynthetic roles 36 , 39 , 74\u201375 ., However , the current network analysis has identified new candidate participants in this network altering GSL accumulation ., To test these predicted linkages , we obtained a mutant line possessing a T-DNA insertional disruption of the previously undescribed locus At4g38550 , which is linked to both CAD1 and PEN2 ( Figure 6 , Table S9 ) ., This mutant had elevated levels of all aliphatic GSL within the rosette leaves as well as 4-methoxyindol-3-ylmethyl GSL , shown to mediate non-host resistance ( Table S9 ) 36 , 39 ., These results suggest a role for 
At4g38550 in either defense responses or GSL accumulation ., Network analysis also identified several previously described ( RML1 ) and novel candidate ( ATSFGH , At1g06640 , and At1g04770 ) genes that were associated with the core-biosynthetic part of the network ., RML1 ( synonymous with PAD2 , CAD2 ) , a biosynthetic enzyme for glutathione , has previously been shown to control GSL accumulation either via a signaling role or actual biosynthesis of glutathione 74\u201375 ., To test if ATSFGH ( S-formylglutathione hydrolase , At2g41530 ) , At1g06640 ( unknown 2-oxoacid dependent dioxygenase \u2013 2-ODD ) , or At1g04770 ( tetratricopeptide containing protein ) may play a role in GSL accumulation , we obtained insertional mutants ., This showed that the disruption of At1g06640 led to significantly increased accumulation of the short-chain methylsulfinyl GSL but not the corresponding methylthio or long-chain GSL ( Table S9 ) ., In contrast , the AtSFGH mutant had elevated levels of all short-chain GSL along with a decreased accumulation of the long-chain 8-MTO GSL ( Table S9 ) ., The At1g04770 mutant showed no altered GSL levels other than a significantly decreased accumulation of 8-MTO GSL ( Table S9 ) ., This suggests that these genes alter GSL accumulation , although the specific molecular mechanism remains to be identified ., Interestingly , network membership is not sufficient to predict a GSL impact , as T-DNA disruption of homoserine kinase ( At2g17265 ) , a gene co-expressed with the GSL core but not a candidate from the GWA analysis , had no detectable impact upon GSL accumulation ( Table S9 ) ., Thus , the network filtering approach identified genes closely linked to the GSL biosynthetic network that can control GSL accumulation and are GWA-identified candidate genes ., The above analysis shows that GWA candidate genes which co-express with known GSL genes are likely to influence GSL accumulation ., However , networks might influence GSL accumulation 
independent of co-expression with known GSL genes ., To test this , we investigated several co-expression networks that involved solely GWA-identified candidate genes and genes not previously implicated in influencing GSL accumulation ( Figure 7 ) ., Three of these networks included genes that affect natural variation in non-GSL phenotypes within A . thaliana , namely PHOTOTROPIN 2 ( PHOT2 ) , Erecta ( ER ) 76 , and ELF3\/GI ( Figure 7 ) 77 , 78 ., The fourth network did not involve any genes previously linked to natural variation ( Figure 7 ) ., We obtained A . thaliana seed stocks with mutations in a subset of genes for each of these three networks to test whether loss of function at these loci affects GSL accumulation ., The largest network containing no previously known GSL-related genes that we examined is a blue light\/gibberellin signaling pathway represented by PHOT2 ( Figure 7A ) ., This pathway had not been previously ascribed any role in GSL accumulation in A . thaliana ., We tested this GWA-identified association by measuring GSL in the single and double PHOT1\/PHOT2 mutants 79 ., PHOT1 was included as it has been shown to function either redundantly or epistatically with PHOT2 79 ., The single phot1 or phot2 mutation had no significant effect upon GSL accumulation ( Table S9 ) ., The double phot1\/phot2 knockout plants showed a significant increase in the production of detected methylthio GSL as well as a decrease in the accumulation of 3-carbon GSL compared to control plants ., Thus , it appears that GSL are influenced by the PHOT1\/PHOT2 signaling pathway , possibly in response to blue light signaling ( Table S9 ) ., This agrees with previous reports from Raphanus sativus that blue light controls GSL 80 , 81 ., The second non-GSL network we examined contains the ER gene ( Figure 7B ) ., The ER ( Erecta ) network and specifically the ER locus had previously been queried for the ability to alter GSL accumulation using two Arabidopsis RIL populations ( 
Ler\u00d7Col-0 and Ler\u00d7Cvi ) that segregate for a loss-of-function allele at the ER locus 41 , 51 , 63 , 82\u201386 ., In these analyses , the ER locus was linked to seed\/seedling GSL accumulation in only one of the two populations and not linked to mature leaf GSL accumulation 41 , 86 ., Analysis of the ER mutant within the Col-0 genotype showed that the Erecta gene does influence GSL content within leaves as suggested by the GWA results ( Table S9 , Figure 7A ) ., Plants with loss of function at Erecta showed increased levels of methylthio GSL , long-chain GSL , and 4-substituted indole GSL ( Table S9 ) ., Interestingly , the ER network contains a number of chromatin remodeling genes ., We obtained A . thaliana lines with loss-of-function mutations in three of these genes ( Table S9 ) to test if the extended network also alters GSL accumulation ., Mutation of two of the three genes ( At5g18620 \u2013 CHR17 and At4g02060 \u2013 PRL ) was associated with increased levels of short-chain aliphatic GSL and a corresponding decrease in long-chain aliphatic GSL ( Table S9 ) ., This shows that the Erecta network has the capacity to influence GSL accumulation ., Two smaller networks containing the ELF3 and GI genes were of interest as these two genes are associated with natural variation in the A . 
thaliana circadian clock ( Figure 7C ) 77 , 87 , 88 ., GSL analysis showed that both the elf3 and gi mutants had lower levels of aliphatic GSL than controls ( Table S9 ) ., Comparing multiple gi mutants from both the Col-0 and Ler genetic backgrounds showed that only gi mutants in the Col-0 background altered GSL accumulation ( Table S9 ) ., This suggests that GI's link to glucosinolates is epistatic to other naturally variable loci within the genome , as previously noted for natural GI alleles in relation to other phenotypes ( Table S9 ) 78 ., An analysis of the elf4 mutant , which has morphological similarities to elf3-1 but was not a GWA-identified candidate , showed that this mutation did not alter GSL accumulation ., Thus , elf3\/gi affects GSL via a more direct mechanism than altering plant morphology ., Given that two genes in the circadian clock network directly affect GSL accumulation and that the expression of these two genes is correlated with other genes in the network , it is fair to hypothesize that the circadian clock plays a role in GSL accumulation ., While the GSL phenotypes of the above laboratory-generated mutants suggest that variation in the circadian clock plays a role in GSL accumulation , they do not prove that the natural alleles at these genes affect GSL accumulation ., To validate this , we leveraged germplasm developed in the course of previous research showing that natural variation at the ELF3 locus controls numerous phenotypes , including circadian clock periodicity and flowering time 77 ., We utilized quantitative complementation lines to test if natural variation at ELF3 also generates differences in GSL content 77 ., This showed that the ELF3 allele from the Bay-0 accession was associated with a higher level of short-chain aliphatic GSL accumulation in comparison to plants containing the Sha allele ( Table S9 ) ., In contrast , both Bay-0 and Sha allele-bearing plants had elevated levels of 8-MTO GSL in comparison to Col-0 ( Tables S8 and S9 ) 
., Thus , ELF3 is a polymorphic locus that contains multiple distinct alleles that influence GSL content within the plant and the ELF3\/GI network causes natural variation in GSL content ., The final network examined here , represented by CLPX ( CLP protease ) , is likely involved in chlorophyll catabolism and possibly also chloroplast senescence 89 ., This network is uncharacterized and has not previously been associated with GSL accumulation or natural variation in any phenotype , but participation in chloroplast degradation is suggested by transcriptional correlation of CLPX with several catabolism genes ., Analysis of mutants deficient in function for two of these genes showed that they all possessed increased aliphatic GSL in comparison to wild-type controls ., These results suggest that natural variation in this putative network could influence GSL content in A . thaliana ., The majority ( 12 of 13 ) of genes in this network show significant variation in transcript abundance across A . thaliana accessions , a significantly greater proportion than expected by chance ( X2 p<0 . 
001 ) 90\u201392 , further suggesting that this network may contribute to GSL variation across the accessions ., Finally , we tested a single two gene network found in the co-expression data wherein both genes had been annotated but not previously linked to GSL content ., This network involved AtPTR3 ( a putative peptide transporter , At5g46050 ) and DPL1 ( a dihydrosphingosine lyase , At1g27980 ) ., T-DNA mutants in both genes appeared to be lethal as we could not identify homozygous progeny ., However , comparison of the heterozygous progeny to wildtype homozygotes showed that mutants in both genes led to elevated levels of aliphatic GSL ( Table S9 ) ., Thus , there are likely more networks that are causal for GSL variation within this dataset that remain to be tested ., While GSL are considered \u201csecondary\u201d metabolites , these compounds are affected by many aspects of plant metabolism , thus GSL phenotyping is sensitive to any genetic perturbation that affects plant physiology ., As such , we identified six genes that were expressed in mature leaves but did not show any significant association of DNA sequence polymorphism with GSL phenotypes and were additionally not identified within any of the above co-expression networks ., Insertional mutants disrupted at these loci were designated as random mutant controls ( Table S9 ) ., Analyzing GSL within these six lines showed that on average 13%\u00b14% of the GSL were affected in the random control mutant set even after correction for multiple testing ., While this suggests that GSL may be generally sensitive to mutations affecting genes expressed within the leaf , this incidence of significant GSL effects is much lower than observed for the T-DNA mutants selected to test GWA mapping-identified pathways ( CLPX - 78%\u00b111% , PTR3 \u2013 61%\u00b16% , Erecta \u2013 45%\u00b110% , GSL \u2013 46%\u00b111% , ELF3\/GI \u2013 53%\u00b117% ) ., In all cases the mutants deficient in GWA pathway-identified gene 
function showed significantly greater numbers of altered GSL phenotypes than the negative control T-DNA mutant set ( X2 , p<0 . 001 ) , suggesting that combining GWA-identified candidate genes with co-expression networks successfully identifies genes with the capacity to cause natural variation in GSL content ., Identifying the specific mechanisms involved will require significant future research ., A limiting factor for the utility of GWA studies has been the preponderance of false-positive and false-negative associations which makes the accurate prediction of biologically valid genotype-phenotype associations very difficult ., In this report , we describe the implementation and validation of a candidate gene co-expression filter that has given us a high success rate in candidate gene vali","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Genome-wide association ( GWA ) is gaining popularity as a means to study the architecture of complex quantitative traits , partially due to the improvement of high-throughput low-cost genotyping and phenotyping technologies ., Glucosinolate ( GSL ) secondary metabolites within Arabidopsis spp ., can serve as a model system to understand the genomic architecture of adaptive quantitative traits ., GSL are key anti-herbivory defenses that impart adaptive advantages within field trials ., While little is known about how variation in the external or internal environment of an organism may influence the efficiency of GWA , GSL variation is known to be highly dependent upon the external stresses and developmental processes of the plant lending it to be an excellent model for studying conditional GWA ., To understand how development and environment can influence GWA , we conducted a study using 96 Arabidopsis thaliana accessions , >40 GSL phenotypes across three conditions ( one developmental comparison and one environmental comparison ) and \u223c230 , 000 SNPs ., Developmental stage had dramatic 
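The group comparison above can be sketched as a 2×2 Pearson chi-square test. The counts below are hypothetical round numbers chosen only to mirror the reported incidences (roughly 53% altered GSL phenotypes for a GWA pathway-identified mutant set vs. roughly 13% for the random-control set), not the study's actual tallies:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        raise ValueError("degenerate table: a zero row or column")
    return n * (a * d - b * c) ** 2 / denom

# Hypothetical counts: GSL phenotypes altered vs. unaltered per group.
gwa_altered, gwa_unaltered = 53, 47     # ~53% incidence (GWA-identified set)
ctrl_altered, ctrl_unaltered = 13, 87   # ~13% incidence (random controls)

chi2 = chi_square_2x2(gwa_altered, gwa_unaltered, ctrl_altered, ctrl_unaltered)
significant = chi2 > 10.83  # critical value for p < 0.001 at df = 1
```

With these illustrative counts the statistic is about 36, well past the df = 1, p < 0.001 critical value, matching the direction of the comparison reported above.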
effects on the outcome of GWA , with each stage identifying different loci associated with GSL traits ., Further , while the molecular bases of numerous quantitative trait loci ( QTL ) controlling GSL traits have been identified , there is currently no estimate of how many additional genes may control natural variation in these traits ., We developed a novel co-expression network approach to prioritize the thousands of GWA candidates and successfully validated a large number of these genes as influencing GSL accumulation within A . thaliana using single gene isogenic lines ., Together , these results suggest that complex traits imparting environmentally contingent adaptive advantages are likely influenced by up to thousands of loci that are sensitive to fluctuations in the environment or developmental state of the organism ., Additionally , while GWA is highly conditional upon genetics , the use of additional genomic information can rapidly identify causal loci en masse .","summary":"Understanding how genetic variation can control phenotypic variation is a fundamental goal of modern biology ., A major push has been made using genome-wide association mapping in all organisms to attempt and rapidly identify the genes contributing to phenotypes such as disease and nutritional disorders ., But a number of fundamental questions have not been answered about the use of genome-wide association: for example , how does the internal or external environment influence the genes found ?, Furthermore , the simple question of how many genes may influence a trait is unknown ., Finally , a number of studies have identified significant false-positive and -negative issues within genome-wide association studies that are not solvable by direct statistical approaches ., We have used genome-wide association mapping in the plant Arabidopsis thaliana to begin exploring these questions ., We show that both external and internal environments significantly alter the identified genes , such 
that using different tissues can lead to the identification of nearly completely different gene sets ., Given the large number of potential false-positives , we developed an orthogonal approach to filtering the possible genes , by identifying co-functioning networks using the nominal candidate gene list derived from genome-wide association studies ., This allowed us to rapidly identify and validate a large number of novel and unexpected genes that affect Arabidopsis thaliana defense metabolism within phenotypic ranges that have been shown to be selectable within the field ., These genes and the associated networks suggest that Arabidopsis thaliana defense metabolism is more readily similar to the infinite gene hypothesis , according to which there is a vast number of causative genes controlling natural variation in this phenotype ., It remains to be seen how frequently this is true for other organisms and other phenotypes .","keywords":"genome-wide association studies, functional genomics, plant biology, population genetics, metabolic networks, plant science, genome complexity, genetic polymorphism, plant genetics, biology, systems biology, plant biochemistry, genetics, genomics, evolutionary biology, gene networks, computational biology, genetics and genomics","toc":"Genome-wide association mapping is highly sensitive to environmental changes, but network analysis allows rapid causal gene identification."} +{"Unnamed: 0":53,"id":"journal.pcbi.1002714","year":2012,"title":"Confidence-based Somatic Mutation Evaluation and Prioritization","sections":"Next generation sequencing ( NGS ) has revolutionized our ability to determine genomes and compare , for example , tumor to normal cells to identify somatic mutations ., However , the platform is not error free and various experimental and algorithmic factors contribute to the false positive rate when identifying somatic mutations 1 ., Indeed , recent studies report validation rates of 54% 2 ., Error sources include PCR 
artifacts , biases in priming 3 , 4 and targeted enrichment 5 , sequence effects 6 , base calling causing sequence errors 7 , variations in coverage , and uncertainties in read alignments 8 , such as around insertions and deletions ( indels ) 9 ., Reflecting the rapid development of bench and computational methods , algorithms to identify somatic mutations from NGS data are still evolving rapidly ., Remarkably , the congruence of identified mutations between current algorithms is less than 50% ( below ) ., Given the large discrepancies , one is left wondering which mutations to select , such as for clinical decision making or ranking for follow-up experiments ., Ideal would be a statistical value , such as a p-value , indicating the confidence of each mutation call ., Error sources have been addressed by examining bulk sets of mutations , such as computational methods to measure the expected amount of false positive mutation calls utilizing the transition\/transversion ratio of a set of variations 10 , 11 , machine learning 12 and inheritance errors when working with family genomes 13 or pooled samples 14 , 15 ., Druley et al . 13 optimized variation calls using short plasmid sequence fragments for optimization ., The accuracy of calling germline variations , i . e . 
single nucleotide polymorphisms ( SNPs ) , has been addressed by validating SNPs using other techniques such as genotyping microarrays 15 ., Thus , these methods enable a comparison of methods to identify and characterize error sources , but they do not assign a ranking score to individual mutations ., Several NGS mutation identification algorithms do output multiple parameters for each mutation call , such as coverage , genotype quality and consensus quality ., However , it is not clear if and how to interpret these metrics with regard to whether a mutation call is correct ., Furthermore , multiple parameters are generated for each mutation call and thus one simply cannot rank or prioritize mutations using the values ., Instead , researchers often rely on personal experience and arbitrary filtering thresholds to select mutations ., In summary , ( a ) there is a low level of congruence between somatic mutations identified by different algorithms and sequencing platforms , and ( b ) there is no method to assign a single accuracy estimate to individual mutations ., Here , we develop a methodology to assign a confidence value - a false discovery rate ( FDR ) - to individual identified mutations ., This algorithm does not identify mutations but rather estimates the accuracy of each mutation ., The method is applicable both to the selection and prioritization of mutations and to the development of algorithms and methods ., Using Illumina HiSeq reads and the algorithms GATK , SAMtools and SomaticSNiPer , we identified 4 , 078 somatic mutations in B16 melanoma cells ., We assigned an FDR to each mutation and show that 50 of 50 mutations with low FDR ( high confidence ) validated while 0 of 44 with high FDR ( low confidence ) validated ., To discover mutations , DNA from tail tissue of three black6 mice , all litter mates , and DNA from three B16 melanoma samples , was extracted and exon-encoding sequences were captured , resulting in six samples ., RNA was extracted from B16 cells in
triplicate ., Single end 50 nt ( 1\u00d750 nt ) and paired end 100 nt ( 2\u00d7100 nt ) reads were generated on an Illumina HiSeq 2000 ( Supplementary Table S1 in Text S1 ) ., Each sample was sequenced on an individual lane , resulting in an average of 104 million reads per lane ., DNA reads were aligned to the mouse reference genome using the Burrows-Wheeler Alignment Tool ( bwa ) 16 and RNA reads were aligned with bowtie 17 ., Using the 1\u00d750 nt reads , 97% of the targeted nucleotides were covered at least once , the mean\/median targeted nucleotide coverage was 38\u00d7\/30\u00d7 and 70\u201373% of target nucleotides had 20\u00d7 or higher coverage ., Using the 2\u00d7100 nt reads , 98% of the targeted nucleotides were covered at least once , the mean\/median targeted nucleotide coverage was 165\u00d7\/133\u00d7 and 97% of target nucleotides had 20\u00d7 or higher coverage ., Somatic mutations were independently identified using the software packages SAMtools 18 , GATK 11 and SomaticSNiPer 19 ( Figure 2 ) by comparing the single nucleotide variations found in B16 samples to the corresponding loci in the black6 samples ( B16 cells were originally derived from a black6 mouse ) ., The potential mutations were filtered according to recommendations from the respective software tools ( SAMtools and GATK ) or by selecting an appropriate threshold for the somatic score of SomaticSNiPer ( Methods ) ., Considering only those mutations found in all tumor-normal pairings , the union of B16 somatic mutations identified by the three algorithms was 4 , 078 ( Figure 3a ) ., However , substantial differences between the sets of mutations identified by each program exist , even when considering those mutations found in all tumor-normal pairings ( Figure 3a ) ., While 1 , 355 mutations are identified by all three programs ( 33% of 4 , 078 ) , the agreement between results is low ., Of the 2 , 484 mutations identified by GATK , only 1 , 661 ( 67% ) are identified by SAMtools and 1 ,
469 ( 60% ) are identified by SomaticSNiPer ., Of the 3 , 109 mutations identified by SAMtools , only 53% and 66% are identified by GATK and SomaticSNiPer , respectively ., Of the 2 , 302 mutation identified by SomaticSNiPer , only 64% and 89% are identified by GATK and SAMtools , respectively ., The number of 1 , 355 mutations identified by all three algorithms reflects only 55% ( GATK ) , 44% ( SAMtools ) and 59% ( SomaticSNiPer ) of the mutations found by the individual programs , respectively ., We want to assign each somatic mutation a single quality score Q that could be used to rank mutations based on confidence ., However , it is not straightforward to assign a single value since most mutation detection algorithms output multiple scores , each reflecting a different quality aspect ., Thus , we generated a random forest classifier 20 that combines multiple scores , resulting in a single quality score Q ( Methods ) ., All identified somatic mutations , whether from the \u201csame versus same\u201d or \u201ctumor versus normal\u201d comparison , thus are assigned a single value predicting accuracy ., Note that the classifier training needs to be performed separately for each program , due to the differences in the set of scores which are returned by the individual programs ., After defining a relevant quality score , we sought to re-define the score into a statistically relevant false discovery rate ( FDR ) ., We determined , at each Q value , the number of mutations with a better Q score in the \u201csame versus same\u201d and the number of mutations with a better Q score in the \u201ctumor versus normal\u201d pair ., For a given mutation with quality score Q detected in the \u201ctumor versus normal\u201d comparison , we estimate the false discovery rate by computing the ratio of \u201csame versus same\u201d mutations with a score of Q or better to the overall number of mutations found in the tumor comparison with a score of Q or better ., A potential bias 
in comparing methods is differential coverage; we thus normalize the false discovery rate for the number of bases covered by NGS reads in each sample: We calculate the common coverage by counting all bases of the reference genome which are covered by data of the tumor and normal sample or by both \u201csame versus same\u201d samples , respectively ., After assigning our FDR to each mutation , the FDR-sorted list of somatic mutations shows a clear enrichment of mutations found by all three programs in the low-FDR region ( Figure 3b; see Supplementary Dataset S1 for a complete list ) ., This observation fits the na\u00efve assumption that the consensus of multiple different algorithms is likely to be correct ., We identified 50 mutations with a low FDR ( high confidence ) for validation , including 41 with an FDR less than 0 . 05 ( Figure 3c ) ., All 50 were validated by a combination of Sanger resequencing and inspection of the B16 RNA-Seq sequence reads ., Table 1 lists the ten somatic mutations with the best FDRs , all of which validated ., We selected 44 mutations identified by at least one detection algorithm , present in only one B16 sample and assigned a high FDR ( >0 .
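The FDR definition above (the coverage-normalized count of "same versus same" calls scoring at least Q, divided by the count of tumor-versus-normal calls scoring at least Q) can be sketched as follows; the function and variable names are ours, not taken from the paper's software:

```python
from bisect import bisect_left

def assign_fdr(tumor_scores, same_scores, cov_tumor, cov_same):
    """For each tumor-vs-normal call with quality score Q, estimate
    FDR(Q) = (#same-vs-same calls with score >= Q / cov_same)
           / (#tumor-vs-normal calls with score >= Q / cov_tumor),
    capped at 1. Higher Q means higher confidence."""
    t_sorted = sorted(tumor_scores)
    s_sorted = sorted(same_scores)
    fdrs = []
    for q in tumor_scores:
        n_same = len(s_sorted) - bisect_left(s_sorted, q)
        n_tumor = len(t_sorted) - bisect_left(t_sorted, q)  # always >= 1 (q itself)
        fdrs.append(min(1.0, (n_same / cov_same) / (n_tumor / cov_tumor)))
    return fdrs

# Toy example with equal common coverage: high-Q tumor calls get low FDR.
fdrs = assign_fdr([0.9, 0.8, 0.2], [0.1, 0.2, 0.3], cov_tumor=1.0, cov_same=1.0)
```

In the toy example the two high-scoring calls receive FDR 0 (no "same versus same" call scores that high), while the low-scoring call receives a high FDR.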
5 ) by our algorithm ( Figure 3c ) ., In contrast to the low-FDR mutations , none of the 44 high FDR samples validated , neither by Sanger sequencing nor by inspection of the RNA alignments ., 37 of those mutations were clear false positives ( no mutation by Sanger or RNA-Seq ) while the remaining seven loci neither yielded sequencing reactions nor were covered by RNA-Seq reads ., Figure 4 shows representative mutations together with the Sanger sequencing traces ., In the case of the false positive mutation , the three programs used identified this locus in black6 as a sequencing error ( and did not output a mutation at this locus ) , but failed in the single B16 case ( marked with the red box ) ., Had a real experiment included only this single sample , it would have produced a false positive mutation call , despite using the consensus of three programs ., To test mutations with less extreme FDRs , we selected 45 somatic mutations , which were distributed evenly across the FDR spectrum from 0 . 1 to 0 . 6 ., Validation using both Sanger sequencing and inspection of the RNA-Seq reads resulted in 15 positive validations ( either Sanger sequencing or RNA-Seq reads ) , 22 negative validations ( neither Sanger sequencing nor RNA-Seq reads ) and 8 non-conclusive ( failed sequencing reactions and no RNA-Seq coverage ) ., See Supplementary Dataset S2 for a detailed table showing the results of the validation of those 45 mutations ., We computed a receiver operating characteristic ( ROC ) curve for all 131 validated mutations ( Figure 5a ) , resulting in an area under the curve ( AUC ) 21 of 0 . 96 ., As this analysis might be biased due to the relatively large set sizes of the high and low FDR mutations , we randomly sampled 10 mutations each , added the 37 validated mutations with the intermediate FDRs , calculated the ROC-AUC and repeated this 1000 times in order to get a more robust performance estimate ., The resulting mean AUC is 0 . 797 ( +\u22120 .
002 ) ., A systematic test of FDR thresholds ranging from zero to one with a step size of 0 . 05 implies that an optimal threshold for using the FDR as a binary classifier should be \u22640 . 2 ., ROC curves and the corresponding AUC are useful for comparing classifiers and visualizing their performance 21 ., We extended this concept for evaluating the performance of experimental and computational procedures ., However , plotting ROC graphs requires knowledge of all true and false positives ( TP and FP ) in a dataset , information which is usually not given and hard to establish for high throughput data ( such as NGS data ) ., Thus , we used the calculated FDRs to estimate the respective TP and FP rates and plot a ROC curve and calculate the AUC ( Figure 1c ) ., Figure 5b shows the ROC curve comparing the FDR versus the percent of 50 validated mutations and percent of total ., ROC curves and the associated AUC values can be compared across experiments , lab protocols , and algorithms ., For the following comparisons , we used all somatic mutations found by any algorithm and in any tumor-normal pairing without applying any filter procedure ., We considered only those mutations in target regions ( exons ) ., First , we tested the influence of the reference \u201csame versus same\u201d data on the calculation of the FDRs ., Using the triplicate black6 and B16 sequencing runs , we created 18 triplets ( combinations of \u201cblack6 versus black6\u201d and \u201cblack6 versus B16\u201d ) to use for calculating the FDR ., When comparing the resulting FDR distributions for the sets of somatic mutations , the results are consistent when the reference data sets are exchanged ( Figure 1c , Supplementary Figure S2 in Text S1 ) ., This suggests that the method is robust with regards to the choice of the reference \u201csame versus same\u201d dataset ., Thus , a \u201csame versus same\u201d duplicate profiling needs only be done once for a given lab platform and the resultant 
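A ROC-AUC over a validated call set, as used above, can be computed directly from the FDR ranking via the Mann-Whitney formulation: the AUC is the probability that a randomly chosen validated mutation outranks a randomly chosen non-validated one. A minimal sketch with hypothetical FDRs and validation labels (not the paper's data):

```python
def roc_auc(scores, labels):
    """AUC as the normalized Mann-Whitney U statistic; ties count half.
    We score by negative FDR, so lower FDR ranks higher."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical set: low-FDR mutations validated, high-FDR ones did not.
fdr = [0.01, 0.03, 0.10, 0.55, 0.70, 0.90]
validated = [1, 1, 1, 0, 0, 0]
auc = roc_auc([-f for f in fdr], validated)  # perfect separation gives 1.0
```

The same function can be fed resampled subsets (e.g. repeatedly drawing balanced high/low-FDR samples) to obtain a mean AUC in the spirit of the 1000-fold resampling described above.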
FDR ( Q ) reference function can be re-used for future profiling ., Using our definition of a false discovery rate , we have established a generic framework for evaluating the influence of numerous experimental and algorithmic parameters on the resulting set of somatic mutations ., We apply this framework to study the influence of software tools , coverage , paired end sequencing and the number of technical replicates on somatic mutation identification ., First , the choice of the software tool has a clear impact on the identified somatic mutations ( Figure 3 ) ., On the tested data , SAMtools produces the highest enrichment of true positive somatic mutations ( Figure 6a ) ., We note that each tool has different parameters and quality scores for mutation detection; we used the default settings as specified by the algorithm developers ., The impact of the coverage depth on whole genome SNV detection has been recently discussed 22 ., For the B16 sequencing experiment , we sequenced each sample in an individual flowcell lane and achieved a target region mean base coverage of 38 fold across target nucleotides ., In order to study the effect of the coverage on exon capture data , we down-sampled the number of aligned sequence reads for every 1\u00d750 nt library to generate a mean coverage of 5 , 10 and 20 fold , respectively , and then reapplied the mutation identification algorithms ., As expected , a higher coverage results in a better ( i . e . 
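The coverage down-sampling step described above can be approximated by Bernoulli-thinning the aligned reads with keep probability target/actual mean coverage (the same idea as `samtools view -s`); the read identifiers and coverage values below are placeholders, not the study's data:

```python
import random

def downsample(reads, mean_cov, target_cov, seed=42):
    """Keep each aligned read with probability target_cov / mean_cov so the
    expected mean coverage drops to target_cov. Returns the full list
    unchanged if the target is at or above the actual coverage."""
    if target_cov >= mean_cov:
        return list(reads)
    keep = target_cov / mean_cov
    rng = random.Random(seed)  # fixed seed: reproducible down-sampling
    return [r for r in reads if rng.random() < keep]

# e.g. thin a 38x library (as in the 1x50 nt runs) to roughly 20x
subset = downsample(range(100_000), mean_cov=38, target_cov=20)
```

After thinning, the mutation callers would be rerun on the reduced alignment, as done above for the 5-, 10- and 20-fold comparisons.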
fewer false positives ) somatic mutation set , although the improvement from the 20 fold coverage to 38 fold is marginal for the B16 cells ( Figure 6b ) ., It is straightforward to simulate and rank other experimental settings using the available data and framework ( Figures 6c and d ) ., As we profiled each sample in triplicate , including three separate exon captures , we wanted to identify the impact of these replicates ., Comparing duplicates to triplicates , triplicates do not offer a benefit compared to the duplicates ( Figure 6c ) , while duplicates offer a clear improvement compared to a study without any replicates ( indicated by the higher AUC ) ., In terms of the ratio of somatic mutations at a FDR of 0 . 05 or less , we see enrichment from 24% for a run without replicates to 71% for duplicates and 86% for triplicates ., These percentages correspond to 1441 , 1549 and 1524 mutations , respectively ., Using the intersection of triplicates removes more mutations with low FDRs than mutations with a high FDR , as indicated by the lower ROC AUC and the shift of the curve to the right ( Supplementary Figure S7 in Text S1 , Figure 6c ) : the specificity is slightly increased at the cost of a lower sensitivity , when assuming that removed low FDR mutations are true positives and the removed high FDR mutations are true negatives ., This assumption is supported by our validation experiments , as true negative mutations are likely to get a high FDR ( Figure 5a ) ., The 2\u00d7100 nt library was used to create 6 libraries: a 2\u00d7100 nt library; a 1\u00d7100 nt library; a 1\u00d750 nt library using the 50 nucleotides at the 5\u2032 end of the first read; a 1\u00d750 nt library using the nucleotides 51 to 100 at the 3\u2032 end of the first read; a 2\u00d750 nt read using nucleotides 1 to 50 of both reads; and a 2\u00d750 nt library using nucleotides 51 to 100 of both reads ., These libraries were compared using the calculated FDRs of predicted mutations ( Figure 
6d ) ., The 1\u00d750 3\u2032 library performed worst , as expected due to the increasing error rate at the 3\u2032 end of sequence reads ., Despite the much higher median coverage ( 63\u201365 vs . 32 ) , the somatic mutations found using the 2\u00d750 5\u2032 and 1\u00d7100 nt libraries have a smaller AUC than the 1\u00d750 nt library ., This surprising effect is a result of high FDR mutations in regions with low coverage ( Supplementary Text S1 ) ., Indeed , the sets of low FDR mutations are highly similar ., Thus , while the different read lengths and types identify non-identical mutations , the assigned FDR is nevertheless able to segregate true and false positives ( Supplementary Figure S3 in Text S1 ) ., NGS is a revolutionary platform for detecting somatic mutations ., However , the error rates are not insignificant , with different detection algorithms identifying mutations with less than 50% congruence ., Other high throughput genomic profiling platforms have developed methods to assign confidence values to each call , such as p-values associated with differential expression calls from oligonucleotide microarray data ., Similarly , we developed here a method to assign a confidence value ( FDR ) to each identified mutation ., From the set of mutations identified by the different algorithms , the FDR accurately ranks mutations based on the likelihood of being correct ., Indeed , we selected 50 high confidence mutations and all 50 validated; we selected 45 intermediate confidence mutations and 15 validated , 22 were not present and 8 inconclusive; we selected 44 low confidence mutations and none validated ., Again , all 139 mutations were identified by at least one of the detection algorithms ., Unlike a consensus or majority voting approach , the assigned FDR not only effectively segregates true and false positives but also provides both the likelihood that the mutation is true and a statistical ranking ., Also , our method allows the adjustment for a desired
sensitivity or specificity which enables the detection of more true mutations than a consensus or majority vote , which report only 50 or 52 of all 65 validated mutations ., We applied the method to a set of B16 melanoma cell experiments ., However , the method is not restricted to these data ., The only requirement is the availability of a \u201csame versus same\u201d reference dataset , meaning at least a single replicate of a non-tumorous sample should be performed for each new protocol ., Our experiments indicate that the method is robust with regard to the choice of the replicate , so that a replicate is not necessarily required in every single experiment ., Once done , the derived FDR ( Q ) function can be reused when the Q scores are comparable ( i . e . when the same program for mutation discovery was used ) ., Here , we profiled all samples in triplicate; nevertheless , the method produces FDRs for each mutation from single-run tumor and normal profiles ( non-replicates ) using the FDR ( Q ) function ., We do show , however , that duplicates improve data quality ., Furthermore , the framework enables one to define best practice procedures for the discovery of somatic mutations ., For cell lines , at least 20-fold coverage and a replicate achieve close to the optimum results ., A 1\u00d750 nt library resulting in approximately 100 million reads is a pragmatic choice to achieve this coverage ., The possibility of using a reference data set to rank the results of another experiment can also be exploited to e . g . 
score somatic mutations found in different normal tissues by similar methods ., Here , one would expect relatively few true mutations , so an independent set of reference data will improve the resolution of the FDR calculations ., While we define the optimum as the lowest number of false positive mutation calls , this definition might not suffice for other experiments , such as for genome wide association studies ., However , our method allows the evaluation of the sensitivity and specificity of a given mutation set and we show application of the framework to four specific questions ., The method is by no means limited to these parameters , but can be applied to study the influence of all experimental or algorithmic parameters , e . g . the influence of the alignment software , the choice of a mutation metric or the choice of vendor for exome selection ., In summary , we have pioneered a statistical framework for the assignment of a false-discovery-rate to the detection of somatic mutations ., This framework allows for a generic comparison of experimental and computational protocol steps on generated quasi ground truth data ., Furthermore , it is applicable for the diagnostic or therapeutic target selection as it is able to distinguish true mutations from false positives ., Next-generation sequencing , DNA sequencing: Exome capture for DNA resequencing was performed using the Agilent Sure-Select solution-based capture assay 23 , in this case designed to capture all known mouse exons ., 3 \u00b5g purified genomic DNA was fragmented to 150\u2013200 nt using a Covaris S2 ultrasound device ., gDNA fragments were end repaired using T4 DNA polymerase , Klenow DNA polymerase and 5\u2032 phosphorylated using T4 polynucleotide kinase ., Blunt ended gDNA fragments were 3\u2032 adenylated using Klenow fragment ( 3\u2032 to 5\u2032 exo minus ) ., 3\u2032 single T-overhang Illumina paired end adapters were ligated to the gDNA fragments using a 10\u22361 molar ratio of adapter 
to genomic DNA insert using T4 DNA ligase ., Adapter ligated gDNA fragments were enriched pre capture and flow cell specific sequences were added using Illumina PE PCR primers 1 . 0 and 2 . 0 and Herculase II polymerase ( Agilent ) using 4 PCR cycles ., 500 ng of adapter ligated , PCR enriched gDNA fragments were hybridized to Agilent's SureSelect biotinylated mouse whole exome RNA library baits for 24 hrs at 65\u00b0C ., Hybridized gDNA\/RNA bait complexes were removed using streptavidin coated magnetic beads ., gDNA\/RNA bait complexes were washed and the RNA baits cleaved off during elution in SureSelect elution buffer , leaving the captured adapter ligated , PCR enriched gDNA fragments ., gDNA fragments were PCR amplified post capture using Herculase II DNA polymerase ( Agilent ) and SureSelect GA PCR Primers for 10 cycles ., Cleanups were performed using 1 . 8\u00d7 volume of AMPure XP magnetic beads ( Agencourt ) ., For quality controls we used Invitrogen's Qubit HS assay and fragment size was determined using Agilent's 2100 Bioanalyzer HS DNA assay ., Exome enriched gDNA libraries were clustered on the cBot using Truseq SR cluster kit v2 . 5 using 7 pM and sequenced on the Illumina HiSeq2000 using the Truseq SBS kit ., Sequence reads were aligned using bwa ( version 0 . 5 . 8c ) 16 using default options to the reference mouse genome assembly mm9 24 ., Ambiguous reads \u2013 those reads mapping to multiple locations of the genome as provided by the bwa output \u2013 were removed ( see Supplementary Dataset S3 for the alignment statistics ) ., The remaining alignments were sorted , indexed and converted to a binary and compressed format ( BAM ) and the read quality scores converted from the Illumina standard phred+64 to standard Sanger quality scores using shell scripts ., For each sequencing lane , mutations were identified using three software programs: SAMtools pileup ( version 0 . 1 . 8 ) 18 , GATK ( version 1 . 0 .
4418 ) 11 and SomaticSNiPer 19 ., For SAMtools , the author-recommended options and filter criteria were used ( http:\/\/sourceforge . net\/apps\/mediawiki\/SAMtools\/index . php ? title=SAM_FAQ; accessed September 2011 ) , including first round filtering , maximum coverage 200 ., For SAMtools second round filtering , the point mutation minimum quality was 30 ., For GATK mutation calling , we followed the author-designed best practice guidelines presented in the GATK user manual ( http:\/\/www . broadinstitute . org\/gsa\/wiki\/index . php ? title=Best_Practice_Variant_Detection_with_the_GATK_v2&oldid=5207; accessed October 2010 ) ., For each sample a local realignment around indel sites followed by a base quality recalibration was performed ., The Unified Genotyper module was applied to the resultant alignment data files ., When needed , the known polymorphisms of dbSNP 25 ( version 128 for mm9 ) were supplied to the individual steps ., The variant score recalibration step was omitted and replaced by the hard-filtering option ., For both SAMtools and GATK , potential indels were filtered out of the results before further processing and a mutation was accepted as somatic if it was present in the data for B16 but not in the black6 sample ., Additionally , as a post filter , for each potentially mutated locus we required non-zero coverage in the normal tissue ., This filter removes mutations that only appear somatic because the locus is not covered in the black6 samples ., For SomaticSNiPer mutation calling , the default options were used and only predicted mutations with a \u201csomatic score\u201d of 30 or more were considered further ( see Supplementary Text S1 for a description of the cutoff selection ) ., For all three programs , we removed all mutations located in repetitive sequences as defined by the RepeatMasker track of the UCSC Genome Browser 26 for the mouse genome assembly mm9 ., Barcoded mRNA-seq cDNA libraries were prepared from 5 \u00b5g of
total RNA using a modified version of the Illumina mRNA-seq protocol ., mRNA was isolated using SeramagOligo ( dT ) magnetic beads ( Thermo Scientific ) ., Isolated mRNA was fragmented using divalent cations and heat resulting in fragments ranging from 160\u2013200 bp ., Fragmented mRNA was converted to cDNA using random primers and SuperScriptII ( Invitrogen ) followed by second strand synthesis using DNA polymerase I and RNaseH ., cDNA was end repaired using T4 DNA polymerase , Klenow DNA polymerase and 5\u2032 phosphorylated using T4 polynucleotide kinase ., Blunt ended cDNA fragments were 3\u2032 adenylated using Klenow fragment ( 3\u2032 to 5\u2032 exo minus ) ., 3\u2032 single T-overhang Illumina multiplex specific adapters were ligated on the cDNA fragments using T4 DNA ligase ., cDNA libraries were purified and size selected at 300 bp using the E-Gel 2% SizeSelect gel ( Invitrogen ) ., Enrichment , adding of Illumina six base index and flow cell specific sequences was done by PCR using Phusion DNA polymerase ( Finnzymes ) ., All cleanups were performed using 1 . 8\u00d7 volume of Agencourt AMPure XP magnetic beads ., Barcoded RNA-seq libraries were clustered on the cBot using Truseq SR cluster kit v2 . 
5 using 7 pM and sequenced on the Illumina HiSeq2000 using Truseq SBS kit ., The raw output data of the HiSeq was processed according to the Illumina standard protocol , including removal of low quality reads and demultiplexing ., Sequence reads were then aligned to the reference genome sequence 24 using bowtie 17 ., The alignment coordinates were compared to the exon coordinates of the RefSeq transcripts 27 and for each transcript the counts of overlapping alignments were recorded ., Sequence reads not aligning to the genomic sequence were aligned to a database of all possible exon-exon junction sequences of the RefSeq transcripts 27 ., The alignment coordinates were compared to RefSeq exon and junction coordinates , reads counted and normalized to RPKM ( number of reads which map per nucleotide kilobase of transcript per million mapped reads 28 ) for each transcript ., We selected SNVs for validation by Sanger re-sequencing and RNA-Seq ., We identified SNVs that were predicted by all three programs , were non-synonymous , and were found in transcripts having a minimum of 10 RPKM ., Of these , we selected the 50 with the highest SNP quality scores as provided by the programs ., As a negative control , 44 SNVs were selected which have an FDR of 0 .
5 or more , are present in only one cell line sample and are predicted by only one mutation calling program ., 45 mutations with intermediate FDR levels were selected ., Using DNA , the selected variants were validated by PCR amplification of the regions using 50 ng of DNA ( see Supplementary Dataset S4 for the primer sequences and targeted loci ) , followed by Sanger sequencing ( Eurofins MWG Operon , Ebersberg , Germany ) ., The reactions were successful for 50 , 32 and 37 loci of positive , negative and intermediate controls , respectively ., Validation was also done by examination of the tumor RNA-Seq reads ., Random Forest Quality Score Computation: Commonly-used mutation calling algorithms ( 11 , 18 , 19 ) output multiple scores , all of which potentially influence the quality of the mutation call ., These include \u2013 but are not limited to \u2013 the quality of the base of interest as assigned by the instrument , the alignment quality and number of reads covering this position or a score for the difference between the two genomes compared at this position ., For the computation of the false discovery rate we require an ordering of mutations; however , this is not directly feasible for all mutations since we might have contradicting information from the various quality scores ., We use the following strategy to achieve a complete ordering ., In a first step , we apply a very rigorous definition of superiority by assuming that a mutation has better quality than another if and only if it is superior in all categories ., So a set of quality properties S\\u200a= ( s_1 , \u2026 , s_n ) is preferable to T\\u200a= ( t_1 , \u2026 , t_n ) , denoted by S>T , if s_i>t_i for all i\\u200a=\\u200a1 , \u2026 , n ., We define an intermediate FDR ( IFDR ) as follows ., However , we regard the IFDR only as an intermediate step since in many closely related cases , no comparison is feasible and we are thus not benefitting from the vast amount of data available ., Thus , we take advantage
of the good generalization property of random forest regression 20 and train a random forest as implemented in R ( 29 , 30 ) ., For m input mutations with n quality properties each , the value range for each property was determined and up to p values were sampled with uniform spacing out of this range; when the set of values for a quality property was smaller than p , this set was used instead of the sampled set ., Then each possible combination of sampled or selected quality values was created , which resulted in a maximum of p^n data points in the n-dimensional quality space ., A random sample of 1% of these points and the corresponding IFDR values were used as predictor and response , respectively , for the random forest training ., The resulting regression score is our generalized quality score Q; it can be regarded as a locally weighted combination of the individual quality scores ., It allows direct , single value comparison of any two mutations and the computation of the actual false discovery rate ., For the training of the random forest models used to create the results for this study , we calculate the sample IFDR on the somatic mutations of all samples before selecting the random 1% subset ., This ensures the mapping of the whole available quality space to FDR values ., We used the quality properties \u201cSNP quality\u201d , \u201ccoverage depth\u201d , \u201cconsensus quality\u201d and \u201cRMS mapping quality\u201d ( SAMtools , p\\u200a=\\u200a20 ) ; \u201cSNP quality\u201d , \u201ccoverage depth\u201d , \u201cVariant confidence\/unfiltered depth\u201d and \u201cRMS mapping quality\u201d ( GATK , p\\u200a=\\u200a20 ) ; or \u201cSNP quality\u201d , \u201ccoverage depth\u201d , \u201cconsensus quality\u201d , \u201cRMS mapping quality\u201d and \u201csomatic score\u201d ( SomaticSNiPer , p\\u200a=\\u200a12 ) , respectively ., The different values of p ensure a set size of comparable magnitude ., To acquire the \u201csame vs . same\u201d and \u201csame vs .
different\u201d data when calculating the FDRs for a given set of mutations , we use all variants generated by the different programs without any additional filtering ., Common coverage computation: The number of possible mutation calls can introduce a major bias in the definition of a false discovery rate ., Only if we have the same number of possible locations for mutations to occur for our tumor comparison and for our \u201csame vs . same\u201d comparison , the number of called mutations is comparable and can serve as a basis for a false discov","headings":"Introduction, Results, Discussion, Methods","abstract":"Next generation sequencing ( NGS ) has enabled high throughput discovery of somatic mutations ., Detection depends on experimental design , lab platforms , parameters and analysis algorithms ., However , NGS-based somatic mutation detection is prone to erroneous calls , with reported validation rates near 54% and congruence between algorithms less than 50% ., Here , we developed an algorithm to assign a single statistic , a false discovery rate ( FDR ) , to each somatic mutation identified by NGS ., This FDR confidence value accurately discriminates true mutations from erroneous calls ., Using sequencing data generated from triplicate exome profiling of C57BL\/6 mice and B16-F10 melanoma cells , we used the existing algorithms GATK , SAMtools and SomaticSNiPer to identify somatic mutations ., For each identified mutation , our algorithm assigned an FDR ., We selected 139 mutations for validation , including 50 somatic mutations assigned a low FDR ( high confidence ) and 44 mutations assigned a high FDR ( low confidence ) ., All of the high confidence somatic mutations validated ( 50 of 50 ) , none of the 44 low confidence somatic mutations validated , and 15 of 45 mutations with an intermediate FDR validated ., Furthermore , the assignment of a single FDR to individual mutations enables statistical comparisons of lab and computation methodologies , 
including ROC curves and AUC metrics ., Using the HiSeq 2000 , single end 50 nt reads from replicates generate the highest confidence somatic mutation call set .","summary":"Next generation sequencing ( NGS ) has enabled unbiased , high throughput discovery of genetic variations and somatic mutations ., However , the NGS platform is still prone to errors resulting in inaccurate mutation calls ., A statistical measure of the confidence of putative mutation calls would enable researchers to prioritize and select mutations in a robust manner ., Here we present our development of a confidence score for mutations calls and apply the method to the identification of somatic mutations in B16 melanoma ., We use NGS exome resequencing to profile triplicates of both the reference C57BL\/6 mice and the B16-F10 melanoma cells ., These replicate data allow us to formulate the false discovery rate of somatic mutations as a statistical quantity ., Using this method , we show that 50 of 50 high confidence mutation calls are correct while 0 of 44 low confidence mutations are correct , demonstrating that the method is able to correctly rank mutation calls .","keywords":"genome sequencing, genomics, genetic mutation, genetics, biology, computational biology, genetics and genomics","toc":null} +{"Unnamed: 0":2130,"id":"journal.pcbi.1002555","year":2012,"title":"Minimum Free Energy Path of Ligand-Induced Transition in Adenylate Kinase","sections":"Biological functions of proteins are mediated by dynamical processes occurring on complex energy landscapes 1 ., These processes frequently involve large conformational transitions between two or more metastable states , induced by an external perturbation such as ligand binding 2 ., Time scales of the conformational transition are frequently of order microseconds to seconds ., To characterize such slow events in molecular dynamics ( MD ) trajectories , the free energy profile or the potential of mean force ( PMF ) along a reaction coordinate 
must be identified ., In particular , the identification of the transition state ensemble ( TSE ) enables the barrier-height to be evaluated , and the correct kinetics would be reproduced if there is only a single dominant barrier ., However , for proteins with many degrees of freedom , finding an adequate reaction coordinate and identifying the TSE is a challenging task placing high demands on computational resources ., The finite-temperature string method 3 , 4 , and the on-the-fly string method 5 find a minimum free energy path ( MFEP ) from a high-dimensional space ., Given a set of collective variables describing a conformational change , the MFEP is defined as the maximum likelihood path along the collective variables ., The MFEP is expected to lie on the center of reactive trajectories and contains only important transitional motions 4 ., Furthermore , since the MFEP approximately orthogonally intersects the isocommittor surfaces ( the surfaces of constant committor probability in the original space ) 4 , the TSE can be identified as the intersection with the isocommittor surface with probability of committing to the product ( or the reactant ) =\\u200a1\/2 ., The methods and MFEP concepts have been applied to various molecular systems 4\u20136 including protein conformational changes 7\u20139 ., With regard to high-dimensional systems like proteins , the quality of the MFEP ( whether it satisfies the above-mentioned properties ) is particularly sensitive to choice of the collective variables ., The collective variables should be selected such that their degrees of freedom are few enough to ensure a smooth free energy surface; at the same time they should be sufficiently many to approximate the committor probability 4 , 9 ., To resolve these contrary requirements , effective dimensional reduction is required ., Large conformational transitions of proteins , frequently dominated by their domain motions , can be well approximated by a small number of 
large-amplitude principal modes 2 , 10 ., This suggests that the use of the principal components may be the best choice for approximating the committor probability with the fewest number of variables for such large conformational transitions involving domain motions ., A further advantage is the smoothness of the free energy landscape in the space of the large-amplitude principal components ., If the curvature of the MFEP is large , the MFEP may provide a poor approximation to the isocommittor surface since the flux can occur between non-adjacent structures along the path 9 ., The selection of the large-amplitude principal components as the collective variables would maintain the curvature of the MFEP sufficiently small ., Here , we conducted preliminary MD simulations around the two terminal structures of the transition and performed a principal component analysis to obtain the principal components ( see Materials and Methods for details ) ., Following selection of a suitable MFEP , determination of the PMF and characterization of the physical quantities along the MFEP are needed to understand an in-depth mechanism of the transition ., Although the finite-temperature string method yields a rigorous estimate of the gradient of the PMF under a large coupling constant with the collective variables 3\u20135 ( see Materials and Methods ) , errors in the estimates of the gradients and in the tangential directions of the pathway tend to accumulate during the integration process ., To accurately quantify the PMF and the averages of various physical quantities in a multi-dimensional space , we utilized another statistical method , the multi-state Bennett acceptance ratio ( MBAR ) method 11 , which provides optimal estimates of free energy and other average quantities along the MFEP ., Here , we applied the above proposed methods to the conformational change in Escherichia coli adenylate kinase ( AK ) , the best-studied of enzymes exhibiting a large conformational 
transition 12\u201323 ., AK is a ubiquitous monomeric enzyme that regulates cellular energy homeostasis by catalyzing the reversible phosphoryl transfer reaction: ATP+AMP\u21942ADP ., According to the analysis of the crystal structures by the domain motion analysis program DynDom 24 , AK is composed of three relatively rigid domains ( Figure 1 ) : the central domain ( CORE: residues 1\u201329 , 68\u2013117 , and 161\u2013214 ) , an AMP-binding domain ( AMPbd: 30\u201367 ) , and a lid-shaped ATP-binding domain ( LID: 118\u2013167 ) ., Inspection of the crystal structures suggests that , upon ligand binding , the enzyme undergoes a transition from the inactive open form to the catalytically competent closed structure 25 ( Figure 1 ) ., This transition is mediated by large-scale closure motions of the LID and AMPbd domains insulating the substrates from the water environment , while occluding some catalytically relevant water molecules ., The ATP phosphates are bound to the enzyme through the P-loop ( residues 7\u201313 ) , a widely-distributed ATP-binding motif ., The interplay between AK's dynamics and function has been the subject of several experimental studies ., 15N NMR spin relaxation studies have revealed that the LID and AMPbd domains fluctuate on nanosecond timescales while the CORE domain undergoes picosecond fluctuations 12 , 13 ., The motions of these hinge regions are highly correlated with enzyme activity 14 ., In particular , the opening of the LID domain , responsible for product release , is thought to be the rate-limiting step of the catalytic turnover 14 ., Recent single-molecule F\u00f6rster resonance energy transfer ( FRET ) experiments have revealed that the closed and open conformations of AK exist in dynamic equilibrium even with no ligand present 15 , 16 , and that the ligand's presence merely changes the populations of open and closed conformations ., This behavior is reminiscent of the population-shift mechanism 26 rather than the induced-fit
model 27 , in which structural transitions occur only after ligand binding ., The population-shift like behaviour of AK has also been supported by simulation studies 17\u201320 ., Lou and Cukier 17 , Kubitzki and de Groot 18 , and Beckstein et al . 19 employed various enhanced sampling methods to simulate ligand-free AK transitions ., Arora and Brooks 20 applied the nudged elastic band method in the pathway search for both ligand-free and ligand-bound forms ., These studies showed that , while the ligand-free form samples conformations near the closed structure 17\u201320 , ligand binding is required to stabilize the closed structure 20 ., Despite the success of these studies based on all-atom level models , atomistic details of the transition pathways , including the structures around the TSE , have not been fully captured yet ., In this study , we successfully evaluated the MFEP for both ligand-free and ligand-bound forms of AK using the on-the-fly string method , and calculated the PMF and the averages of various physical quantities using the MBAR method ., Our analysis elucidates an in-depth mechanism of the conformational transition of AK ., The MFEPs for apo and holo-AKs , and their PMFs , were obtained from the string method and the MBAR method , respectively ( see Videos S1 and S2 ) ., The MFEPs were calculated using the same 20 principal components selected for the collective variables ., The holo-AK calculations were undertaken with the bisubstrate analog inhibitor ( Ap5A ) as the bound ligand without imposing any restraint on the ligand ., Figures 2A and 2B show the MBAR estimates of the PMFs along the images of the MFEP ( the converged string at 12 ns in Figures 2A and 2B ) for apo and holo-AK , respectively ., Here , the images on the string are numbered from the open ( ; PDBid: 4ake 28 ) to the closed conformation ( ; PDBid: 1ake 29 ) ., These terminal images were fixed during the simulations to enable sampling of the conformations around the crystal 
structures ., In the figures , the convergence of the PMF in the string method process is clearly seen in both systems ., Convergence was also confirmed by the error estimates ( Figure S1 ) , and by the root-mean-square displacement ( RMSD ) of the string from its initial path ( Figure S2 ) ., The PMF along the MFEP reveals a broad potential well on the open-side conformations of apo-AK , suggesting that the open form of AK is highly flexible 20 ., This broad well is divided into two regions , the fully open ( ) and partially closed states ( , encircled ) by a small PMF barrier ., In holo-AK ( Figure 2B ) , the MFEP exhibits a single substantial free energy barrier ( ) between the open and closed states , which does not appear in the initial path ., This barrier will be identified as the transition state below ., The PMF along the MFEP shows that the closed form ( tightly binding the ligand ) is much more stable than the open form with loose binding to the ligand ( large fluctuations of the ligand will be shown later ) ., To characterize the MFEP in terms of the domain motions , the MFEP was projected onto a space defined by two distances from the CORE domain , the distance to the LID domain and the distance to the AMPbd domain ( the distance between the mass centers of atoms for the two domains; Figures 2C and 2D ) ., The PMF was also projected onto this space ., The comparison of the two figures shows that ligand binding changes the energy landscape of AK , suggesting that this is not a simple population-shift mechanism ., In apo-AK , the motions of the LID and AMPbd domains are weakly correlated , reflecting the zipper-like interactions on the LID-AMPbd interface 19 ., The MFEP clearly indicates that the fully closed conformation ( ) involves the closure of the LID domain followed by the closure of the AMPbd domain ., The higher flexibility of the LID domain has been reported in previous studies 17 , 19 , 20 ., In holo-AK , the pathway can be described
by two successive scenarios , that is , the LID-first-closing followed by the AMPbd-first-closing ., In the open state ( ) , the MFEP is similar to that of apo-AK , revealing that LID closure occurs first ., In the closed state ( ) , however , the AMPbd closure precedes the LID closure ., This series of domain movements was also identified by the domain motion analysis program DynDom 24 ( Figure S3 ) ., In the string method , the converged pathway is known to depend on the initial path ., In order to check whether the MFEP obtained here is dependent on the initial path or not , we conducted another set of calculations for apo-AK by using a different initial path , which has an AMPbd-first-closing pathway , as opposed to the LID-first-closing pathway shown above ., If the LID and AMPbd domains move independently of each other , it is expected that LID-first-closing and AMPbd-first-closing pathways are equally stable ., Despite this initial setup , however , our calculation again showed the convergence toward the LID-first-closing pathway ( see Figure S4 ) ., As described above , this tendency of the pathways likely reflects the highly flexible nature of the LID domain ., Furthermore , in order to check whether the samples around the MFEP are consistent with the experiments , we compared the PMF as a function of the distance between the C\u03b1 atoms of Lys145 and Ile52 with the results of the single-molecule FRET experiment by Kern et al .
16 ( see Figure S5 ) ., The PMF was calculated by using the samples obtained by the umbrella samplings around the MFEP ., In the figure , the stable regions of the PMF for holo-AK are highly skewed toward the closed form , and some population toward the partially closed form was also observed even for apo-AK , which is consistent with the histogram of the FRET efficiency 16 ., To more clearly illustrate the energetics along the MFEP in terms of the domain motions , we separately plot the PMF as a function of the two inter-domain distances defined above ( Figures 3A and 3B ) ., We observe that the PMF of apo-AK has a double-well profile for the LID-CORE distance ( indicated by the blue line in Figure 3A ) , whereas the PMF in terms of the AMPbd-CORE distance is characterized by a single-well ( Figure 3B ) ., The single-molecule FRET experiments monitoring the distances between specific residue pairs involving the LID domain ( LID-CORE ( Ala127-Ala194 ) 15 and LID-AMPbd ( Lys145- Ile52 ) 16 ) revealed the presence of double-well profiles in the ligand-free form ., On the other hand , an electron transfer experiment probing the distance between the AMPbd and CORE domains ( Ala55-Val169 ) 30 showed only that the distance between the two domains decreased upon ligand binding ., Considering the PMF profiles in the context of these experimental results , we suggest that the partially closed state ( ) in apo-AK ( Figure 2A ) can be ascribed to the LID-CORE interactions but not to the AMPbd-CORE interactions ., To elucidate the origin of the stability of the partially closed state , we monitored the root mean square fluctuations ( RMSF ) of the atoms along the MFEP ( see Materials and Methods for details ) ., Figures 3C and 3D show the RMSF along the MFEP for apo and holo-AK , respectively ., In apo-AK ( Figure 3C ) , large fluctuations occur in the partially closed state ( ) around the LID-CORE hinge regions ( residue 110\u2013120 , and 130\u2013140 ) and the P-loop ( 
residue 10\u201320 ) ., It has been proposed , in the studies of AK using coarse-grained models , that \u201ccracking\u201d or local unfolding occurs due to localized strain energy , and that the strained regions reside in the LID-CORE hinge and in the P-loop 21 , 23 ., Our simulation using the all-atom model confirmed the existence of \u201ccracking\u201d in the partially closed state , and provided an atomically detailed picture of this phenomenon ., The average structures around the partially closed state revealed that , in the open state , a highly stable Asp118-Lys136 salt bridge is broken by the strain induced by closing motion around ( Figure S6A ) ., This salt bridge has been previously proposed to stabilize the open state while imparting a high enthalpic penalty to the closed state 18 ., Breakage of the salt bridge releases the local strain and the accompanying increases in fluctuation may provide compensatory entropy to stabilize the partially closed state ., A similar partially closed state of the LID domain was also found by the work of Lou and Cukier 31 in which they performed all-atom MD simulation of apo-AK at high temperature ( 500 K ) condition ., In holo-AK , both of the LID-CORE and AMPbd-CORE distances exhibit double-well profiles ( indicated by the red lines in Figures 3A and 3B ) , separating the closed from the open state ., The breakage of the 118\u2013136 salt bridge at around is not accompanied by \u201ccracking\u201d of the hinge region ( Figure 3D ) ., Instead , the hinge region is stabilized by binding of ATP ribose to Arg119 and His134 ( Figure S6B ) , leading to a smooth closure of the LID domain ., This suggests that one role of the salt bridge breakage is rearrangement of the molecular interactions to accommodate ATP-binding 32 ., P-loop fluctuations are also suppressed in holo-AK ( Figure 3D ) ., Consistent with our findings , reduced backbone flexibilities in the presence of Ap5A were reported in the above-mentioned NMR study 13 
., The origin of the double-well profile in holo-AK was investigated via the ligand-protein interactions ., The motion of the ligand along the MFEP was firstly analyzed by focusing on the AMP adenine dynamics , since the release of the AMP moiety from the AMP-binding pocket was observed in the open state ., It is again emphasized that the ligand is completely free from any restraint during the simulations ., PCA was performed for the three-dimensional Cartesian coordinates of the center of mass of AMP adenine , and the coordinates were projected onto the resultant 1st PC in Figure 4A ., The AMP adenine is observed to move as much as 10 \u00c5 in the open state ( ; Figure 4B ) , while it is confined to a narrow region of width 1\u20132 \u00c5 ( the binding pocket ) in the closed state ( ) ., Such a reduction of the accessible space of the AMP adenine might generate a drastic decrease in entropy or an increase in the PMF barrier of the open-to-closed transition ., Furthermore , close inspection of the PMF surface reveals the existence of a misbinding event at ( Figure 4B ) , in which the AMP ribose misbinds to Asp84 in the CORE domain , and is prevented from entering the AMP-binding pocket ., This event further increases the barrier-height of the transition ., The MFEP revealed that AMP adenine enters the AMP-binding pocket around , as indicated by a rapid decrease in the accessible area ( Figure 4A ) ., This event is well correlated with the position of the PMF barrier along the MFEP ( Figure 2B ) ., This coincidence between the binding process and the domain closure suggests that the two processes are closely coupled ., Before analyzing the situation in detail , however , it is necessary to assess whether the observed PMF barrier around ( Figure 2B ) corresponds to a TSE , because the PMF barrier is not necessarily a signature of dynamical bottleneck in high-dimensional systems 33 ., TSE validation is usually performed with a committor test 4 , 7 , 9 , 33 ., In 
principle , the committor test launches unbiased MD simulations from structures chosen randomly from the barrier region , and tests whether the resultant trajectories reach the product state with probability 1\/2 ., Here , since limited computational resources precluded execution of a full committor test , 40 unbiased MD simulations of 10 ns were initiated from each of , 33 or 34 , a total of 120 simulations or 1 . 2 \u03bcs , and the distributions of the final structures after 10 ns were monitored 9 ., Figure 5A shows the binned distributions of the final structures assigned by index of the nearest MFEP image ( the blue bars ) ., When the simulations were initiated from the image at ( ) , the distribution is biased toward the open form-side ( the closed form-side ) relative to the initial structures ., On the other hand , when starting from the image at , the distribution is roughly symmetric around the initial structures ., This result suggests that the TSE is located at ., In other words , it was validated that the TSE was successfully captured in the MFEP , and at the same time , the collective variables were good enough to describe the transition ., A close inspection of the structures around the PMF barrier supported our limited committor test and revealed the mechanism of the ligand-induced domain closure ., Figure 5B shows the hydrogen bond ( H-bond ) patterns between the ligand and the protein observed in the average structures at ( before the TSE ) and ( after the TSE ) ., At , Thr31:OG1 ( AMPbd ) forms an H-bond with N7 of AMP adenine , and Gly85:O ( CORE ) forms one with adenine N6 ., These two H-bonds mediate the hinge bending of the AMPbd-CORE domains ., In addition , the H-bond between Gly85:O and adenine N6 helps the enzyme to distinguish between AMP and GMP; GMP lacks an NH2 group in the corresponding position of AMP 34 ., This means that the specificity of AMP-binding operates at an early stage of the ligand binding process ., At , the AMPbd-CORE distance
becomes smaller than that at , which allows the formation of 3 additional H-bonds with the ligand: Gln92:OE1 ( CORE ) and adenine N6 , Lys57:O ( AMPbd ) and the ribose O2 , and Arg88:NH1 ( CORE ) and O1 of AMP ., The resulting rapid enthalpy decrease stabilizes the closed conformation ., Gln92:OE1 is also important in establishing AMP specificity; GMP lacks the counterpart atom , adenine N6 ., The strictly conserved Arg88 residue is known to be crucial for positioning AMP so as to suitably receive a phosphate group from ATP 35 ., With regard to the AMPbd closure , our result suggests that Arg88 ( CORE ) , in conjunction with Lys57 ( AMPbd ) , works to block adenine release from the exit channel and to further compact the AMPbd-CORE domains ., A remaining question is how closure of the LID domain follows that of the AMPbd domain ., Unlike the AMP-binding pocket , the ATP-binding sites , including the P-loop , are surrounded by charged residues , which attract interfacial water molecules ., Upon LID closure , most of these water molecules will be expelled from the enzyme , but some may remain occluded ., To characterize the behaviors of these water molecules , the 3D distribution functions of their oxygen and hydrogen constituents were calculated along the MFEP using the MBAR method ( see Materials and Methods ) ., Figures 6A , 6B , and 6C display the isosurface representations of the 3D distribution functions around the P-loop at , 41 , and 42 , respectively ., The surfaces show the areas in which the atoms are distributed with four times the probability of the bulk phase ., At , the ATP phosphates are not yet bound to the P-loop because an occluded water molecule ( encircled ) is wedged between the phosphate and the P-loop , inhibiting binding of ATP and bending of the side-chain of \u201cinvariant lysine\u201d ( Lys13 ) , a residue that plays a critical role in orienting the phosphates to the proper catalytic position 36 ., This occluded water molecule may
correspond to that found in the crystal structure of apo-AK ( PDBid: 4ake ) ( Figure 6D , encircled ) ., Figures 6B and 6C clearly demonstrate that , upon removal of this water molecule , the ATP phosphates begin binding to the P-loop ., These observations were confirmed by plots of the PMF surface mapped onto a space defined by the LID-CORE distance versus the index of the image ( Figure 6F ) , which shows that the PMF decreases discontinuously upon dehydration followed by LID domain closure ., Interestingly , compared with the crystal structure ( PDBid: 1ake ) ( Figure 6E ) , the position of the ATP moiety is shifted to the AMP side by one monophosphate unit ., This may be a consequence of early binding of the AMP moiety ., At a later stage ( around ) , this mismatch was corrected to form the same binding mode as observed in the crystal structure ., This reformation of the binding mode may be induced by the tight binding of ATP adenine to the LID-CORE domains , and will not occur in the real enzymatic system containing ATP and AMP instead of the bisubstrate analog inhibitor , Ap5A ., In this study , we have applied the on-the-fly string method 5 and the MBAR method 11 to the conformational change of an enzyme , adenylate kinase , and successfully obtained the MFEP ( Figures 2A and 2B ) ., The MFEP yielded a coarse-grained description of the conformational transitions in the domain motion space ( Figures 2C and 2D ) ., At the same time , the atomistic-level characterization of the physical events along the MFEP provided a structural basis for the ligand-binding and the domain motions ( Figures 3\u20136 ) ., The multiscale approach used here is expected to be generally useful for complex biomolecules , since full space sampling can be avoided in an efficient manner ., We have shown that in the TSE of holo-AK , the conformational transition is coupled to highly specific binding of the AMP moiety ., Our results have been validated by unbiased MD simulations ., 
The mechanism of the AMPbd domain closure is consistent with that proposed by the induced-fit model ( Figure S7A ) , and follows a process similar to that of protein kinase A , previously investigated by a coarse-grained model 32:, ( i ) the insertion of the ligand into the binding cleft initially compacts the system;, ( ii ) additional contacts between the ligand and non-hinge region further compact the system ., The closure of the LID domain is more complicated ( Figure S7B ) ., It was shown that apo-AK can exist in a partially closed state , stabilized by the \u201ccracking\u201d of the LID-CORE hinge and the P-loop , even with no ligand present ., The cracking of the hinge region enables rearrangement of molecular interactions for ATP-binding , which induces a smooth bending of the hinge ., Along with the LID closure , ATP is conveyed into the P-loop , with removal of an occluded water molecule ., The closure of the LID domain follows the \u201cpopulation-shift followed by induced-fit\u201d scenario discussed in Ref ., 37 , in which a transient local minimum is shifted toward the closed conformation upon ligand binding ., This two-step process of the LID domain closure is similar to the two-step mechanism reported in recent simulation studies of the Lysine- , Arginine- , Ornithine-binding ( LAO ) protein 38 and the maltose binding protein 39 ., In holo-AK , AMPbd domain closure occurs early ( at ) , while the LID domain closes at later stages ( ) ., An interesting question is whether an alternative pathway is possible in the presence of the real ligands ( ATP and AMP ) instead of Ap5A ., Ap5A artificially restrains the distance between the ATP and AMP moieties ., During the process with real ligands , the dynamics of the LID and AMPbd domains is expected to be less correlated ., Nevertheless , for full closing of the LID domain , we conjecture that the AMPbd domain should be closed first , enabling the interactions on the LID-AMPbd interface to drive the 
dehydration around the P-loop ., This suggests that full recognition of ATP by the LID-CORE domains occurs at a later stage of the conformational transition ., This conjecture may be related to the lower specificity of E . coli AK for ATP compared with AMP 34 ., Nonspecific AMP-binding to the LID domain has previously been suggested to explain the observed AMP-mediated inhibition of E . coli AK at high AMP concentrations 40 ., A missing ingredient in the present study is the quantitative decomposition of the free energy in each event , such as the ligand binding and the interactions on the LID-AMPbd interface ., For enhanced understanding of the conformational change , our methods could be complemented by the alchemical approach 41 ., Varying the chemical compositions of the system during the conformational change would enable us to elucidate the effects of ligand binding , cracking , and dehydration in a more direct manner ., We prepared three systems from the following initial structures:, ( i ) \u201capo-open system\u201d , X-ray crystal structure of the open-form without ligand ( PDBid: 4ake 28 ) ,, ( ii ) \u201cholo-closed system\u201d , crystal structure of closed-form with Ap5A ( PDBid: 1ake 29 ) ,, ( iii ) \u201capo-closed system\u201d , structure created by removing Ap5A from the holo-closed system ., The protonation states of the titratable groups at pH 7 were assigned by PROPKA 42 , implemented in the PDB2PQR program package 43 , 44 ., The apo-open and apo-closed systems yielded identical assignments , which were used also for the holo-closed system ., These systems were solvated in a periodic boundary box of water molecules using the LEaP module of the AMBER Tools ( version 1 . 
4 ) 45 ., A padding distance of 12 \u00c5 from the protein surface was used for the apo-open system ., For the apo-closed and holo-closed systems , a longer padding distance of 20 \u00c5 was used to avoid interactions with periodic images during the closed-to-open transition ., Two Na+ ions were added to neutralize the closed-apo and open-apo systems , while seven Na+ ions were required to neutralize the closed-holo system ., The systems were equilibrated under the NVT condition at 300 K by the following procedure: First , the positions of solvent molecules and hydrogen atoms of the protein ( and Ap5A ) were relaxed by 1 , 000 step minimization with restraint of non-hydrogen atoms ., Under the same restraints , the system was gradually heated up to 300 K over 200 ps , followed by 200 ps MD simulation under the NVT condition at 300 K while gradually decreasing the restraint forces to zero , but keeping the restraints on atoms needed in the string method ., The system was further equilibrated by 200 ps MD simulation under the NPT condition ( 1 atm and 300 K ) , adjusting the density of the water environment to an appropriate level ., The ensemble was finally switched back to NVT , and subjected to additional 200 ps simulation at 300 K , maintaining the restraints ., The equilibration process was conducted using the Sander module of Amber 10 45 , with the AMBER FF03 force field 46 for the protein , and TIP3P for water molecules 47 ., The parameters for Ap5A were generated by the Antechamber module of AMBER Tools ( version 1 . 
4 ) 45 using the AM1-BCC charge model and the general AMBER Force Field ( GAFF ) 48 ., Covalent bonds involving hydrogen atoms were constrained by the SHAKE algorithm 49 with a 2 fs integration time step ., Long-range electrostatic interactions were evaluated by the particle mesh Ewald method 50 with a real-space cutoff of 8 \u00c5 ., The Langevin thermostat ( collision frequency 1 ps\u22121 ) was used for the temperature control ., The production runs , including the targeted MD , the on-the-fly string method , the umbrella sampling , and the committor test , were performed with our class library code for multicopy and multiscale MD simulations ( which will soon be available; T . Terada et al . , unpublished ) , using the same parameter set described above ( unless otherwise noted ) ., Protein structures and the isosurfaces of solvent density were drawn with PyMOL ( Version 1 . 3 , Schr\u00f6dinger , LLC ) ., The calculations were performed using the RIKEN Integrated Cluster of Clusters ( RICC ) facility ., It has been shown that normal modes or principal modes provide a suitable basis set for representing domain motions of proteins 2 , 10 ., In particular , it has been argued that the conformational change in AK can be captured by a set of principal modes of apo-AK 31 , 51 ., In this study , we have defined the collective variables for the on-the-fly string method using the principal components of apo-AK ., The PCA was carried out in the following manner: After the equilibration process , 3 ns MD simulations were executed at 300 K without restraint for both apo-open and apo-closed systems ., The obtained MD snapshots from both systems were combined in a single PCA 52 , removing the external contributions by iteratively superimposing them onto the average coordinates 53 , 54 ., The PCA was then conducted for the Cartesian coordinates of the atoms ., It was found that the first principal mode , representing the largest-amplitude motion , merely represents the difference between 
the open and closed conformations ., The fluctuations in the two structures were expressed in the principal modes of smaller amplitudes ., The cumulative contributions of these modes ( ignoring the first ) are shown in Figure S8 ., As expected , the principal modes represent the collective motions of the LID and AMPbd domains ( Figure S9 ) ., The first 20 principal components ( 82% cumulative contribution , ignoring that of the first ) were adopted as the collective variable of the string method ., These components were sufficient to describe the motions of three domains in AK for which at least degrees of freedom are required in the rigid-body approximation ., The additional eight degrees of freedom were included as a buffer for possible errors in the estimation of the principal modes ., The sum of the canonical correlation coefficients between the two sets of the 20 principal components , one calculated using the samples of the first half ( 0\u20131 . 5 ns ) snapshots and the other using the last half ( 1 . 5\u20133 ns ) snapshots , was 11 . 
8 ( \u223c12 ) , suggesting that the subspace of the domain motions was converg","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Large-scale conformational changes in proteins involve barrier-crossing transitions on the complex free energy surfaces of high-dimensional space ., Such rare events cannot be efficiently captured by conventional molecular dynamics simulations ., Here we show that , by combining the on-the-fly string method and the multi-state Bennett acceptance ratio ( MBAR ) method , the free energy profile of a conformational transition pathway in Escherichia coli adenylate kinase can be characterized in a high-dimensional space ., The minimum free energy paths of the conformational transitions in adenylate kinase were explored by the on-the-fly string method in 20-dimensional space spanned by the 20 largest-amplitude principal modes , and the free energy and various kinds of average physical quantities along the pathways were successfully evaluated by the MBAR method ., The influence of ligand binding on the pathways was characterized in terms of rigid-body motions of the lid-shaped ATP-binding domain ( LID ) and the AMP-binding ( AMPbd ) domains ., It was found that the LID domain was able to partially close without the ligand , while the closure of the AMPbd domain required the ligand binding ., The transition state ensemble of the ligand bound form was identified as those structures characterized by highly specific binding of the ligand to the AMPbd domain , and was validated by unrestrained MD simulations ., It was also found that complete closure of the LID domain required the dehydration of solvents around the P-loop ., These findings suggest that the interplay of the two different types of domain motion is an essential feature in the conformational transition of the enzyme .","summary":"Conformational transitions of proteins have been postulated to play a central role in various protein functions such as 
catalysis , allosteric regulation , and signal transduction ., Among these , the relation between enzymatic catalysis and dynamics has been particularly well-studied ., The target molecule in this study , adenylate kinase from Escherichia coli , exists in an open state which allows binding of its substrates ( ATP and AMP ) , and a closed state in which the catalytic reaction occurs ., In this molecular simulation study , we have elucidated the atomic details of the conformational transition between the open and the closed states ., A combined use of the path search method and the free energy calculation method enabled the transition pathways to be traced in atomic detail on micro- to millisecond time scales ., Our simulations revealed that two ligand molecules , AMP and ATP , play a distinctive role in the transition scenario ., The specific binding of AMP into the hinge region occurs first and creates a bottleneck in the transition ., ATP-binding , which requires the dehydration of an occluded water molecule , is completed at a later stage of the transition .","keywords":"computational chemistry, molecular dynamics, biophysical simulations, chemistry, biology, computational biology","toc":null}
+{"Unnamed: 0":687,"id":"journal.pcbi.1000177","year":2008,"title":"Top-Down Analysis of Temporal Hierarchy in Biochemical Reaction Networks","sections":"The network of interactions that occur between biological components on various spatial and temporal scales confers hierarchical functionality in living cells ., In order to determine how molecular events organize themselves into coherent physiological functions , in silico approaches are needed to analyze how physiological functions emerge from the evolved temporal structure of networks ., Time scale decomposition is a well-established , classical approach to dissecting network dynamics and there is a notable history of analyzing the time scale hierarchy in metabolic networks and matching the events that unfold on
each time scale with a physiological function 1\u20136 ., This approach enables the identification of the independent , characteristic time scales for a dynamic system ., In particular , it has been possible to decompose a cell-scale kinetic model of the human red blood cell in time to show how its key metabolic demands are met through a dynamic structure-function relationship ., The underlying principle is one of aggregation of concentration variables into \u2018pools\u2019 of concentrations that move in tandem on slower time scales 5 , 7 ., The dynamics of biological networks characteristically span large time scales ( 8 to 10 orders of magnitude ) , which contributes to the challenge of analyzing and interpreting related models ., However , there is structure in this dynamic hierarchy of events , particularly in biochemical networks in which the fastest motions generally correspond to the chemical equilibria between metabolites , and the slower motions reflect more physiologically relevant transformations ., Appreciation of this observation can result in elucidating structure from the network and simplifying the interactions ., The reduction in dynamic dimensionality is based on such pooling , and the analysis of pooling is focused on the underlying time scale hierarchy and its determinants ., Understanding the time scale hierarchy and pooling structure of these networks is critical to understanding network behavior and simplifying it down to the core interactions ., Top-down studies of dynamic characteristics of networks begin with fully developed kinetic models that are formal representations of large amounts of data about the chemistry and kinetics of component interactions ., Network properties can be studied by numerical simulations ( that are condition-specific ) or by analysis ( that often yields general model properties ) of the model equations ., Since comprehensive numerical simulation studies become intractable for larger networks and the identification of 
general model properties is needed for the judicious simplification of models , there is a need for analysis-based methods in order to characterize properties of dynamic networks ., In this study we present an in silico analysis method to determine pooling of variables in complex dynamic models of biochemical reaction networks ., This method is used to study metabolic network models and allows us to identify and analyze pool formation resulting from the underlying stoichiometric , thermodynamic , and kinetic properties ., The models studied here exhibit a significant span of time scales ( Table 1 ) ., A hierarchy of pool formation on different time scales was found in all networks based on the calculation of all pairwise \u03d1ij ( k ) in the models ( Figures 1C and 2 ) ., The results can be presented in a symmetric correlation tiled array , where each entry can be used to represent k for a pair of concentrations ., Figure 3 shows the result of such an array for the human red cell ., Since the array is symmetric , we can display both k and the modal coefficient ratio in the pool ( xi\/xj ) for each pair of concentrations ., The time scale ( k ) for the formation of pools and the ratio between a pair of concentrations are functions of three factors: network stoichiometry ( or topology ) , thermodynamics , and kinetic properties of the transformations in the network ., Viewing the dynamics of the network in terms of the modal matrix and the pair-wise concentration correlations on progressing time scales enables one to consider the questions of ( A ) the thermodynamic versus kinetic control of concentrations within the whole network and ( B ) the delineation of kinetic versus topological decoupling in networks ., The method described above was developed , tested , and implemented in Mathematica ( Wolfram Research , Chicago , IL ) version 5 . 
2 ., The models analyzed herein: the model of human red cell metabolism 20\u201322 , human folate metabolism 23 , and yeast glycolysis 24 were implemented in Mathematica ., For each model , a stable steady state was identified by integrating the equations over time until the concentration variables no longer changed ( error <1\u00d710\u221210 , see Table S1 ) ., The Jacobian was then calculated symbolically at that steady state condition ., Temporal decomposition was carried out as described in the Results\/Discussion section ., Briefly for a general case , a similarity transformation 8 of a square matrix , A , is given by A\\u200a=\\u200aD\u039bD\u22121 in which D is invertible ( by definition ) and \u039b is a diagonal matrix ., D is an orthogonal matrix composed of eigenvectors corresponding to the entries of \u039b ( the eigenvalues ) ., When the Jacobian matrix for a first order differential equation with respect to time is decomposed in this manner , the negative reciprocals of the eigenvalues correspond to the characteristic time scales for the corresponding modes 8 ( this is immediately clear upon integration of Equation 4 ) ., All three of the models considered here exhibited at least one pair of complex conjugate eigenvalues at the steady states considered , hence the corresponding complex conjugate modes were combined in order to eliminate oscillating motions ., The calculations for the correlations across progressive time scales were carried out as described in Results\/Discussion ., Once the modal matrix , M\u22121 , was calculated , all pairwise angles between the metabolites ( columns of the modal matrix ) were calculated ( see Equation 5 ) ., The modal matrix is rank ordered from the fastest ( k\\u200a=\\u200a1 ) to the slowest ( k\\u200a=\\u200an ) modes ., The angles between the columns of the modal matrix were recalculated n\u22121 more times , in which an additional row of the modal matrix is zeroed out at each iteration ., For example at the 
third iteration ( k\u200a=\u200a2 ) , the first two rows of the modal matrix have been zeroed out ., A spectrum of correlation cut-off values for pooling was considered , from 10% to 99% ., Cut-off values in the range 85% to 95% resulted in pooling of variables most consistent with the known pooling structures of the human red cell 2 , 5 ., A value of 90% was used as the correlation cutoff for the red cell , folate , and yeast glycolysis models ., The angle between two zero vectors was classified as undefined , and the angle between any zero vector and another vector with at least one non-zero element was defined as 90\u00b0 ., Fragmentation of the pooling structure , in the strictest sense , was identified by any 0 entry ( or <\u223c10\u221213 ) in the final row of the metabolite modal matrix ., Values for the Gibbs standard free energies of formation for the metabolites in the human red cell model were taken from 25 .","headings":"Introduction, Results\/Discussion, Materials and Methods","abstract":"The study of dynamic functions of large-scale biological networks has intensified in recent years ., A critical component in developing an understanding of such dynamics involves the study of their hierarchical organization ., We investigate the temporal hierarchy in biochemical reaction networks focusing on: ( 1 ) the elucidation of the existence of \u201cpools\u201d ( i . e . 
, aggregate variables ) formed from component concentrations and ( 2 ) the determination of their composition and interactions over different time scales ., To date the identification of such pools without prior knowledge of their composition has been a challenge ., A new approach is developed for the algorithmic identification of pool formation using correlations between elements of the modal matrix that correspond to a pair of concentrations and how such correlations form over the hierarchy of time scales ., The analysis elucidates a temporal hierarchy of events that range from chemical equilibration events to the formation of physiologically meaningful pools , culminating in a network-scale ( dynamic ) structure\u2013 ( physiological ) function relationship ., This method is validated on a model of human red blood cell metabolism and further applied to kinetic models of yeast glycolysis and human folate metabolism , enabling the simplification of these models ., The understanding of temporal hierarchy and the formation of dynamic aggregates on different time scales is foundational to the study of network dynamics and has relevance in multiple areas ranging from bacterial strain design and metabolic engineering to the understanding of disease processes in humans .","summary":"Cellular metabolism describes the complex web of biochemical transformations that are necessary to build the structural components , to convert nutrients into \u201cusable energy\u201d by the cell , and to degrade or excrete the by-products ., A critical aspect toward understanding metabolism is the set of dynamic interactions between metabolites , some of which occur very quickly while others occur more slowly ., To develop a \u201csystems\u201d understanding of how networks operate dynamically we need to identify the different processes that occur on different time scales ., When one moves from very fast time scales to slower ones , certain components in the network move in concert and 
pool together ., We develop a method to elucidate the time scale hierarchy of a network and to simplify its structure by identifying these pools ., This is applied to dynamic models of metabolism for the human red blood cell , human folate metabolism , and yeast glycolysis ., It was possible to simplify the structure of these networks into biologically meaningful groups of variables ., Because dynamics play important roles in normal and abnormal function in biology , it is expected that this work will contribute to an area of great relevance for human disease and engineering applications .","keywords":"mathematics, biochemistry\/chemical biology of the cell, biochemistry\/bioinformatics, computational biology\/metabolic networks, biotechnology\/bioengineering, biochemistry\/theory and simulation","toc":null} +{"Unnamed: 0":2247,"id":"journal.pcbi.1000397","year":2009,"title":"Integrating Statistical Predictions and Experimental Verifications for Enhancing Protein-Chemical Interaction Predictions in Virtual Screening","sections":"In the early stages of the drug discovery process , prediction of the binding of a chemical compound to a specific protein can be of great benefit in the identification of lead compounds ( candidates for a new drug ) ., Moreover , the effective screening of potential drug candidates at an early stage generates large cost savings at a later stage of the overall drug discovery process ., In the field of virtual screening for the drug discovery , docking analyses and molecular dynamics simulations have been the principal methods used for elucidating the interactions between proteins and small molecules 1\u20134 ., Fast and accurate statistical prediction methods for binding affinities of any pair of a protein and a ligand have also been proposed for the case where information regarding 3D structures , binding pockets and binding affinities ( e . g . 
pKi ) for a sufficient number of pairs of proteins and chemical compounds is available 5 ., However , the requirement of these programs for 3D structural information is a severe disadvantage , as the availability of these data is extremely limited ., Although the number of structures in PDB 6 is increasing ( from 23 , 642 structures in 2003 to 48 , 091 structures in 2007 ) , not all proteins which have been derived from many genome-sequencing projects are suitable for experimental structure determination ., Hence , the genome-wide application of these methods is in fact not feasible ., For example , among the GPCRs ( G-protein coupled receptors ) , whose modulation underlies the actions of 30% of the best-known commercial drugs 7 , the full structure of only a few mammalian members , including bovine rhodopsin 8 and human beta 2 adrenoreceptor 9 , is known ., To achieve more comprehensive and faster protein-chemical interaction predictions in the post-genome era , which is producing a vast number of protein sequences whose structural information is not available , it is essential to be able to utilize more readily available biological data and more generally applicable methods which do not require 3D structural data 10\u201312 ., In our previous study , we developed a comprehensively applicable statistical method for predicting the interactions between proteins and chemical compounds by exploiting very general biological data , including amino acid sequences , 2-dimensional chemical structures , and mass-spectrometry ( MS ) data 11 ., These statistical approaches provided a novel framework where the input space consists of pairs of proteins and chemical compounds ., These pairs are classified into binding and non-binding pairs , while most chemoinformatics approaches assess only chemical compounds and classify them according to their pharmacological effects ., Our previous study 11 demonstrated that screening target proteins for a chemical compound could be performed on a 
genome-wide scale ., This is due to the fact that our method can be applied to all proteins whose amino acid sequences have been determined even though the 3D structural data is not yet available ., Genome-wide target protein predictions were conducted for MDMA , or ecstasy , which is one of the best known psychoactive drugs , from a pool of 13 , 487 human proteins , and known bindings of MDMA were correctly predicted 11 ., Although the method yielded a relatively high prediction performance ( more than 80% accuracy ) in cross-validation and usefulness in the comprehensive prediction of target proteins for a given chemical compound with tens of thousands of prediction targets 11 , it suffered from the problem of predicting many false positives when comprehensive predictions were conducted ., Although these false positives might include some unknown true positives , they were mainly due to the low quality of the negative data , which is one of the common problems in utilizing statistical classification methods such as Support Vector Machines ( SVMs ) and Artificial Neural Networks ( ANNs ) ., In this paper , we describe two strategies , namely two-layer SVM and reasonable negative data design , which are used for the purpose of reducing the number of false positives and improving the applicability of our method for comprehensive prediction ., In the two-layer SVM , outputs produced by the first-layer SVM model are utilized as inputs to the second-layer SVM ., In order to design negative data which produce fewer false positives , we iteratively constructed SVM models or classification boundaries and selected negative sample candidates according to pre-determined rules ., By using these two strategies , the number of predicted candidates was reduced to around 100 ( Table 1 ) in experiments in which the potential ligands for some druggable proteins ( UniProt ID P10275 ( androgen receptor ) , P11229 ( muscarinic acetylcholine receptor M1 ) and P35367 ( histamine H1 
receptor ) ) are predicted on the basis of more than 100 , 000 compounds in the PubChem Compound database ( http:\/\/pubchem . ncbi . nlm . nih . gov\/ ) ., With the aim of validating the usefulness of our method , our proposed prediction model with fewer false positives was applied to the PubChem Compound database in order to predict the potential ligands for the \u201candrogen receptor\u201d , which is one of the genes responsible for prostate cancer ., We verified some of these predictions by measuring the IC50 values in an in vitro assay ., Biological experiments , conducted to verify the computational predictions based on statistical methods , docking methods or molecular dynamics methods , typically involve success as well as failure ., In addition to fast calculation and wide applicability , one of the merits of using statistical methods that involve training with known data is that results obtained by verification experiments can be efficiently utilized as feedback to produce new and more reliable predictions ., Most previous work on virtual screening has focused on the computational prediction and listing of dozens or hundreds of candidates , followed by their experimental verification ., However , only on rare occasions have these experimental results been utilized for the further improvement of computational predictions and experiments ., Moreover , even without verification experiments , additional data acquired from , for example , relevant literature can be used for enhancing the prediction reliability ., Therefore , we propose a strategy based on the effective combination of computational prediction and experimental verification ., Our second computational prediction utilizing feedback from the first experimental verification successfully discovered novel ligands ( Figures 1 and 2 ) for the androgen receptor ., Our approach suggests the significance of utilizing statistical learning methods and feedback from experimental results in drug lead 
discovery ., In the following section , we first describe the real application of our method involving the computational prediction , the experimental verification and the feedback , and then explain the computational experiments conducted to verify the usefulness of our computational prediction method in comprehensive prediction ., In bioinformatics , statistical approaches extract rules from numerical data corresponding to biological properties ., Here , it is not guaranteed that the extracted rules are biologically valid , and furthermore it is possible to utilize statistical methods to obtain general rules from any kind of numerical data which are meaningless and irrelevant to biological properties ., The biological relevance of our approach can be verified as follows on the basis of supporting evidence which indicates that our method can extract significant rules only if biologically valid and relevant data is given ., First , high prediction performances on diverse datasets might support the validity of our approach ., In several datasets consisting of known pairs of proteins , including nuclear receptors , GPCRs , ion channels and enzymes , and drugs and random protein-drug pairs , our statistical approach with SVM showed high prediction performances ( details are provided in Text S1 , Table S1 and Figure S2 ) ., The fact that more than 0 . 
85 AUC and an accuracy of 80% were obtained for diverse datasets suggests that it is possible to extract some properties that account for interactions between proteins and drugs by statistical approaches ., This possibility can be further supported by the fact that integrating several datasets whose target proteins were not relevant to each other improved the prediction performances with respect to pairs of proteins and chemical compounds which had a specific binding mode ( details are provided in Text S1 and Table S2 ) ., Second , we showed the biological relevance of these high prediction performances by calculating the prediction performances using biologically meaningless artificial datasets as positives ., Several datasets which contained fractions of valid samples found in the DrugBank dataset , and which comprised artificial pseudo-positive samples of protein-chemical pairs produced by shuffling with the same frequency of chemical compounds and proteins as that in the DrugBank dataset , were generated ., Our method was applied to these shuffled artificial datasets ( Figure 3 ) ., Here , if our approach did not depend on the biological properties of the given dataset but only succeeded in classifying given pairs comprising a protein and a chemical compound and random pairs derived from them , the prediction accuracy for each shuffled dataset would be expected not to fluctuate ., As shown in Figure 3 , the prediction accuracy was proportional to the content rate of the biologically valid samples ., Therefore , the classification of our approach was shown to function only when a certain number of biologically valid pairs comprising a protein and a chemical compound are given ., This result suggests that our statistical approach succeeds in extracting rules that are relevant only to the biological binding properties ., It is often observed that although statistical learning approaches achieve very high prediction performances in given datasets , statistical 
prediction models suffer from the problem of generating vast prediction sets including many false positives when applied to a huge dataset , such as the PubChem database ., In our approach , SVM models based on feature vectors directly representing amino acid sequences , chemical structures , and random protein-compound pairs as negatives also produced many predictions and inevitably yielded many false positives ( Table 1A random ) ., Upon the introduction of the two-layer SVM and the negatives designed to overcome this drawback , the prediction precision , or the confidence of positive prediction , was significantly improved in computational experiments based on the DrugBank dataset ( Table 2 ) ., In Table 2 , the external dataset consisted of 170 positives and 2 , 450 negatives that were randomly chosen from 1 , 731 positives and 24 , 500 designed negatives with the mlt rule ( details are provided in Materials and Methods ) and that were excluded in constructing first-layer and second-layer SVM models ., The external dataset contained many more negatives than positives as it simulated the real application of virtual screening with vast databases where only a fraction of chemical compounds in the databases have the effect of interest ., Tables 2A and 2B showed the improvement of precision achieved by introducing the designed negatives and the two-layer SVM , respectively ., Table 2B also indicated that the application of SVM to outputs of the first-layer SVM models was superior to other statistical learning methods 15 and to a naive combination of the first-layer SVM models , and that rational selection of the first-layer SVM models achieved significantly higher precision ( P-value\\u200a=\\u200a0 . 
0081 by t test ) than randomly selected models ( other comparisons are provided in Text S1 , Table S3 and Table S4 ) ., Particularly , the second-layer SVM utilizing the allpos first-layer SVM models achieved higher precision than use of higher thresholds in the other SVM models ( Table 2C ) ., The high precision contributes to the selection of more reliable predictions and thus to the reduction of the number of false positives ., Following these results on given datasets , our approaches were evaluated with respect to comprehensive binding ligand prediction ., For three proteins ( UniProt ID P10275 ( androgen receptor ) , P11299 ( muscarinic acetylcholine receptor M1 ) and P35367 ( histamine H1 receptor ) ) , their binding ligands were predicted from PubChem Compound 0000001\u201300125000 which contains 109 , 841 compounds ( Table 1 ) ., Here , P35367 and P11299 are the two most frequently targeted proteins in the DrugBank dataset , and P10275 is a protein of average occurrence in the DrugBank dataset ., Among the 109 , 841 compounds , 47 , 45 , and 5 known ligands were included for P35367 , P11299 , and P10275 , respectively ., As shown in Tables 1A , 1B and 1C , the use of carefully selected negatives , the introduction of the two-layer SVM , and the integration of these two approaches efficiently reduced the number of predictions and thus the number of false positives ., For example , in comparison to Tables 1A and 1C , the number of candidates discovered by using the max dataset in the allpos two-layer SVM approach was about one fiftieth of the number of chemical compounds predicted by using the random negative dataset in the one-layer SVM ., Furthermore , in comparison to other approaches based solely on the use of chemical compounds ( Tables 1D and 1E ) , our approaches gave a reasonable number of predictions ( other comparisons are described in Text S1 and Tables S5 , S6 , S7 ) ., These results suggest that our prediction models select a reasonable number 
of ligand candidates from all chemical compounds in large databases and encourage the comprehensive binding ligand prediction for the target protein ., The experimental verification of the computational predictions produces feedback data or samples which are not included in the given training datasets ., The efficient utilization of these data can contribute to the fast identification of compounds with the desired properties and can be of advantage to statistical learning approaches ., We compared several strategies for utilizing feedback data as follows ., For three proteins ( UniProt ID P10275 ( androgen receptor ) , P11299 ( muscarinic acetylcholine receptor M1 ) and P35367 ( histamine H1 receptor ) ) , ligand data which were not included in the DrugBank dataset were collected from relevant literature 16\u201318 and public databases , PDSP Ki database 19 and GLIDA 20 , in February 2008 ., Overall , 35 androgen receptor-ligand pairs , 49 muscarinic acetylcholine receptor M1-ligand pairs , and 1 , 060 histamine H1 receptor-ligand pairs were supplemented ., Additional models were constructed by using these supplemental pairs as positives ( details are provided in Text S1 ) ., As shown in Figure 4 , the use of the additional model with a sufficient weighting factor controlled the increase of the predictions with a slight decrease of the recall rate ., The use of large weighting factors results in the relative decrease of the influence of other first-layer SVM models derived from the DrugBank dataset in classification ., However , the low performance of \u201conly additional model:st2\u201d , shown in Figure 4A , where only one first-layer SVM model derived from additional data was used to construct the second-layer SVM model , indicates the need for first-layer SVM models derived from the DrugBank dataset as well as combinations of these first-layer SVM models with an additional first-layer SVM model ., With this efficient strategy for utilizing feedback data , 
computational prediction and experimental verification improve each other to enable faster search toward the identification of useful small molecules ., We proposed a comprehensively applicable computational method for predicting the interactions between proteins and chemical compounds , in which the number of false positives was reduced in comparison to other methods ., Furthermore , we proposed a strategy for the efficient utilization of experimental feedback and the integration of computational prediction and experimental verification ., The application of our method to the androgen receptor resulted in 67% ( 4\/6 ) prediction precision according to in vitro experimental verification in the first computational prediction and 60% ( 3\/5 ) in the second prediction , which included the feedback of the first experimental verification ., However , these seemingly low precision values do not reflect the true statistical significance of the method ., This 60\u201370% precision can also be evaluated by using the following P-value . 
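A minimal sketch of how such a P-value can be computed , assuming a hypergeometric tail form P ( X >= p ) , which is consistent with the figures quoted in the text ( the symbol names N , M , t and p follow the definitions below ) :

```python
from math import comb

def pvalue(p, t, M, N):
    # P(X >= p): probability of at least p true ligands among t tested
    # compounds drawn at random from N compounds, M of which bind.
    # Hypergeometric tail -- an assumed form, reconstructed to match the
    # quoted figures, not taken verbatim from the paper's equation.
    total = comb(N, t)
    return sum(comb(M, k) * comb(N - M, t - k) for k in range(p, t + 1)) / total

N = 19171127                              # compounds in PubChem Compound
M = round(N * (456 / 3000) * (7 / 964))   # ~21160 potential binders
first = pvalue(4, 6, M, N)                # first round: 4 of 6 tested bound
second = pvalue(3, 5, M, N)               # second round: 3 of 5 tested bound
```

With these inputs , M comes out at 21160 and the two tail probabilities at roughly 2 . 2\u00d710\u221211 and 1 . 3\u00d710\u22128 , matching the order of magnitude of the values quoted for the first and second predictions .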
Here , N is the number of prediction targets , M is the number of ligands potentially binding to the target proteins , t is the number of tested compounds , and p is the number of true positives ., N\\u200a=\\u200a19171127 is the number of chemical compounds in the PubChem Compound database , and M\\u200a=\\u200a19171127\u00d7 ( 456\/3000 ) \u00d7 ( 7\/964 ) \u225221160 is based on the optimistic assumption that all compounds can be regarded as potential drugs for some target protein , on the estimate that 3 , 000 druggable proteins exist 21 , and on the distribution of target proteins and drugs in the DrugBank dataset , consisting of 456 target proteins and 964 drugs , including 7 known ligands for the human androgen receptor ., P-values of 2 . 21\u00d710\u221211 and 1 . 34\u00d710\u22128 are obtained for the prediction precision of the first and the second computational prediction , respectively ., These extremely small P-values prove the significance of the virtual screening and its precision in the drug discovery process ., These prediction performances are as good as or better than several previous virtual screening studies based mainly on docking analyses 22\u201324 ., For example , at a threshold of 100 \u00b5M , 7% precision ( 3\/39 ) for Mycobacterium tuberculosis adenosine 5\u2032-phosphosulfate reductase 22 , 71% precision ( 22\/31 ) for Staphylococcus aureus methionyl-tRNA synthetase 23 and 8% precision ( 16\/192 ) for human DNA ligase I 24 were obtained , respectively ., In addition , 0 . 566 AUC was achieved in the docking analysis using AutoDock 3 ( Figure 5 ) for the 17 chemical compounds ( 12 chemical compounds verified in the first experimental verification , with the exception of 6 known drugs , and 5 chemical compounds verified in the second experimental verification ) ., In contrast , 0 . 
681 AUC was obtained with our method ., Here , in the calculation of AUC , the threshold level of IC50\\u200a=\\u200a100 \u00b5M for experimental verification was used to define a label ( binding or non-binding ) for each chemical compound , and the docking energy or the predicted probability was regarded as a value for each molecule ., Note that the docking analysis with AutoDock was not applied to the 19 , 171 , 127 compounds in the PubChem Compound database for the screening purpose , but was applied only to 17 compounds , which were the results of virtual screening by our method ., In terms of computational time , for binding prediction of one pair of a protein and a chemical compound , using one Opteron 275 2 . 2 GHz CPU , AutoDock took approximately 100 minutes on average with 100 genetic algorithm ( GA ) runs , while our method required less than 0 . 3 seconds ., These computational time comparisons indicate that our method can perform a virtual screening of more than 19 million chemical compounds from the PubChem Compound database for any protein on a genome-wide scale , and this immense screening task would be infeasible to accomplish with any of the existing docking methods ., Therefore , our statistical approach can serve as a fast and reasonably accurate first virtual screening tool for the drug discovery process ., It can be followed by the application of more time-consuming but more informative approaches , such as docking analysis and molecular dynamics analysis , which can provide information regarding the binding affinities and the molecular binding mechanisms for the outputs of the first screening ., From another perspective , the re-evaluation of statistical prediction approaches by using 23 chemical compounds experimentally verified in this study showed that our proposed methods , which utilized information from both protein sequences and chemical structures , were superior to a conventional LBVS ( Ligand Based Virtual Screening ) method where only structures of specific 
chemical compounds were considered ( Figure 6 ) ., As shown in Figure 6A , our proposed methods ( \u201cone-layer SVM\u201d , \u201ctwo-layer SVM-subpos\u201d and \u201ctwo-layer SVM-allpos\u201d ) achieved a higher recall rate at ranks higher than 500 compared to a conventional Ligand Based Virtual Screening method ( \u201conly compound SVM\u201d in Figure 6A ) ., The fact that experimentally verified chemical compounds were identified at higher ranks in the pool by our proposed prediction models suggests that our proposed models were highly efficient as a screening method ., Figure 6B also shows that our proposed methods were more successful than the LBVS method at discriminating between the 15 experimentally verified binding and 8 non-binding ligands ., These comparisons suggest that our proposed method utilizing information of protein sequences as well as chemical structures can be regarded as a more useful substitute for conventional ligand-based virtual screening methods utilizing only chemical structures ., Furthermore , the fact that the second computational prediction , or the use of feedback data , contributed to the discovery of novel ligands ( Figure 2B\u2013D ) supports the utilization of statistical learning methods in virtual screening ., Regarding the computational prediction method used in this paper , we made the method available to the public as a web-based service named COPICAT ( COmprehensive Predictor of Interactions between Chemical compounds And Target proteins; http:\/\/copicat . dna . bio . keio . ac . 
jp\/ ) ., The DrugBank dataset was constructed from Approved DrugCards data , which were downloaded in February , 2007 from the DrugBank database 25 ., These data consist of 964 approved drugs and their 456 associated target proteins , constituting 1 , 731 interacting pairs or positives ., Given Np positive and Nn negative samples in known data and Mp positives and Mn negatives in additional or feedback data , a straightforward strategy for the integration of additional data into statistical training , such as SVM , is to train a statistical model based on a dataset consisting of Np+Mp positives and Nn+Mn negatives ., When the two-layer SVM strategy is applied , another strategy of feedback and supplement involves the utilization of an additional model based on additional data ., In this strategy , the second-layer SVM is trained on the basis of Np+Mp positives and Nn+Mn negatives , and a sample si in the second layer is represented by the outputs of the first-layer SVM models together with the weighted output of the additional model ., Here , oadd ( si ) is an output of the additional model trained on the basis of Mp positives and Mn negatives , oj ( si ) is an output of the first-layer SVM model j , and w is a weighting factor ., AutoDock 4 3 was applied to the human androgen receptor ligand-binding domain ( PDB code; 2AM9 31 ) and tested compounds whose 3D structure was generated by Obgen in the Open Babel package ver . 2 . 2 . 
0 32 or CORINA 33 ., The conditions of AutoDock followed Jenwitheesuk and Samudrala , 2005 34 ., ARG752 of 2AM9 , which was considered important for the binding of androgens by the human androgen receptor 31 , was set to a flexible residue in AutoDock .","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Predictions of interactions between target proteins and potential leads are of great benefit in the drug discovery process ., We present a comprehensively applicable statistical prediction method for interactions between any proteins and chemical compounds , which requires only protein sequence data and chemical structure data and utilizes the statistical learning method of support vector machines ., In order to realize reasonable comprehensive predictions which can involve many false positives , we propose two approaches for reduction of false positives:, ( i ) efficient use of multiple statistical prediction models in the framework of two-layer SVM and, ( ii ) reasonable design of the negative data to construct statistical prediction models ., In two-layer SVM , outputs produced by the first-layer SVM models , which are constructed with different negative samples and reflect different aspects of classifications , are utilized as inputs to the second-layer SVM ., In order to design negative data which produce fewer false positive predictions , we iteratively construct SVM models or classification boundaries from positive and tentative negative samples and select additional negative sample candidates according to pre-determined rules ., Moreover , in order to fully utilize the advantages of statistical learning methods , we propose a strategy to effectively feedback experimental results to computational predictions with consideration of biological effects of interest ., We show the usefulness of our approach in predicting potential ligands binding to human androgen receptors from more than 19 million chemical compounds and verifying 
these predictions by in vitro binding ., Moreover , we utilize this experimental validation as feedback to enhance subsequent computational predictions , and experimentally validate these predictions again ., This efficient procedure of the iteration of the in silico prediction and in vitro or in vivo experimental verifications with the sufficient feedback enabled us to identify novel ligand candidates which were distant from known ligands in the chemical space .","summary":"This work describes a statistical method that identifies chemical compounds binding to a target protein given the sequence of the target or distinguishes proteins to which a small molecule binds given the chemical structure of the molecule ., As our method can be utilized for virtual screening that seeks for lead compounds in drug discovery , we showed the usefulness of our method in its application to the comprehensive prediction of ligands binding to human androgen receptors and in vitro experimental verification of its predictions ., In contrast to most previous virtual screening studies which predict chemical compounds of interest mainly with 3D structure-based methods and experimentally verify them , we proposed a strategy to effectively feedback experimental results for subsequent predictions and applied the strategy to the second predictions followed by the second experimental verification ., This feedback strategy makes full use of statistical learning methods and , in practical terms , gave a ligand candidate of interest that structurally differs from known drugs ., We hope that this paper will encourage reevaluation of statistical learning methods in virtual screening and that the utilization of statistical methods with efficient feedback strategies will contribute to the acceleration of drug discovery .","keywords":"chemical biology, mathematics\/statistics, pharmacology\/drug development, computational biology","toc":null} +{"Unnamed: 
0":1741,"id":"journal.pcbi.1004933","year":2016,"title":"Structural Determinants of Misfolding in Multidomain Proteins","sections":"Protein misfolding and aggregation are well-known for their association with amyloidosis and other diseases 1 , 2 ., Proteins with two or more domains are abundant in higher organisms , accounting for up to 70% of all eukaryotic proteins , and domain-repeat proteins in particular occupy a fraction up to 20% of the proteomes in multicellular organisms 3 , 4 , therefore their folding is of considerable relevance 5 ., Since there is often some sequence similarity between domains with the same structure , it is easily possible to imagine that multidomain proteins containing repeats of domains with the same fold might be susceptible to misfolding ., Indeed , misfolding of multidomain proteins has been observed in many protein families 6 ., Single molecule techniques have been particularly powerful for studying folding\/misfolding of such proteins , in particular F\u00f6rster resonance energy transfer ( FRET ) and atomic force microscopy ( AFM ) ., For instance , recent studies using single-molecule FRET , in conjunction with coarse-grained simulations , have revealed the presence of domain-swapped misfolded states in tandem repeats of the immunoglobulin-like domain I27 from the muscle protein Titin 7 ( an example is shown in Fig 1e ) ., Domain-swapping 2 involves the exchange of secondary structure elements between two protein domains with the same structure ., Remarkably , these misfolded states are stable for days , much longer than the unfolding time of a single Titin domain ., The domain-swapped misfolds identified in the Titin I27 domains are also consistent with earlier observations of misfolding in the same protein by AFM , although not given a structural interpretation at the time 8 ., In addition , AFM experiments have revealed what appears to be a similar type of misfolding in polyproteins consisting of eight tandem repeats of the 
same fibronectin type III domain from tenascin ( TNfn3 ) 9 , as well as in native constructs of tenascin 8 , and between the N-terminal domains of human \u03b3D-crystallin when linked in a synthetic oligomer 10 ., In addition to domain-swapped misfolding , an alternative type of misfolded state is conceivable for polyproteins in which the sequences of adjacent domains are similar , namely the formation of amyloid-like species with parallel \u03b2-sheets ., Theoretical work in fact made the prediction that such species would be formed in tandem repeats of titin domains 11 ., Recently , time-resolved single-molecule FRET experiments on tandem domains of I27 have revealed a surprising number of intermediates formed at short times , which include an unexpected species that appears to be consistent with the previously suggested amyloid-like state 12 ., However , since only the domain-swapped species persisted till long times , and therefore are the most likely to be problematic in cells , we focus on their formation in this work ., A simplified illustration of the mechanism for folding and misfolding , based on both coarse-grained simulations as well as single-molecule and ensemble kinetics 7 , 12 , is shown in Fig 1 , using the Titin I27 domain as an example ., Starting from the completely unfolded state in Fig 1a , correct folding would proceed via an intermediate in which either one of the domains is folded ( Fig 1b ) , and finally to the fully folded state , Fig 1c ., The domain-swapped misfolded state , an example of which is shown in Fig 1e , consists of two native-like folds which are in fact assembled by swapping of sequence elements from the N- and C-terminal portions of the protein ., The final structure in Fig 1e comprises what we shall refer to as a \u201ccentral domain\u201d formed by the central regions of the sequence ( on the left in Fig 1e ) and a \u201cterminal domain\u201d formed from the N- and C-termini ( on the right ) ., The intermediate structure 
in Fig 1d , suggested by coarse-grained simulations 7 , and supported by experiment 12 , has only the central domain folded ., This central domain can itself be viewed as a circular permutant 13 of the original native Titin I27 structure , as discussed further below ., While domain-swapped misfolding of tandem repeats has been identified in a number of proteins to date , there are several other proteins for which it does not occur to a detectable level ., For instance , extensive sampling of repeated unfolding and folding of a polyprotein of Protein G ( GB1 ) by AFM revealed no indication of misfolded states , in contrast to Titin 14 ., Similarly , early AFM studies on polyUbiquitin also did not suggest misfolded intermediates in constant force unfolding 15\u201320 , and lock-in AFM studies of refolding 21 were fully consistent with a two-state folding model , without misfolding ., More recent AFM 22 studies have suggested the formation of partially folded or misfolded species , which have been attributed to partial domain swapping in simulations 23 , but these are qualitatively different from the fully domain-swapped species considered here ., Therefore , it is interesting to ask the general questions: when included in tandem repeats , what types of protein structures are most likely to form domain-swapped misfolded states , and by what mechanism ?, In order to investigate the misfolding propensity of different types of domains , we have chosen seven domains , based on, ( i ) the superfamilies with the largest abundance of repeats in the human genome 24 ,, ( ii ) proteins for which some experimental evidence for misfolding ( or lack thereof ) is available and, ( iii ) proteins for which data on folding kinetics and stability is available for their circular permutants ( only some of the proteins meet criterion, ( iii ) ) ., The circular permutant data are relevant because the misfolding intermediates suggested by simulations and experiment 7 , 12 can be viewed as 
circular permutants of the original structure ( Fig 1d ) ., Each of the chosen proteins is illustrated in Fig 2 and described briefly in Materials and Methods ., We study the folding and misfolding of the seven protein domains , using the same structure-based model as that successfully employed to treat Titin I27 7 , 12 ., Molecular simulations are carried out to characterize the possible structural topologies of the misfolded intermediates and the mechanism of their formation ., Our model is consistent with available experimental information for the systems studied , in terms of which proteins misfold and what misfolded structures they tend to form ., We then investigated what factors influence the propensity of multidomain proteins to misfold ., The simplest rationalization of the propensity of a multidomain protein for domain-swapped misfolding would seem to be offered by parameterizing a kinetic model based on the scheme shown in Fig 1 , particularly for the steps Fig 1a\u20131b versus 1a\u20131d ., We hypothesized that the propensity to misfold might be characterized in terms of the folding kinetics of the isolated circular permutants representing the domain-swapped intermediates in Fig 1d ., However , contrary to this expectation , we found that the stability of such isolated domains , rather than their folding rate , is the main determinant of misfolding propensity ., Although superficially this appears to differ from previously suggested kinetic models 12 , it is completely consistent , with a specific interpretation of the rates ., Building on this understanding , we developed a very simplified model which can be used to predict which domains are likely to be susceptible to domain-swapped misfolding ., Finally , we have investigated the effect of the composition and length of the linker between the tandem repeats on the misfolding propensity ., Tandem Src homology 3 ( SH3 ) domains ( Fig 2a ) are widely found in signal transduction proteins and they share 
functions such as mediating protein-protein interactions and regulating ligand binding 25 ., Kinetic and thermodynamic properties of native and all the possible circular permutations of the SH3 single domain have been well characterized 26 ., Two different circular permutant constructs of the sequence are known to fold to a circularly permuted native conformation ( PDB accession codes are 1TUC and 1TUD ) that is similar to the wild-type ( WT ) protein 26 ., With a similar function to the SH3 domains , Src homology 2 ( SH2 ) domains ( Fig 2b ) are also involved in the mediation of intra- and intermolecular interactions that are important in signal transduction 27 ., The SH2 domains are well-known from crystallographic analysis to form metastable domain-swapped dimers 28 , 29 ., Fibronectin type III ( fn3 ) domains ( Fig 2c ) are highly abundant in multidomain proteins , and often involved in cell adhesion ., We have chosen to study the third fn3 domain of human tenascin ( TNfn3 ) , which has been used as a model system to study the mechanical properties of this family ., Single-molecule AFM experiments revealed misfolding in a small fraction ( \u223c 4% ) of domains in native tenascin ( i . e . the full tenascin protein containing both TNfn3 and other fn3 domains ) 8 , with a similar signature to that observed for I27 ., Subsequently , misfolding events have been identified in a polyprotein consisting of repeats of TNfn3 only 9 ., Interestingly , a structure has been determined for a domain-swapped dimer of TNfn3 involving a small change of the loop between the second and third strand 30 ., PDZ domains ( Fig 2d ) are one of the most common modular protein-interaction domains 31 , recognizing specific-sequence motifs that occur at the C-terminus of target proteins or internal motifs that mimic the C-terminus structurally 32 ., Naturally occurring circularly permuted PDZ domains have been well studied 33\u201335 , and domain-swapped dimers of PDZ domains have been characterized by NMR spectroscopy 36 , 37 ., Titin ( Fig 2e ) is a giant protein spanning the entire muscle sarcomere 38 ., The majority of titin\u2019s I-band region functions as a molecular spring which maintains the structural arrangement and extensibility of muscle filaments 39 ., The misfolding and aggregation properties of selected tandem Ig-like domains from the I-band of human Titin ( I27 , I28 and I32 ) have been extensively studied by FRET experiments 7 , 24 ., In the earlier work on tandem repeats of I27 domains , around 2% misfolding events were reported in repeated stretch-release cycles in AFM experiments 8 ., A slightly larger fraction ( \u223c 6% ) of misfolded species was identified in single-molecule FRET experiments and rationalized in terms of domain swapped intermediates , captured by coarse-grained simulations 7 , 11 ., In contrast with the above misfolding-prone systems , certain polyprotein chains have been shown to be resistant to misfolding , according to pulling experiments ., For instance , little evidence for misfolding was identified in a polyprotein of GB1 14 ( Fig 2g ) , with more than 99 . 
8% of the chains ( GB1 ) 8 folding correctly in repetitive stretching\u2013relaxation cycles 14 ., Lastly , we consider polyUbiquitin ( Fig 2f ) , for which there is conflicting experimental evidence on misfolding ., Initial force microscopy studies showed only the formation of native folds 15 , with no misfolding ., Later work suggested the formation of collapsed intermediates 22 , however the signature change in molecular extension of these was different from that expected for fully domain-swapped misfolds ., A separate study using a lock-in AFM 21 found Ubiquitin to conform closely to expectations for a two-state folder , without evidence of misfolding ., For this protein , there is a strong imperative to avoid misfolding , since Ubiquitin is initially expressed as a tandem polyUbiquitin chain in which adjacent domains have 100% sequence identity , yet this molecule is critical for maintaining cellular homeostasis 40 ., A coarse grained structure-based ( Go-like ) model similar to the earlier work is employed for the study here 7 , 41 ., Each residue is represented by one bead , native interactions are attractive and the relative contact energies are set according to the Miyazawa\u2013Jernigan matrix ., The model is based on that described by Karanicolas and Brooks 41 , but with native-like interactions allowed to occur between domains as well as within the same domain , as described below 7 ., All the simulations are run under a modified version of GROMACS 42 ., For the seven species we studied in this work , the native structures of single domains that were used to construct the models for SH3 , SH2 , PDZ , TNfn3 , Titin I27 , GB1 and Ubiquitin correspond to PDB entries 1SHG 43 , 1TZE 44 , 2VWR , 1TEN 45 , 1TIT 46 , 1GB1 47 and 1UBQ 48 respectively ., For the single domains of SH3 ( 1SHG ) , TNfn3 ( 1TEN ) and GB1 ( 1GB1 ) , additional linker sequences of Asp-Glu-Thr-Gly , Gly-Leu and Arg-Ser , respectively , are added between the two domains to mimic the 
constructs used in the corresponding experiments 9 , 14 , 26 ., Construction of the Titin I27 model was described in our previous work 7 ., In order to allow for domain-swapped misfolding , the native contact potentials within a single domain are also allowed to occur between corresponding residues in different domains , with equal strength ., Specifically , for each single repeat of the dimeric tandem , which has L amino acids , any pair of residues ( with indices i and j ) that form a native interaction within a single domain has the same interaction energy for the intradomain interaction ( Ei , j ( r ) ) as for the interdomain interaction between the residue ( i or j ) and the corresponding residue ( j + L or i + L ) in the adjacent domain , i . e . Ei , j ( r ) = Ei+L , j ( r ) = Ei , j+L ( r ) = Ei+L , j+L ( r ) ., To investigate the folding kinetics of the dimeric tandem , a total of 1024 independent simulations are performed on each system for a duration of 12 microseconds each ., Different misfolding propensities are observed at the end of the simulations ., With the exception of Ubiquitin and GB1 , the vast majority of the simulations reached stable native states with separately folded domains , while a small fraction of simulations form stable domain-swapped misfolded states ., All the simulations are started from a fully extended structure , and run using Langevin dynamics with a friction of 0 .
1 ps\u22121 and a time step of 10 fs ., We note that all the generated domain-swapped misfolded structures , containing the central and terminal domains , can be monitored by a reaction coordinate based on circularly permuted native-like contact sets ., Each circularly permuted misfold can be characterized according to the loop position K in sequence where the native domain would be cut to form the circular permutant ( K = 0 corresponds to the native fold ) ., If a native contact Cnative = ( i , j ) exists between residues i and j in the native fold , the corresponding native-like contacts for the central ( Cin ( K ) ) and terminal ( Cout ( K ) ) domains of the domain-swapped conformation are generated as, Cin ( K ) = ( i + \u0398 ( K \u2212 i ) L , j + \u0398 ( K \u2212 j ) L ) , Cout ( K ) = ( i + \u0398 ( i \u2212 K ) L , j + \u0398 ( j \u2212 K ) L ) , where \u0398 ( x ) is the Heaviside step function and L is the length of each single domain ( plus interdomain linker ) ., Sin , K is the set of native-like contacts Cin of the central domain , and Sout , K is the set of all the native-like contacts Cout of the terminal domain ., Sin , K and Sout , K can be used to define a contact-based reaction coordinate to analyze the kinetics of the dimeric tandem misfolding ., The corresponding fraction of contacts for the central domain can be calculated as, QK ( \u03c7 ) = ( 1 \/ N ) \u2211 ( i , j ) \u2208 Sin , K 1 \/ ( 1 + e^{\u03b2 ( rij ( \u03c7 ) \u2212 \u03bb rij^0 )} ) , ( 1 ), where N is the total number of domain-swapped contacts in SK = Sin , K \u222a Sout , K ( equal to the total number of native contacts ) , rij ( \u03c7 ) is the distance between residues i and j in the protein configuration \u03c7 ., rij^0 is the corresponding distance in the native structure for native-like contacts , \u03b2 = 50 nm\u22121 and \u03bb = 1 .
2 is used to account for fluctuations about the native contact distance ., The equilibrium properties of a single domain of each system are obtained from umbrella sampling along the native contacts Q as the reaction coordinate ., The obtained melting temperature of each system is listed in Table A in S1 Text ., A temperature at which the folding barrier \u0394Gf is approximately 2 . 5 kBT is chosen for the 2-domain tandem simulations for reasons described below ., The stability \u0394Gs is calculated as, \u0394Gs = \u2212 kBT ln \u222b_{Q\u2021}^{1} e^{\u2212 F ( Q ) \/ kBT} dQ \/ \u222b_{0}^{Q\u2021} e^{\u2212 F ( Q ) \/ kBT} dQ , ( 2 ), where kB and T are the Boltzmann constant and temperature respectively ., Q\u2021 is the position of the barrier top in F ( Q ) , separating the folded and unfolded states , and F ( Q ) represents the free energy profile on Q ., Barrier heights \u0394Gf were simply defined as \u0394Gf = F ( Q\u2021 ) \u2212 F ( Qu ) , where Qu is the position of the unfolded state free energy minimum on Q .
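The stability and barrier definitions above can be sketched numerically: given a discretized free-energy profile F(Q), the stability of Eq (2) follows from the ratio of Boltzmann-weighted integrals on either side of the barrier top Q‡, and the barrier height from the difference between the barrier top and the unfolded minimum. A minimal sketch under the assumption of a uniform Q grid; `stability_and_barrier` is a hypothetical helper, not the authors' code:

```python
import numpy as np

def stability_and_barrier(Q, F, kBT=1.0):
    """Hypothetical helper (not from the paper): evaluate Eq (2) for the
    stability DeltaG_s and the barrier height DeltaG_f on a discretized
    free-energy profile F(Q), with Q on a uniform grid over [0, 1]."""
    Q, F = np.asarray(Q), np.asarray(F)
    w = np.exp(-F / kBT)                   # Boltzmann weight e^(-F(Q)/kBT)
    i_dag = int(np.argmax(F))              # barrier top Q_dagger separating U and F
    dq = Q[1] - Q[0]                       # uniform grid spacing assumed
    Z_fold = w[i_dag:].sum() * dq          # integral from Q_dagger to 1 (folded side)
    Z_unf = w[:i_dag + 1].sum() * dq       # integral from 0 to Q_dagger (unfolded side)
    dG_s = -kBT * np.log(Z_fold / Z_unf)   # Eq (2)
    dG_f = F[i_dag] - F[:i_dag + 1].min()  # F(Q_dagger) - F(Q_u)
    return dG_s, dG_f
```

For a symmetric double-well profile the two integrals are equal, so the sketch returns a stability near zero and a barrier equal to the well depth, as expected.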
We calculated the relative contact order 49 , RCOK , of the different circular permutants K via, RCOK = ( 1 \/ ( L \u00b7 N ) ) \u2211 ( i , j ) \u2208 Sin , K | i \u2212 j | , ( 3 ), where L is the length of the single domain , and N is the total number of native-like contacts ( the same for different K ) ., Sin , K is the contact set of the circular permutant corresponding to the \u201ccentral domain\u201d of the misfolded state ., Note that the contact order calculation here uses residue-based native contacts ( the same ones defined as attractive in the G\u014d model ) , instead of all-atom native contacts ., An Ising-like model was built based on the native contact map , in which each residue is considered either folded or unfolded , so that any individual configuration can be specified as a binary sequence , in a similar spirit to earlier work 50\u201352 ., Interactions between residues separated by more than two residues in the sequence are considered ., To simplify the analysis , we also consider that native structure grows only in a single stretch of contiguous native residues ( native segment ) , which means that configurations such as \u2026UFFFUUUUU\u2026 or \u2026UUUUUFFFU\u2026 are allowed , whereas \u2026UFFFUUUFFFU\u2026 is not ( \u201csingle sequence approximation\u201d ) 50 ., Each residue which becomes native incurs an entropy penalty \u0394S , while all possible native contacts involving residues within the native segment are considered to be formed , each with a favourable energy of contact formation \u03f5 ., The partition function for such a model can be enumerated as:, Z = \u2211 \u03c7 exp ( \u2212 G ( \u03c7 ) \/ kBT ) = \u2211 \u03c7 exp ( \u2212 ( n ( \u03c7 ) \u03f5 \u2212 Nf ( \u03c7 ) T\u0394s ) \/ kBT ) , where kB and T are the Boltzmann constant and temperature ., G ( \u03c7 ) is the free energy determined by the number of native contacts n ( \u03c7 ) in the configuration \u03c7 , and the number of native residues , Nf ( \u03c7 ) ., The distribution
of the microstates ( \u03c7 ) can be efficiently generated by the Metropolis-Hastings method with Monte Carlo simulation ., In each iteration , the state of one randomly chosen residue ( among the residues at the two ends of the native fragment and their two neighbouring residues ) is perturbed by a flip , from native to unfolded or from unfolded to native , taking the system from a microstate \u03c71 with energy E1 to a microstate \u03c72 with energy E2 ., The new microstate is subject to an accept\/reject step with acceptance probability, Pacc = min ( 1 , exp ( \u2212 ( E2 \u2212 E1 ) \/ kBT ) ) ., ( 4 ) To mimic the folding stability difference between native and circular permutant folds , a penalty energy term Ep has been added whenever the native fragment crosses the midpoint of the sequence from either side ( the function \u03b8 ( \u03c7 ) above is 1 if this is true , otherwise zero ) ., That situation corresponds to formation of a domain-swapped structure , in which there is additional strain energy from linking the termini , represented by Ep ., We only use the Ising model here to investigate formation of the first domain ( either native or circular permutant ) , by rejecting any proposed Monte Carlo step that would make the native segment longer than the length of a single domain , L ., In order to characterize the potential misfolding properties of each type of domain , we have used a G\u014d-type energy function based on the native structure ., Such models have successfully captured many aspects of protein folding , including \u03d5-values 53 , 54 , dimerization mechanisms 55 , 56 , domain swapping 57\u201360 , and the response of proteins to a pulling force 61 , 62 ., More specifically , a G\u014d-type model was used in conjunction with single-molecule and ensemble FRET data to characterize the misfolded states and misfolding mechanism of engineered tandem repeats of Titin I27 7 , 12 ., We have therefore adopted the same model ., Although it is based on native contacts ,
it can describe the type of misfolding we consider here , which is also based on native-like structure ., Note that this model effectively assumes 100% sequence identity between adjacent domains , the scenario that would most likely lead to domain-swap formation ., It is nonetheless a relevant limit for this study , as there are examples in our data set of adjacent domains having identical sequences which do misfold ( e . g . titin I27 ) and those which do not ( e . g . protein G ) ., For each of the folds shown in Fig 2 , we ran a large number of simulations , starting from a fully extended , unfolded chain , for sufficiently long ( 12 \u03bcs each ) such that the vast majority of them reached either the correctly folded tandem dimer , or a domain-swapped misfolded state similar to that shown in Fig 1e for titin ., In fact , for each protein , a number of different misfolded topologies are possible , illustrated for the Src SH3 domain in Fig 3 ., Each of these domains , shown in conventional three-dimensional cartoon representation in the right column of Fig 3 and in a simplified two-dimensional topology map in the left column , consists of two native-like folded ( or misfolded ) domains ., For convenience , we call the domain formed from the central portion of the sequence the \u201ccentral domain\u201d and that from the terminal portions the \u201cterminal domain\u201d ., We have chosen to characterize each topology in terms of the position , K , in sequence after which the central domain begins ., Thus , the native fold has K = 0 , and all the misfolded states have K > 0 . 
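The bookkeeping above, mapping each native contact (i, j) to its frame-shifted counterparts in the central and terminal domains for a given cut position K, can be sketched as follows. This is a hypothetical helper, not the authors' code, and the convention Θ(0) = 0 is an assumption made so that K = 0 reproduces the native arrangement:

```python
def theta(x):
    # Heaviside step function: 1 for x > 0, else 0 (Theta(0) = 0 assumed here)
    return 1 if x > 0 else 0

def swapped_contact_sets(native_contacts, K, L):
    """Hypothetical helper: build S_in,K and S_out,K, the native-like contact
    sets of the central and terminal domains of a domain-swapped tandem dimer,
    from the single-domain native contacts. L = single-domain length (+ linker)."""
    S_in, S_out = set(), set()
    for i, j in native_contacts:
        # central domain: residues preceding the cut K are drawn from the second repeat (+L)
        S_in.add((i + theta(K - i) * L, j + theta(K - j) * L))
        # terminal domain: residues after the cut K are drawn from the second repeat (+L)
        S_out.add((i + theta(i - K) * L, j + theta(j - K) * L))
    return S_in, S_out
```

With K = 0 the "central" set is just the first repeat's native contacts and the "terminal" set the second repeat's, consistent with K = 0 denoting the native fold; any K > 0 yields the circularly permuted contact sets used by the QK coordinate.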
Typically , because of the nature of domain swapping , K must fall within a loop ., Of course , there is a range of residues within the loop in question that could be identified as K and we have merely chosen a single K close to the centre of the loop ., This position , and the central domain , are indicated for the Src SH3 misfolded structures in Fig 3 ., We note that each of these central domains can also be considered as a circular permutant of the native fold , in which the ends of the protein have been joined and the chain has been cut at position K . With this nomenclature in hand , we can more easily describe the outcome of the folding simulations for the seven domain types considered in terms of the fraction of the final frames that belonged to the native fold , versus each of the possible misfolded states ., These final populations are shown in Table 1 ., We see that for five of the domains ( SH3 , SH2 , PDZ , TNfn3 , Titin I27 ) , misfolded structures are observed , with total populations ranging from 5\u201310% ., For the remaining two domains , Ubiquitin ( UBQ ) and protein G ( GB1 ) , no misfolded population is observed ., The ability to capture domain-swapped misfolds with simple coarse-grained simulations potentially allows us to investigate the origin of the misfolding , and its relation , if any , to the topology of the domain in question ., However , we also need to benchmark the accuracy of the results against experiment as far as possible , in order to show that they are relevant ., There are two main sources of information to validate our results ., The first is the overall degree of domain-swapped misfolding for those proteins where it has been characterized , for example by single molecule AFM or FRET experiments ., Qualitatively we do observe good agreement , where data is available: in experiment , domains which have been shown to misfold are TNfn3 ( AFM ) and Titin I27 ( AFM , FRET ) , which are both found to misfold here , while there is 
no detectable misfolded population for protein G ( AFM ) , again consistent with our results ., We also do not observe any misfolding for Ubiquitin , consistent with the lack of experimental evidence for fully domain-swapped species for this protein 15\u201323 ., Quantitatively , the fractional misfolded population is also consistent with the available experimental data ., For instance , the frequency of misfolded domains in native tenascin is \u223c 4% as shown by previous AFM experiments 8 , the misfolded population of I27 dimers is \u223c5% in single-molecule FRET experiments 7 , while the misfolded population of GB1 domains in polyproteins ( GB18 ) is extremely low ( < 0 . 2% ) 14 ., Even though the observed population of misfolded tandem dimers is low , it is potentially a problem considering that many multidomain proteins in nature have a large number of tandem repeats , such as Titin , which contains twenty-two I27 repeats 63 ., Recent FRET experiments on I27 tandem repeats have shown that the fraction of misfolded proteins increases with the number of repeats ., For the 3- and 8-domain polyproteins , the fraction of misfolded domains increases by a factor of 1 . 3 and 1 .
8 , respectively , relative to a tandem dimer 12 ., The second type of evidence comes from experimental structures of domain-swapped dimers ., For several of the proteins , bimolecular domain-swapped structures have been determined experimentally ., While no such structures have yet been determined for single-chain tandem dimers , we can compare the misfolded states with the available experimental data ., For each experimental example , we are able to find a corresponding misfolded species in our simulations with a very similar structure ( related by joining the termini of the two chains in the experimental structures ) ., The domain-swapped dimers obtained from experiments ( Fig 4a , 4c , 4e and 4g ) are strikingly similar to the domain-swapped dimeric tandems from simulations : the domain-swapped SH3 domain with K ( the sequence position after which the central domain begins ) = 37 ( Fig 4b ) , SH2 with K = 72 ( Fig 4d ) , TNfn3 with K = 28 ( Fig 4f ) and PDZ with K = 23 ( Fig 4h ) ., Most of these states have a relatively high population among all the possible misfolds observed in the simulations ( \u201cPopulation\u201d in Table 1 ) ., While the coverage of possible domain swaps is by no means exhaustive , the observed correspondence gives us confidence that the misfolded states in the simulations are physically plausible ., Having shown that the misfolding propensities we obtain are qualitatively consistent with experimental evidence ( and in the case of Titin I27 , in semi-quantitative agreement with single-molecule FRET ) , we set out to establish some general principles relating the properties of each domain to its propensity to misfold in this way ., We can start to formulate a hypothesis based on the alternative folding and misfolding pathways illustrated in Fig 1 ., Native folding has as an intermediate a state in which either the N- or the C-terminal domain is folded ., In contrast , on the misfolding pathway , the first step is formation
of the central domain , followed by that of the terminal domain ., This parallel pathway scheme suggests that a descriptor of the overall misfolding propensity may be obtained from the rate of formation of a single correctly folded domain , relative to that of the central domain ( neglecting back reactions , because these are rarely seen in our simulations ) ., We can study central domain formation in isolation , since these structures are just circular permutants of the native fold , i . e . they have the same sequence as the native protein , but with the position of the protein termini moved to a different point in the sequence , as is also found in nature 35 ., These structures can be thought of as originating from the native by cutting a specific loop connecting secondary structure elements ( the free energy cost of splitting such an element being too high ) , and splicing together the N- and C-termini ., In the context of the tandem dimers , the position at which the loop is cut is the same K that defines the start of the central domain in sequence ., We investigate the role of the central domain by characterizing the free energy landscape of the single domain of each system , as well as all of its possible circular permutants , using umbrella sampling along the reaction coordinate QK ., QK is exactly analogous to the conventional fraction of native contacts coordinate Q 64 , but defined using the corresponding ( frame-shifted ) contacts in the circular permutant pseudo-native structure ., The index K indicates the position along the sequence of the WT where the cut is made in order to convert to the circular permutant ., The free energy surfaces F ( QK ) of two representative systems , SH3 and Ubiquitin , are shown in Fig 5 , with the data for the remaining proteins given in Fig A in S1 Text ., The free energy barrier height for folding \u0394Gf and the stability \u0394Gs are listed in Table 1 ., The free energy plots indicate that the single
domains of Ubiquitin and GB1 are stable only for the native sequence order , and not for any of the circular permutants ., Based on the type of misfolding mechanism sketched in Fig 1 , one would expect that unstable circular permutants would result in an unstable central domain , and consequently no stable domain-swapped misfolding would occur in the dimer folding simulations , as we indeed observe ., This is also consistent with previous studies of polyproteins of GB1 and Ubiquitin using AFM experiments , which reveal high-fidelity folding and refolding 14 , 65 , 66 ., We note that only under very strongly stabilizing conditions is any misfolding observed for ubiquitin dimers: running simulations at a lower temperature ( 260 K ) , we observe a very small ( 1 . 3% ) population of misfolded states from 1024 trial folding simulations ., At a higher temperature of 295 K , once again no misfolding is observed ., In contrast to the situation for GB1 and Ubiquitin , all of the circular permutants of the SH3 domain in Fig 5 are in fact stable , although less so than the native fold ., The destabilization of circular permutants relative to native is in accord with the experimental results for the Src SH3 domain 26 ( the rank correlation coefficient of the stabilities is 0 . 80 ) ., The other domains considered also have stable circular permutant structures ., This is consistent with the fact that all of these domains do in fact form some fraction of domain-swapped misfolded states ., The simplest view of the misfolding mechanism would be as a kinetic competition between the correctly folded intermediates versus the domain-swapped intermediates with a central domain folded ( i . e .
a \u201ckinetic partitioning\u201d mechanism 67 ) ., In this case one might naively expect that the propensity to misfold would be correlated with the relative folding rates of an isolated native domain and an isolated circular permutant structure ., However , the folding barriers \u0394Gf projected onto Q ( for native ) or QK ( for circular permutants ) show little correlation to the relative frequency of the corresponding folded or misfolded state , when considering all proteins ( Table 1 ) ., Since this barrier height may not reflect variations in the folding rate if some of the coordinates are poor ( yielding a low barrier ) or if there are large differences in kinetic prefactors , we have also directly computed the folding rate for the circular permutants of those proteins which misfold , and confirm that the rates of formation of the native fold and circular permutants are similar ., We indeed obtain a strong correlation between the folding rate o","headings":"Introduction, Materials and Methods, Results","abstract":"Recent single molecule experiments , using either atomic force microscopy ( AFM ) or F\u00f6rster resonance energy transfer ( FRET ) have shown that multidomain proteins containing tandem repeats may form stable misfolded structures ., Topology-based simulation models have been used successfully to generate models for these structures with domain-swapped features , fully consistent with the available data ., However , it is also known that some multidomain protein folds exhibit no evidence for misfolding , even when adjacent domains have identical sequences ., Here we pose the question: what factors influence the propensity of a given fold to undergo domain-swapped misfolding ?, Using a coarse-grained simulation model , we can reproduce the known propensities of multidomain proteins to form domain-swapped misfolds , where data is available ., Contrary to what might be naively expected based on the previously described misfolding mechanism , we 
find that the extent of misfolding is not determined by the relative folding rates or barrier heights for forming the domains present in the initial intermediates leading to folded or misfolded structures ., Instead , it appears that the propensity is more closely related to the relative stability of the domains present in folded and misfolded intermediates ., We show that these findings can be rationalized if the folded and misfolded domains are part of the same folding funnel , with commitment to one structure or the other occurring only at a relatively late stage of folding ., Nonetheless , the results are still fully consistent with the kinetic models previously proposed to explain misfolding , with a specific interpretation of the observed rate coefficients ., Finally , we investigate the relation between interdomain linker length and misfolding , and propose a simple alchemical model to predict the propensity for domain-swapped misfolding of multidomain proteins .","summary":"Multidomain proteins with tandem repeats are abundant in eukaryotic proteins ., Recent studies have shown that such domains may have a propensity for forming domain-swapped misfolded species which are stable for long periods , and therefore a potential hazard in the cell ., However , for some types of tandem domains , no detectable misfolding was observed ., In this work , we use coarse-grained structure-based folding models to address two central questions regarding misfolding of multidomain proteins ., First , what are the possible structural topologies of the misfolds for a given domain , and what determines their relative abundance ?, Second , what is the effect of the topology of the domains on their propensity for misfolding ?, We show how the propensity of a given domain to misfold can be correlated with the stability of domains present in the intermediates on the folding and misfolding pathways , consistent with the energy landscape view of protein folding ., Based on these 
observations , we propose a simplified model that can be used to predict misfolding propensity for other multidomain proteins .","keywords":"simulation and modeling, fluorophotometry, protein structure, thermodynamics, research and analysis methods, fluorescence resonance energy transfer, proteins, structural proteins, repeated sequences, molecular biology, spectrophotometry, free energy, physics, biochemistry, biochemical simulations, tandem repeats, protein domains, genetics, biology and life sciences, physical sciences, genomics, computational biology, spectrum analysis techniques, macromolecular structure analysis","toc":null} +{"Unnamed: 0":1198,"id":"journal.pcbi.1006514","year":2018,"title":"RNA3DCNN: Local and global quality assessments of RNA 3D structures using 3D deep convolutional neural networks","sections":"RNA molecules consist of unbranched chains of ribonucleotides , which have various essential roles in coding , decoding , regulation , expression of genes , and cancer-related networks via the maintenance of stable and specific 3D structures 1\u20135 ., Therefore , their 3D structural information would help fully appreciate their functions ., In this context , experiments such as X-ray crystallography , nuclear magnetic resonance ( NMR ) spectroscopy , and cryoelectron microscopy are the most reliable methods of determining RNA 3D structures , but they are costly , time-consuming , or technically challenging due to the physical and chemical nature of RNAs ., As a result , many computational methods have been developed to predict RNA tertiary structures 6\u201332 ., These methods usually have a generator producing a large set of structural candidates and a discriminator evaluating these generated candidates ., A good generator should be able to produce structural candidates as close to native structures as possible , and a good discriminator should be able to recognize the best candidates ., Moreover , a discriminator can direct generator searching 
structural space in heuristic prediction methods ., For protein or RNA tertiary structure prediction , a discriminator generally refers to a free energy function , a knowledge-based statistical potential , or a scoring function ., Several statistical potentials have been developed to evaluate RNA 3D structures , such as RASP 33 , RNA KB potentials 34 , 3dRNAscore 35 and the Rosetta energy function 9 , 16 ., Generally , these potentials are proportional to the logarithm of the frequencies of occurrence of atom pairs , angles , or dihedral angles based on the inverse Boltzmann formula ., The all-atom version of RASP defines 23 atom types , uses distance-dependent geometrical descriptions for atom pairs with a bin width of 1 \u00c5 , and is derived from a non-redundant set of 85 RNA structures ., The all-atom version of RNA KB potential defines 85 atom types , also uses distance-dependent geometrical descriptions for atom pairs , and is derived from 77 selected representative RNA structures ., Moreover , RNA KB potentials are fully differentiable and are likely useful for structure refinement and molecular dynamics simulations ., 3dRNAscore also defines 85 atom types and uses distance-dependent geometrical descriptions for atom pairs with a bin width of 0 . 15 \u00c5 , and is derived from an elaborately compiled non-redundant dataset of 317 structures ., In addition to distance-dependent geometrical descriptions , 3dRNAscore uses seven RNA dihedral angles to construct the statistical potentials with a bin width of 4 . 
5\u00b0 , and the final output potentials are equal to the sum of the two energy terms with an optimized weight ., The Rosetta energy function has two versions: one for low resolution and the other for high resolution ., The low-resolution knowledge-based energy function explicitly describing the base-pairing and base-stacking geometries guides the Monte Carlo sampling process in Rosetta , while the more detailed and precise high-resolution all-atom energy function can refine the sampled models and yield more realistic structures with cleaner hydrogen bonds and fewer clashes ., As the paper on 3dRNAscore reported , 3dRNAscore is the best among these four scoring functions ., Overall , the choices of the geometrical descriptors and the reference states in the scoring functions can affect their performance significantly , and the optimization of the parameters also influences this ., Recently , we have witnessed astonishing advances in machine learning as a tool to detect , characterize , recognize , classify , or generate complex data and its rapid applications in a broad range of fields , from image classification , face detection , auto driving , financial analysis , disease diagnosis 36 , playing chess or games 37 , 38 , and solving biological problems 39\u201342 , to even quantum physics 43\u201345 ., Even this list is incomplete , and has the potential to be extended further in the future ., Therefore , we expect that machine learning methods will be able to help evaluate the structural candidates generated in the process of RNA tertiary structure prediction ., Inspired by the successful application of 2D convolutional neural networks ( CNNs ) in image classification , we believe that 3D CNNs are a promising solution in that RNA molecules can be treated as a 3D image ., Compared with other machine learning methods employing conventional hand-engineered features as input , 3D CNNs can directly use a 3D grid representation of the structure as input without 
extracting features manually ., 3D CNNs have been applied to computational biology problems such as the scoring of protein\u2013ligand poses 46 , 47 , prediction of ligand\u2013binding protein pockets 48 , prediction of the effect of protein mutations 49 , quality assessment of protein folds 50 , and prediction of protein\u2013ligand binding affinity 51 ., Here , we report our work on developing two new scoring functions for RNA 3D structures based on 3D deep CNNs , which we name RNA3DCNN_MD and RNA3DCNN_MDMC , respectively ., Our scoring functions enable both local and global quality assessments ., To our knowledge , this is the first paper to describe the use of 3D deep CNNs to assess the quality of RNA 3D structures ., We also tested the performance of our approaches and made comparisons with the four aforementioned energy functions ., The environment surrounding a nucleotide refers to its neighboring atoms ., To determine the neighboring atoms of a nucleotide , a local Cartesian coordinate system is specified first by its atoms C1\u2019 , O5\u2019 , C5\u2019 , and N1 for pyrimidine or N9 for purine ., Specifically , the origin of the local coordinate system is located at the position of atom C1\u2019 ., The x- , y- , and z-axes of the local coordinate system , denoted as x , y , and z , respectively , are decided according to Eqs 1\u20136 , where rC1\u2032 , rO5\u2032 , rC5\u2032 and rN stand for the vectors pointing from the origin in the global coordinate system to the atoms C1\u2019 , O5\u2019 , C5\u2019 , and N1 or N9 , respectively ., x = rN \u2212 rC1\u2032 ( 1 ), x = x \/ \u2225 x \u2225 ( 2 ), y = ( rO5\u2032 + rC5\u2032 ) \/ 2 \u2212 rC1\u2032 ( 3 ), z = x \u00d7 y ( 4 ), z = z \/ \u2225 z \u2225 ( 5 ), y = z \u00d7 x ( 6 ), The environment surrounding a nucleotide consists of the atoms whose absolute values of x , y , and z coordinates are less than a certain threshold ., Here , the threshold is set to 16 \u00c5 , which means that the environment surrounding a
nucleotide contains the atoms within a cube of length 32 \u00c5 centered at this very nucleotide , as shown in Fig 1A ., For a colorful 2D image , the input of a 2D CNN is an array of pixels of RGB channels ., Similarly , in our work , the nucleotide and its surrounding environment are transformed into a 3D image consisting of an array of voxels ., As shown in Fig 1A , the box of size 32 \u00d7 32 \u00d7 32 \u00c5 is partitioned into 32 \u00d7 32 \u00d7 32 grid boxes ., Each grid box represents a voxel of three channels , and its values are calculated by accumulating the occupation number , mass , or charge of the atoms in the grid box ., The mass and charge information for each type of atom is listed in S1 Table ., After transformation , the input of the 3D CNN is a colorful 3D image of 32 \u00d7 32 \u00d7 32 voxels with three channels corresponding to the RGB channels presented in Fig 1B ., Practically , each channel is normalized to [ 0 , 1 ] by min-max scaling ., The output of our CNN is the nucleotide unfitness score characterizing how poorly a nucleotide fits into its surroundings ., For a nucleotide , its unfitness score is equal to the RMSD of its surroundings plus the RMSD of itself after optimal superposition between its conformations in the native structure and the assessed structure ., The latter RMSD is generally very small , but the former varies over a large range ., Nucleotides with smaller unfitness scores are in a conformation closer to the native conformation , and a score of 0 means that the nucleotide fits into its surrounding environment perfectly and is in its native conformation ., Practically , the nucleotide unfitness score is normalized to [ 0 , 1 ] by min-max scaling ., For the global quality assessment , the unfitness scores of all nucleotides are accumulated ., Fig 1C exhibits the architecture of our CNN , a small VGG-like network 52 containing a stack of convolutional layers , a maxpooling layer , a fully connected layer , and 4 , 282 , 801
parameters in total ., VGGNet is a famous image classification CNN ., It is a very deep network with 19 weight layers , consisting of 16 convolutional layers stacked on each other and three fully-connected layers ., The input image size 224 \u00d7 224 in VGGNet is much larger than our input size 32 \u00d7 32 \u00d7 32 in terms of the side length , and thus we used a smaller architecture ., There are only four 3D convolutional layers in our neural network ., The numbers of filters in the four convolutional layers are 8 , 16 , 32 , and 64 , and the receptive fields of the filters in the first two convolutional layers and in the last two convolutional layers are 5 \u00d7 5 \u00d7 5 voxels and 3 \u00d7 3 \u00d7 3 voxels , respectively ., The convolution stride is set to one voxel ., No spatial padding is implemented in the convolutional layers ., Moreover , a max-pooling layer of stride 2 is placed after the first two consecutive convolutional layers ., Subsequently , one fully connected layer with 128 hidden units is stacked after the convolutional layers ., The final output layer produces a single number , namely , the unfitness score ., All units in hidden layers are activated by the ReLU nonlinear function , while the output layer is linearly activated ., The neural network was trained to reduce the mean squared error ( MSE ) between the true and predicted unfitness scores ., A back-propagation-based mini-batch gradient descent optimization algorithm was used to optimize the parameters in the network ., Batch size was set to 128 ., The training was regularized by dropout regularization for the second and fourth convolutional layers and the fully connected layer , with a dropout ratio of 0 . 2 ., The Glorot uniform initializer was used to initialize the network weights ., The learning rate was initially set to 0 . 
05 , and then decreased by half whenever the MSE of the validation dataset stopped improving for five epochs ., The training process stopped when the learning rate decreased to 0 . 0015625 ., Our 3D CNN was implemented using the python deep learning library Keras 53 , with Theano library as the backend ., To construct the training dataset , first a list of 619 RNAs was downloaded with the search options \u201cRNA Only\u201d and \u201cNon Redundant RNA Structures\u201d from the NDB website http:\/\/ndbserver . rutgers . edu\/ , which means that our training dataset includes RNA-only structures and the RNAs are non-redundant in both sequence and geometry ., Second , the RNAs with an X-ray resolution >3 . 5 \u00c5 were removed from the list above ., Finally , the RNAs in the test dataset were removed and the RNAs in the equivalence classes with the test dataset were also removed ., \u201cStructures that are provisionally redundant based on sequence similarity and also geometrical similarity are grouped into one equivalence class , \u201d as Leontis et al . defined 54 ., Thus , 414 native RNAs were left to construct the training dataset ., According to their length , the 414 RNAs were randomly divided into two groups , namely , 332 RNAs for training and 82 RNAs for validation in the CNN training process ., Practically , the training samples were generated in two ways , namely , by MD and MC methods elaborated as follows ., To evaluate our CNN-based scoring function and make comparisons with the traditional statistical potentials , three test datasets were collected from different sources ., Test dataset I comes from the RASP paper 33 which is generated by the MODELLER computer program from the native structures of 85 non-redundant RNAs given a set of Gaussian restraints for dihedral angles and atom distances , and contains 500 structural decoys for each of the 85 RNAs ., The RMSDs are in different ranges for these RNAs ., The narrowest are from 0 to 3 . 
5 \u00c5 , the broadest are from 0 to 13 \u00c5 , and the RMSDs of most decoys are less than 10 \u00c5 ., This dataset can be downloaded from http:\/\/melolab . org\/supmat\/RNApot\/Sup . _Data . html ., Test dataset II comes from the KB paper 34 , which is generated by both position-restrained dynamics and REMD simulations for 5 RNAs and the normal-mode perturbation method for 15 RNAs ., For the MD dataset , there are 3 , 500 decoys for each of four RNAs whose RMSDs range from 0 to >10 \u00c5 , and 2 , 600 decoys for one RNA ( PDB ID: 1msy ) whose RMSDs range from 0 to 8 \u00c5 ., Meanwhile , for the normal-mode dataset , there are about 490 decoys for each of the 15 RNAs , whose RMSDs range only from 0 to 5 \u00c5 ., This dataset can be downloaded from http:\/\/csb . stanford . edu\/rna ., One point that should be noted is that the downloaded pdb files name atom O2 in pyrimidine bases as \u201cO . \u201d, Test dataset III comes from RNA-Puzzles rounds I to III 55\u201357 , a collective and blind experiment in 3D RNA structure prediction ., Given the nucleotide sequences , interested groups submit their predicted structures to the RNA-Puzzles website before the experimentally determined crystallographic or NMR structures of these target sequences are published ., Therefore , the dataset is produced in a real RNA modeling scenario and can reveal the real performance of the existing scoring function ., Marcin Magnus compiled the submitted structures from rounds I to III , and now the predicted models of 18 target RNAs can be downloaded from https:\/\/github . 
com\/RNA-Puzzles\/RNA-Puzzles-Normalized-submissions ., There are only 12\u201370 predicted models for the 18 RNAs , some of whose RMSDs range from 2 to 4 \u00c5 , while some cover a wide range from 20 to 60 \u00c5 ., Two neural networks were trained based on two sets of training samples ., The first set included only MD training samples and the second set included both MD and MC training samples ., The two resulting models are named RNA3DCNN_MD and RNA3DCNN_MDMC , respectively ., We tested test datasets I and II using RNA3DCNN_MD , and tested test dataset III using RNA3DCNN_MDMC ., We trained two neural networks because the three test datasets come from two kinds of methods ., Test datasets I and II were produced by MD and normal-mode methods initiated from native structures , while test dataset III was produced by MC structure prediction methods , covering a broad structural space ., After testing , for test datasets I and II , RNA3DCNN_MD performed better than RNA3DCNN_MDMC ., For test dataset III , however , RNA3DCNN_MDMC was superior ., The results are reasonable ., RNA3DCNN_MD is more accurate in the region close to native structures because most of the MD training samples are not very far away from native structures or native topologies ., However , when MC training samples were included , the neural network RNA3DCNN_MDMC was not as accurate as RNA3DCNN_MD for the structures around native ones and was biased toward non-native structures ., Conversely , RNA3DCNN_MD never saw the more random training structures far away from native states and thus it did not perform as well as RNA3DCNN_MDMC for test dataset III ., In general , a scoring function with good performance should be able to recognize the native structure from a pool of structural decoys and to rank near-native structures reasonably ., Consequently , two metrics were used for a quantitative comparison with other scoring functions ., One was the number of native RNAs with minimum scores in the test 
dataset , and the other was the Enrichment Score ( ES ) 34 , 35 , 58 , which characterizes the degree of overlap between the structures of the top 10% scores ( Etop10% ) and the best 10% RMSD values ( Rtop10% ) in the structural decoy dataset ., The ES is defined as, ES = |Etop10% \u2229 Rtop10%| \/ ( 0 . 1 \u00d7 0 . 1 \u00d7 Ndecoys ) ( 7 ), where |Etop10% \u2229 Rtop10%| is the number of structures in both the lowest 10% score range and the lowest 10% RMSD range , and Ndecoys is the total number of structures in the decoy dataset ., If the score and RMSD are perfectly linearly correlated , ES is equal to 10 ., If they are completely unrelated , ES is equal to 1 ., If ES is less than 1 , the scoring function performs rather poorly with respect to that decoy dataset ., We compared our CNN-based scoring function with four traditional statistical potentials for RNA , namely , 3dRNAscore , KB , RASP , and Rosetta ., First , the number of native RNAs with minimum scores was counted , as listed in Table 1 ., As the 3dRNAscore paper reported , 3dRNAscore identified 84 of 85 native structures , KB 80 of 85 , RASP 79 of 85 , and Rosetta 53 of 85 ., 3dRNAscore is thus clearly the best among the four statistical potentials ., Our RNA3DCNN identified 62 of 85 native structures , and the unidentified native structures generally had the second or third lowest scores , almost the same as the lowest scores ., Fig 2A shows an example in test dataset I in which the native structure was identified by our method , and Fig 2B shows an example in test dataset I in which the native structure had a slightly higher score calculated by our method than a structure with an RMSD of 0 . 9 \u00c5 ., The RMSD-score plots of all 85 examples are provided in S1 Fig . 
The result that our method identified fewer native structures is reasonable ., Specifically , the input and output of our neural network are geometry based , and thus similar structures have similar scores ., The structures in the 0\u20131 \u00c5 range generally resemble each other and thus , for our scoring function , all the non-native structures with minimum scores have an RMSD \u223c1 \u00c5 ., Meanwhile , for the statistical potentials , atom steric clashes , angle , or dihedral angle deviations from the native form may quickly increase the potential values ., Second , the ES was calculated ., The mean ES values of the 85 RNAs calculated by 3dRNAscore , RASP , Rosetta , and our method RNA3DCNN were 8 . 69 , 8 . 69 , 6 . 7 , and 8 . 61 , respectively ., The mean ES calculated by KB is not given because we could not access its original website and download its program ; the results of the KB method shown in this paper come from the papers on KB and 3dRNAscore ., The ES values of 3dRNAscore and our method are almost the same ., The mean ES values of three of the methods are very large , suggesting that the RMSDs and scores calculated by the different methods are highly linearly correlated and that this test dataset is an easy benchmark to rank near-native decoys ., For the MD decoys in test dataset II , 3dRNAscore and KB identified 5 of 5 native structures , RASP 1 of 5 , Rosetta 2 of 5 , and our method 4 of 5 , as listed in Table 1 ., Our method gave the lowest score to a decoy with an RMSD of 0 . 97 \u00c5 for RNA 1f27 , as shown in Fig 3B ., The ES values of the MD decoys using different scoring functions are listed in Table 2 ., Fig 3A shows the relationship between RMSD and the score calculated by our method for the RNA 434d with the best ES ., The RMSD-score plots of all five examples are provided in S2 Fig . 
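The architecture described earlier (four valid 3D convolutions with 8, 16, 32, and 64 filters, 5\u00b3 then 3\u00b3 receptive fields, a stride-2 max-pool after the first two convolutions, a 128-unit fully connected layer, and a single linear output) can be checked against the reported total of 4,282,801 parameters with a short back-of-the-envelope script (a sanity check assuming a pool size of 2 and one bias per filter/unit):

```python
def conv3d_params(filters, k, in_ch):
    # k^3 * in_ch weights per filter, plus one bias per filter
    return filters * (k ** 3 * in_ch + 1)

total = 0
total += conv3d_params(8, 5, 3)    # conv1: 32 -> 28 voxels per side
total += conv3d_params(16, 5, 8)   # conv2: 28 -> 24, then max-pool -> 12
total += conv3d_params(32, 3, 16)  # conv3: 12 -> 10
total += conv3d_params(64, 3, 32)  # conv4: 10 -> 8
flat = 8 ** 3 * 64                 # flattened feature volume after conv4
total += 128 * (flat + 1)          # fully connected layer, 128 units
total += 1 * (128 + 1)             # single-unit linear output
print(total)  # 4282801, matching the reported parameter count
```

The fully connected layer dominates (about 4.19 million of the 4.28 million parameters), which is typical for small VGG-style networks.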
From the table , we can see that our method performed better than 3dRNAscore for 2 of 5 RNAs , slightly worse for 1 of 5 RNAs , and worse for 2 of 5 RNAs , especially for the RNA 1f27 , because the native structure had a slightly higher score than the decoys with RMSDs around 1 \u00c5 ., Moreover , our method performed better than KB , RASP , and Rosetta for 3 of 5 RNAs , comparably for 1 of 5 RNAs , and worse for the RNA 1f27 , as explained above ., For the normal-mode decoys in this dataset , 3dRNAscore identified 12 of 15 native structures , RASP 11 of 15 , Rosetta 10 of 15 , KB and our method 15 of 15 , as listed in Table 1 ., The ES values of the normal-mode decoys using different scoring functions are also listed in Table 2 ., From the table , we can see that our method performed better than 3dRNAscore for 7 of 15 RNAs , equally for 4 of 15 RNAs , and worse for only 4 of 15 RNAs ., Moreover , our method performed better than KB , RASP , and Rosetta for 12 , 11 , and 13 of 15 RNAs , respectively ., The mean ES values of 3dRNAscore and our method were the same , and were greater than those of the other scoring functions ., The RMSD-score plots of all 15 examples are provided in S2 Fig ., The structures in test dataset III are derived from different groups by different RNA modeling methods ., There are only dozens of predicted models for each target RNA and the RMSDs are almost always greater than 10 \u00c5 , and often even greater than 20 or 30 \u00c5 ., Consequently , we did not calculate the ES for this dataset and gave only the RMSDs of the models with minimum scores in Table 3 . 
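Returning to the input construction of Eqs 1\u20136 in Materials and methods, the orthonormal nucleotide frame can be sketched as follows (an illustrative sketch with hypothetical helper names; the coordinates below are toy values, not from a real structure):

```python
import math

def local_frame(r_c1, r_o5, r_c5, r_n):
    # Eqs 1-6: x along C1'->N, y toward the O5'/C5' midpoint,
    # z = x cross y; y is then recomputed so the frame is orthonormal.
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    def unit(a):
        n = math.sqrt(sum(c * c for c in a))
        return [c / n for c in a]
    cross = lambda a, b: [a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0]]
    x = unit(sub(r_n, r_c1))                                    # Eqs 1-2
    y = sub([(r_o5[i] + r_c5[i]) / 2 for i in range(3)], r_c1)  # Eq 3
    z = unit(cross(x, y))                                       # Eqs 4-5
    y = cross(z, x)                                             # Eq 6
    return x, y, z

# the resulting axes are mutually orthogonal unit vectors
x, y, z = local_frame([0, 0, 0], [1, 1, 0], [1, -1, 0], [2, 0, 1])
dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
print(abs(dot(x, y)) < 1e-9, abs(dot(x, z)) < 1e-9, abs(dot(y, z)) < 1e-9)
```

Atoms whose coordinates in this frame all have absolute value below the 16 \u00c5 threshold then form the surrounding environment that is voxelized.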
The results of the KB method are not provided because we could not access its website and obtain the program ., From the table , we can see that our RNA3DCNN identified 13 of 18 native RNAs , 3dRNAscore 5 of 18 , RASP 1 of 18 , and Rosetta 4 of 18 ., For puzzle 2 , though the native structures were not identified , our method gave the lowest RMSD among the four methods ., For puzzle 3 , our method gave an RMSD as low as the other two methods ., Fig 4A shows an example in test dataset III in which the native structure was well identified by our method , and Fig 4B shows one in which it was not ., The RMSD-score plots of all 18 examples are provided in S3 Fig ., For test datasets I and II , all decoys are obtained from native structures , which means that they almost always stay around one local minimum in the energy landscape ., In contrast , for test dataset III , in the real modeling scenario , the structures are far from native topologies and are located at different local minima in the energy landscape ., For this reason , we trained two neural networks with two sets of training samples , that is , one set including only training samples from MD simulations initiated from native structures and another set including both MD training samples and MC training samples obtained in the broader and more complicated structural space ., Our scoring function can evaluate each nucleotide , reveal the regions in need of further structural optimization , and guide the sampling direction in RNA tertiary structure modeling ., Fig 5 portrays how our scoring function helps locate the unfit regions ., In this figure , a decoy of RMSD 3 . 
0 \u00c5 from test dataset II MD decoys and the native RNA 1nuj are superimposed , and thicker tubes show larger deviations from the native structure ., The rainbow colors represent the calculated unfitness scores of each nucleotide , and the colors closer to red represent larger unfitness scores ., We can see that the tubes in nucleotides 1 , 7 , 8 , 9 , and 14 are much thicker , and the colors of those regions are much closer to red , which means that our scoring function can rank the nucleotide quality correctly ., Nucleotides 1 and 14 are the terminal nucleotides in two chains and are unpaired , so the deviations of these two are the largest ., Nucleotides 7\u20139 are in the internal loop , so the deviations are larger than those of the remaining helical regions ., The Pearson correlation coefficients between actual and predicted nucleotide unfitness scores were 0 . 69 and 0 . 34 for MD decoys and NM decoys in test dataset II , respectively , as shown in S4 Fig ., The structures in NM decoys are all near native structures with RMSDs ranging from 0 to 5 \u00c5 , a narrow range that weakens the correlation ., Saliency maps were used to visualize the trained network and help understand which input atoms are important in deciding the final output ., In paper 59 , an image-specific class saliency map was first introduced to rank the pixels of an input 2D image based on their influence on the class score by computing the gradient of the output class score with respect to the input image ., The gradient can reveal how sensitive the class score is to a small change in input image pixels ., Larger positive gradients mean that a slight decrease in the corresponding pixels can cause the true class score to drop markedly , and thus the corresponding pixels are more important in determining the right output class ., Meanwhile , for our regression problem and a near-native conformation , a smaller output is better and the voxels of negative gradients were highlighted and important 
., Moreover , we mapped the gradients of each voxel back to the corresponding atoms ., In Fig 6 , examples of saliency maps for the three input channels are presented ., A , B , and C correspond to the atomic occupation number , mass , and charge channels , respectively ., The example is used to calculate the unfitness score of the 12th nucleotide in a helical region for the native RNA 1nuj ., The nucleotide under assessment is drawn as spheres and sticks , its surrounding environment is drawn as sticks , while the atoms beyond its surrounding environment are shown as a black cartoon ., The redder atoms represent more negative gradients , the bluer atoms represent larger positive gradients , and the nearly white atoms represent gradients close to 0 ., The red regions are highlighted and more important in deciding the final output ., In the atomic occupation number channel , atomic category differences disappear and only shapes count ., From Fig 6A , we can see that the atoms in the nucleobases of the 10th\u201313th and 15th\u201319th nucleotides are highlighted and atom N3 in the 16th nucleotide is the most important , in accordance with the base-pairing and base-stacking interactions ., In the atomic mass channel , the importance of atoms in the nucleobases described above declines somewhat , while atom P in the 12th nucleotide and atom N3 in the 16th nucleotide are the most important , because atom P is much heavier than atoms C , N , and O , and atom N3 is in A12\u2019s paired base U16 ., In the atomic charge channel , the seven most important atoms are N1 , P , N3 , and O3\u2019 in the 12th nucleotide , atoms C4 and C2 in the 16th nucleobase , and atom N2 in the 17th nucleobase ., Overall , from the analyses of the saliency maps , it was found that the neural networks can automatically learn knowledge , such as the relevance of base-pairing and base-stacking interactions to the score , from the training data without any a priori knowledge ., It would be very 
interesting to see whether neural networks can mine new knowledge from data in future work ., We tested the computational time on 100 decoys of 91 nucleotides ., The total time was 321 . 0 seconds ., For comparison , the C++ version of the 3dRNAscore method took only 19 seconds ., However , it was found that 99 . 6% of our computational time ( 319 . 7 seconds ) was used to prepare the input to the CNN , and this time decreased to 2 seconds after we changed the code from Python to C++ ., Therefore , the CNN-based approach is very efficient in terms of speed , and it is estimated that the overall computational time of our method will be approximately 3 seconds if we rewrite the entire code in C++ ., However , the computational time of the Python version is acceptable for now ., We postpone the code rewriting until it becomes necessary ., Moreover , our method can be downloaded from https:\/\/github . com\/lijunRNA\/RNA3DCNN ., Recently , we have witnessed the astonishing power of machine learning methods in characterizing , classifying , and generating complex data in various fields ., It is therefore interesting to explore the potential of machine learning in characterizing and classifying RNA structural data ., In this study , we developed two 3D CNN-based scoring models , named RNA3DCNN_MD and RNA3DCNN_MDMC , for assessing structural candidates built by two kinds of methods ., If the structural candidates are generated by MC methods such as fragment assembly , RNA3DCNN_MDMC is suggested ., If the structural candidates are not very far away from the native structures , such as from MD simulations , the RNA3DCNN_MD model is better ., We also compared our method with four other traditional scoring functions on three test datasets ., The current 3D CNN-based approaches performed comparably with or better than the best statistical potential 3dRNAscore on different test datasets ., For the first test dataset , the mean ES was almost the 
same as that of the best traditional scoring function , 3dRNAscore ., The reason why the number of native structures identified by our method was much smaller than that by other scoring functions is that our method is structure-based and the scores of native structures and decoys of RMSD less than 1 . 0 \u00c5 are almost the same ., This suggests that our method is robust if an RNA structure does not change much ., For the second test dataset , our method generally performed similarly to 3dRNAscore and outperformed the three other scoring functions ., For the MD decoys in the second test dataset , our method was slightly worse than 3dRNAscore ., For the normal-mode decoys in the second test dataset , our method identified all the native structures , while 3dRNAscore identified only 12 of 15 native RNAs , and our method outperformed 3dRNAscore for 7 of 15 RNAs and underperformed it for only 4 of 15 RNAs ., For the third test dataset from blind and real RNA modeling experiments , our method was far superior to the other scoring functions in identifying the native structures ., Our method has some novel features ., First , it is free of the choice of the reference state , which is a difficult problem in traditional statistical potentials ., Second , it treats a cube of atoms as a unit like a many-body potential , while traditional statistical potentials divide them into atom pairs ., Moreover , our method can evaluate each nucleotide , reveal the regions in need of further structural optimization , and guide the sampling direction in RNA tertiary structure prediction ., Our method demonstrates the power of CNNs in quality assessments of RNA 3D structures and shows the potential to far outperform traditional statistical potentials ., There remains great scope to improve the CNN models , such as by expanding them to include more input channels ( only three are considered currently ) , featuring more complex network architecture , and involving larger training datasets 
., Moreover , more RNA-related problems can be dealt with by 3D CNNs , such as protein\u2013RNA binding affinity prediction and RNA\u2013ligand docking and virtual screening .","headings":"Introduction, Materials and methods, Results and discussion","abstract":"Quality assessment is essential for the computational prediction and design of RNA tertiary structures ., To date , several knowledge-based statistical potentials have been proposed and proved to be effective in identifying native and near-native RNA structures ., All these potentials are based on the inverse Boltzmann formula , while differing in the choice of the geometrical descriptor , reference state , and training dataset ., Via an approach that diverges completely from the conventional statistical potentials , our work explored the power of a 3D convolutional neural network ( CNN ) -based approach as a quality evaluator for RNA 3D structures , which used a 3D grid representation of the structure as input without extracting features manually ., The RNA structures were evaluated by examining each nucleotide , so our method can also provide local quality assessment ., Two sets of training samples were built ., The first one included 1 million samples generated by high-temperature molecular dynamics ( MD ) simulations and the second one included 1 million samples generated by Monte Carlo ( MC ) structure prediction ., Both MD and MC procedures were performed for a non-redundant set of 414 RNAs ., For two training datasets ( one including only MD training samples and the other including both MD and MC training samples ) , we trained two neural networks , named RNA3DCNN_MD and RNA3DCNN_MDMC , respectively ., The former is suitable for assessing near-native structures , while the latter is suitable for assessing structures covering large structural space ., We tested the performance of our method and made comparisons with four other traditional scoring functions ., On two of three test datasets , our method 
performed similarly to the state-of-the-art traditional scoring function , and on the third test dataset , our method was far superior to other scoring functions ., Our method can be downloaded from https:\/\/github . com\/lijunRNA\/RNA3DCNN .","summary":"RNA is an important and versatile macromolecule participating in various biological processes ., In addition to experimental approaches , the computational prediction of RNA 3D structures is an alternative and important source of obtaining structural information and insights into their functions ., An important part of these computational prediction approaches is structural quality assessment ., For this purpose , we developed a 3D CNN-based approach named RNA3DCNN ., This approach uses raw atom distributions in 3D space as the input of neural networks and the output is an RMSD-based nucleotide unfitness score for each nucleotide in an RNA molecule , thus making it possible to evaluate local structural quality ., Here , we tested and made comparisons with four other traditional scoring functions on three test datasets from different sources .","keywords":"molecular dynamics, neural networks, particle physics, statistics, rna structure prediction, neuroscience, nucleotides, atoms, mathematics, forecasting, composite particles, research and analysis methods, computer and information sciences, rna structure, mathematical and statistical techniques, chemistry, molecular biology, physics, biochemistry, rna, molecular structure, nucleic acids, biology and life sciences, physical sciences, computational chemistry, chemical physics, statistical methods, macromolecular structure analysis","toc":null} +{"Unnamed: 0":14,"id":"journal.pcbi.1005103","year":2016,"title":"Forecasting Human African Trypanosomiasis Prevalences from Population Screening Data Using Continuous Time Models","sections":"Human African trypanosomiasis ( HAT ) , also known as sleeping sickness , is a parasitic disease that is caused by two sub-species of 
the protozoan Trypanosoma brucei: Trypanosoma brucei gambiense ( gambiense HAT ) and Trypanosoma brucei rhodesiense ( rhodesiense HAT ) ., The infection causing the disease is transmitted from person to person through the tsetse fly ., It is estimated that there were 20000 cases in the year 2012 1 and that 70 million people from 36 Sub-Saharan countries are at risk of HAT infection 2 , 3 ., Our work focuses on gambiense HAT , which represents 98% of all HAT cases 3 ., Gambiense HAT , which we will refer to as \u201cHAT\u201d from now on , is a slowly progressing disease and is fatal if left untreated ., In the first stage of the disease , symptoms are usually absent or non-specific 4 ., The median duration of this stage is about 1 . 5 years 5 ., By the time patients arrive at a healthcare provider , the disease has often progressed to the neurological phase , which causes severe health problems ., In addition , this treatment delay increases the rate of transmission , since an infected patient is a potential source of infection for the tsetse fly 4 , 6 ., Therefore , active case finding and early treatment are key to the success of gambiense HAT control 7 , 8 ., The current case finding strategy uses mobile teams that travel from village to village to conduct exhaustive population screening 4 , 8 , 9 ., For example , 35 mobile teams are active in the Democratic Republic of the Congo ( DRC ) ., Because this strategy has considerably reduced disease prevalence in several African countries 6 , 10\u201312 , the disease is no longer perceived as a major threat ., Consequently , donors are now scaling down their financial commitments 8 ., This , however , poses a serious risk to the control of HAT ., The disease tends to re-emerge when screening activities are scaled down , bringing about the risk of a serious outbreak , as shown by an epidemic in the 1990s 4 , 11 , 13 ., For example , the number of cases in 1998 is estimated to have exceeded 300000 3 ., In order to 
minimize the risk of re-emergence when resources are scaled down , and in order to eliminate and eradicate the disease , maximizing the effectiveness of the control programs is crucial ., Mpanya et al . 9 suggest that the effectiveness of population screening is determined by ( among others ) the management and planning of the mobile teams ., Planning decisions\u2014which determine which villages to screen , and at what time interval to screen them\u2014have a direct impact on the risk and the magnitude of an outbreak ., Existing literature does not address these issues , as highlighted by the WHO 1 , and a wide variety of screening intervals have been applied in different control programs 12 , 14 , 15 ., To optimize the planning decisions , it is of key importance to be able to predict the evolution of the HAT prevalence level in the villages at risk ., This allows decision makers to assess the relative effectiveness of a screening round in these villages and to prioritize the screening rounds to be performed ., However , practical tools for predicting HAT prevalence appear to be lacking ., Existing models for HAT are mostly based on differential equations , describing the rate of change for the HAT prevalence level among humans and flies as a function of the prevalence levels among humans and flies ( some models also include an animal reservoir ) 16\u201322 ., As the information needed to use such models\u2014e . g . 
, the number of tsetse flies in a village\u2014is not available on the village level , using these models for prediction is impractical ., This paper therefore sets out to develop practical models describing and predicting the expected evolution of the HAT prevalence level in a given village , based on historical information on HAT cases and screening rounds in that village ., The main difference with the models mentioned in the previous paragraph is that our models make no assumptions about the causal factors underlying the observed prevalence levels: the \u201cinflow\u201d of newly infected persons and the \u201coutflow\u201d of infected persons by cure or death ., Instead , we just consider data on the net effect of these two processes\u2014the evolution of the prevalence level\u2014and fit five different models to this ., To analyze the predictive performance of these models , we make use of a dataset describing screening operations and HAT cases in the Kwamouth district in the DRC for the period 2004\u20132013 ., Furthermore , we use one of the models to analyze the fixed frequency screening policy , which assigns to each village a fixed time interval for consecutive screening rounds ., Specifically , we investigate screening frequency requirements for reaching elimination and eradication ., Here , eradication is defined as \u201cletting the expected prevalence level go to zero in the long term\u201d , and elimination is defined as \u201creaching an expected prevalence level of one case per 10000\u201d ., Our paper thereby contributes to the branch of research on control strategies for HAT ., Next , we list several other papers that are highly related to our work ., The effectiveness of active case finding operations is analyzed by Robays et al . 
23 , who define \u201ceffectiveness\u201d as the expected fraction of cases in a village which will eventually get cured as a result of a screening round in that village ., The papers by Stone & Chitnis 16 , Chalvet-Monfray et al . 20 , and Artzrouni & Gouteux 24 introduce differential equation models to gain structural insights on the effectiveness of combinations of active case finding and vector control efforts and on the requirements for eradicating HAT ., The effect of active case finding activities is modeled through a continuous \u201cflow\u201d of infected individuals into the susceptible compartment ., Since we explicitly model the timing and the effects of a screening round , this is one of the main differences with our paper ., Finally , Rock et al . 10 study the effectiveness of screening and treatment programs and the time to elimination using a multi-host simulation model ., Their paper , however , considers the screening frequency as a given , whereas we consider the effects of changing this frequency ., Furthermore , we propose models for predicting prevalence on a village level , whereas their model implicitly assumes all villages to be homogeneous ., Our dataset consists of information on screening operations in the period 2004\u20132013 in the health zone Kwamouth in the province Bandundu ., The raw data were cleaned up based on the rules described in S1 Text ., The number of villages in the dataset equals 2324 , and 143 of these villages were included in the data analysis based on three criteria: ( 1 ) the number of screening rounds recorded was at least two , ( 2 ) at least one case has been detected over the time horizon , and ( 3 ) at least one record of the number of people screened during the operation was available ., The first condition is necessary to enable modeling the prevalence level observed in a given screening round as a function of past observed prevalence levels , and the third condition is necessary for estimating prevalence 
itself ., We estimate the prevalence level in a village at the time of a screening round as the number of cases detected in that round over the number of people participating in that round ., Furthermore , lacking population size data , we estimate the population of a village as the maximum number of people participating in a screening round reported for that village ., Though our dataset also contains cases identified by the regular health system in between successive screening rounds , these do not yield ( direct ) estimates of prevalence levels in the corresponding villages , as required by the models proposed in the next section ., We therefore focus on the active case finding data only ., The total number of screening rounds reported for the 143 villages included equals 766 ( on average 5 . 4 per village ) ., Fig 1 shows cumulative distributions of the observed prevalence level in these screening rounds ( mean 0 . 0055 , median 0 . 0011 , standard deviation 0 . 0121 ) , the time interval between each pair of consecutive screening rounds ( mean 1 . 28 , median 1 . 00 , standard deviation 1 . 03 ) , the estimated population for each village ( mean 1073 , median 450 , standard deviation 2046 ) , and the participation level in the screening rounds ( mean 0 . 69 , median 0 . 72 , standard deviation 0 . 
27 ) ., Note that the relatively large number of observations with a participation level of 100% is due to the method used to estimate the population sizes ., Before we propose our prediction methods , we introduce some notations ., A table of the most important notations used in this article can be found in S1 Table ., Let sv = {sv1 , sv2 , \u2026} denote the vector of screening time intervals for village v , where sv1 denotes the time between the start of the time horizon and the first screening for this village , sv2 denotes the time between the first and the second screening , and so on ., The time at which the nth screening is performed is given by Svn = \u2211m\u2264n svm and the participation fraction in this screening round is denoted by pvn ., Parameter Nv represents the population size of village v . Furthermore , let iv represent historical information on HAT cases in this village: the numbers of cases detected during past screening rounds ., We model the expected prevalence level at time t in village v as a function fv ( \u22c5 ) of sv , iv , and some parameters \u03b2: fv ( t , sv , iv , \u03b2 ) ., Note that the expected prevalence level is a latent , i . e . unobserved , variable , and that the observed prevalence level , xv ( t ) , generally deviates from the expected value ., We measure prevalence levels fv ( \u22c5 ) and xv ( \u22c5 ) as fractions and represent the difference between the expected and observed prevalence level in village v by the random variable \u03b5v:, x v ( t ) = f v ( t , s v , i v , \u03b2 ) + \u03b5 v ( 1 ) Time series models such as discrete time ARMA , ARIMA or ARIMAX models seem to be the most popular methods for predicting prevalence ( or incidence ) ( see e . g . 
25 , 26 ) ., These models describe the prevalence level at time t as a linear function of the prevalence levels at time t \u2212 1 , t \u2212 2 , \u2026 and ( optionally ) some other variables ., Their applicability in our context is however limited ., Discrete time models require estimates of the prevalence level at each time unit ( e . g . , each month ) , whereas information to estimate the HAT prevalence level is available only at moments at which a screening round is performed ., Namely , many HAT patients are not detected by the regular health system , particularly if they are in the first stage of the disease 8 ., The class of continuous time models is much more suitable for analyzing data observed at irregularly spaced times ., These models assume that the variable of interest , fv ( t , sv , iv , \u03b2 ) , follows a continuous process , defining its value at each t > 0 ., The next subsections propose five continuous time models for predicting HAT prevalence levels ., We again note that models describing the causal processes determining the observed prevalence levels in detail ( e . g . 
, by explicitly modelling disease incidence , passive case finding , death and cure ) may be most intuitive , but require data that are not available on a village level ., Therefore , to safeguard their relevance for practical application , the variables we include are only those that are available on a large scale ., This does not imply that our models neglect the causal processes ., Instead , they are to some extent accounted for in an implicit way by fitting the models to the observed prevalence levels ., Data that are typically available at village level are numbers of HAT cases found during screening rounds and the times of these screening rounds ., For a given village , the first yields estimates of past prevalence levels , and the latter yield the time intervals between past screening rounds ., We hypothesize that the current expected prevalence level at time t is related to past prevalence levels , past screening intervals , and in particular the time since the last screening round , which we denote by \u03b4 v - ( t ) = min n { t - S v n | S v n \u2264 t } ., Hence , we include ( functions of ) these variables in our models ., Linear regression models are very widely used in the world of forecasting ( see e . g . 
27 ) ., Major advantages of these models are that they are easy to understand , to implement , to fit , and to analyze ., Therefore , the first model we introduce is a linear model ( model 1 ) , which also serves as a benchmark for our more advanced models ., This model describes the expected HAT prevalence in a given village as a function of the time since the last screening and past prevalence levels ., Such a linear model is , however , very vulnerable to a typical structure present in active case finding datasets ., High past prevalence levels tend to increase the priority of screening a village , causing the time intervals between screening rounds to decrease ., As a result , \u03b4 v - ( t ) is a highly \u201cendogenous\u201d variable ., More formally , external variables ( past prevalence levels ) are correlated with both the dependent variable ( fv ( t ) ) and the independent variable ( \u03b4 v - ( t ) ) , which makes it hard to quantify the ( causal ) relation between them ., In response to this , we present four alternative models ., Model 2 is a fixed effects model , which adds a dummy variable for each village to the initial model ., Model 3 is a ( non-linear ) exponential growth and decay model inspired by the SIS epidemic model ., This model has been used extensively for modeling epidemics that are characterized by an initial phase in which the number of infected individuals grows exponentially , and a second phase in which this number levels off to a time-invariant carrying capacity ., We refer to model 3 as the logistic model with a constant carrying capacity ., Finally , model 4 is a less data-dependent version of model 3 and model 5 is a variant of model 3 in which the carrying capacity is allowed to vary over time ., As HAT prevalence levels are very low , the variance of these levels is high , which increases the chance that there are significant outliers among the observations ., For example , no cases were detected in three out of four
screening rounds performed in a village of 122 people , whereas five cases were detected in the 4th round ., This implies two things ., First , observed prevalence levels will generally deviate significantly from expected prevalence levels ., Second , we need to choose a technique for estimating the model coefficients that is robust with respect to outliers ., Instead of Least Squares ( LS ) regression , one of the most commonly applied model fitting methods , we therefore use Least Absolute Deviations ( LAD ) regression to fit the model parameters , which is known to be relatively insensitive to outlying observations 32 ., An alternative technique would be to use a maximum likelihood estimation ( MLE ) approach based on a heavy-tailed probability distribution for the observed prevalence levels ., In S2 Table , we show the results obtained when assuming a Poisson , Beta-Binomial , or Negative Binomial distribution ., Each of the MLE approaches is , however , clearly outperformed by the LAD regression approach ., The variance of the observed prevalence level strongly depends on the sample size ., For example , under the assumption of an independent infection probability for each person , the variance is inversely proportional to the sample size ., We therefore weight the fitting deviation evn = fv ( Svn ) \u2212 xv ( Svn ) for observation n for village v by weight w v n = N v \u00b7 p v n , yielding the following weighted LAD regression problem:, min \u03b2 S a b s ( \u03b2 ) = \u2211 ( v , n ) w v n | e v n | ( 13 ) To deal with the risk of overfitting , we select the variables to be included in the models by means of a backward elimination method ., This method initially includes all variables in the model and iteratively removes the least significant variable ( if its p-value > 0 . 
10 ) and estimates the model with the remaining variables ., The algorithm stops as soon as all remaining variables are significant or if only one variable is left ., We enforce that \u03b1v and \u03ba cannot be removed by the backward elimination method so as to preserve essential elements of the corresponding models ., Hence , only parameters \u03b21-\u03b28 in models 1 and 2 , and parameters \u03b21-\u03b22 in models 3\u20135 could be removed ., Finally , to test the predictive performance of the models , we split the data in an estimation sample ( which we use for fitting the model ) and a prediction sample ., Specifically , for each of the 143 villages , we include the last screening round in the prediction sample , and include the others in the estimation sample ., Next , we measure performance based on the mean of the prediction errors M E = \u2211 v e v n ^ | V | , indicating whether the predictions obtained by the model are biased , and based on two indicators for the amount of explained variation in the prevalence levels: the mean absolute error , M A E = \u2211 v | e v n ^ | | V | and the mean relative error , M R E = \u2211 v | e v n ^ | \u2211 v x v ( S v n ^ ) ., Here , the index combination v n ^ indicates the last screening round for village v . The intuition behind the measures of explained variation is that they equal 0 if the predicted prevalence levels are exactly equal to the observed prevalence levels ( i . e . , the model perfectly explains the variation in the observed prevalence levels ) and that their value increases when the absolute difference between predicted and observed levels increases ( i . e . 
when the model explains less variation in observed prevalence levels ) ., We use Matlab R2015b for the implementation of our methods ., Table 1 presents the coefficient estimates for the variables of the five presented models ., The results for models 1 and 2 are very similar ., Seven of the eight variables are identified as being non-significant by the backward elimination algorithm: the interaction terms , the long term prevalence level , the time since the last screening round , and the square root of the time since the last screening round ., The resulting model provides a clear prediction method: the expected prevalence equals 24 . 5% of the prevalence level observed at the previous screening round ( note , if this level was 0 . 0% , the estimated expected prevalence remains 0 . 0% ) according to model 1 , and equals 14 . 7% of this prevalence level plus a constant fraction \u03b1v according to model 2 ., Hence , this model predicts that , in the absence of screening activities , the expected prevalence remains the same over time ., The fitted models 3 , 4 , and 5 reveal a clear and intuitive relationship between screening frequency , prevalence , and carrying capacity: a larger historical prevalence indicates a higher carrying capacity , and facing an equal historical prevalence for a higher historical screening frequency indicates a higher carrying capacity ., The constant term has been identified as non-significant for models 3 and 4 and as significant for model 5 ., To illustrate the typical output of models 3 and 4 , Fig 2 shows the development of the expected prevalence levels for two villages over time ( the lines ) , as well as the observed prevalence levels ( stars and circles ) ., Furthermore , Fig 3 depicts the carrying capacities for the 143 villages in Kwamouth , as estimated by the LMCCC model ., Though data to validate these estimates are lacking , we note that they are in the same order of magnitude as prevalence levels found during screening 
rounds ., The latter are usually between 1% and 5% in high or very high transmission areas , and exceed 10% in some extreme cases 33 , 34 ., As mentioned in Section Model Fitting , we measure the predictive performance of the different models in terms of the prediction bias and in terms of the amount of variation explained ., Table 2 contains the values of the different indicators for each of the models , and Fig 4 and S1 Fig . compare the prediction errors produced by the different models ., These prompt several interesting observations ., First , the prediction bias ranges from 0 . 47\/1000 ( rLMCCC model ) to -1 . 86\/1000 ( LM model ) ., Given that the average observed prevalence in the 766 screening rounds in our dataset equals 5 . 5\/1000 , we consider the biases of the LM model and the LMVCC model as quite substantial ., Yet , this may be very well explained by the highly variable character of the HAT epidemic ., A small number of outbreaks may substantially shift the average observed prevalence level ., For example , without the four most negative prediction errors , the prediction bias for the LM model would be only -0 . 
69\/1000 ., Second , the LM model performs relatively well in terms of explained variation ., Yet , we see two vulnerabilities of this model: ( 1 ) as discussed before , this model is likely to be hampered by endogeneity , inducing a potential bias in the coefficient estimates , and ( 2 ) the variation in the screening intervals is relatively small for the villages with the highest endemicity levels in our sample , as these villages are screened almost every year ., When there is little variation in \u03b4 v - ( t ) , the true effects of variations might not become visible ., These two fundamental vulnerabilities may very well explain why ( a function of ) \u03b4 v - ( t ) has not been identified as significant for the LM model ., As a result , this model unrealistically predicts that the value of the expected prevalence level remains the same over time in the absence of screening activities , contrasting with vast historical evidence ., The same vulnerabilities apply to the FEM model , which also provides a counter-intuitive relation between the expected prevalence and \u03b4 v - ( t ) ., On top of that , its predictive power is relatively low , which could be explained by the fact that , for many villages , there is insufficient data to estimate the fixed effect accurately ., As variants of the logistic model already fix the structure of the relationship between \u03b4 v - ( t ) and f v ( S v n + \u03b4 v - ( t ) ) based on epidemiological insights , these models do not suffer from the vulnerabilities mentioned above ., We therefore consider these models to have most potential for accurately predicting HAT prevalence levels in general ( i . e . 
, in any region and for any time horizon ) ., Among the three logistic model variants , model 3 ( LMCCC ) performs reasonably well in terms of both criteria ., Model 5 ( LMVCC ) has a substantial prediction bias , but performs best in terms of explained variation , as can be seen in Fig 4 ( its performance is closest to the \u201cperfect fit\u201d ) ., Though model 4 ( rLMCCC ) performs best in terms of prediction bias , it performs very weakly in terms of explained variation ., Hence , among the logistic model variants , there is no clear winner when both criteria are assigned equal importance ., For planning decisions , however , we consider model 5 to be most suitable , followed by model 3 ., The reason is that , in contrast with prediction bias , explained variation indicates the ability to identify differences in expected prevalence levels between villages , as required for effective planning decisions ., Hence , identifying an effective prioritization of the different villages will be more important than obtaining unbiased estimates of the resulting prevalence levels ., The sensitivity level s is known to differ between regions 35 ., Furthermore , the population size of a village had to be estimated , which induces a potential bias in the participation level estimates ., These issues call into question the robustness of our results on the logistic model variants ( note that models 1 and 2 are not affected , as they do not use these parameters ) ., S3 Table shows the results of a sensitivity analysis , which largely confirm our findings ., In all scenarios analyzed , model 5 remains best in terms of explained variation , followed by model 3 , and models 3 and 4 outperform model 5 in terms of prediction bias ., Another assumption that may affect the robustness of our results is the one about the expected prevalence level at the beginning of the time horizon ( i . e .
, at 01-01-2004 ) ., S4 Table provides the results of a sensitivity analysis on this assumption ., Again our main findings remain the same ., In the previous section we argue that , among the models analyzed in this paper , variants of the logistic model have most potential for accurately predicting HAT prevalence levels in general ., In this section we demonstrate the applicability of one of these model variants to analyze the effectiveness of screening operations ., In particular , since information on the development of the carrying capacities is lacking , as required by the LMVCC model , and since we consider the predictive performance of model 3 superior to that of model 4 , we choose to use the LMCCC model as a basis for this analysis ., We do note that the theoretical results presented here also hold for model 4 and , if the carrying capacity remains constant , for model 5 also ., Our analysis will concentrate on the fixed frequency screening policy ., This policy assigns to each village a fixed time interval for consecutive screening rounds based on the village\u2019s characteristics ., As the policy is relatively easy to understand and implement , it has been the basis for guideline documents for HAT control ., For example , the WHO recommends a screening interval of one year for villages reporting at least one case in the past three years , and an interval of 3 years for villages that did not report a case in the last three years , but did report at least one case during the past five years 1 ., In the first part of this section , we mathematically analyze the impact of a fixed screening policy for a given village and investigate the screening frequency required to eradicate HAT in that village ., As mentioned in the introduction , we define that HAT is eradicated in the long term if the expected prevalence level goes to zero in the long term ., A shorter term objective is to eliminate HAT , where elimination is defined as having at most one new case per 
10000 persons per year 1 , 7 ., For example , the WHO\u2019s roadmap towards elimination of HAT states the aim to eliminate ( gambiense ) HAT as a public health problem by 2020\u2014which is defined as having less than one new case per 10000 inhabitants in at least 90% of the disease foci 1\u2014and to reach worldwide elimination by 2030 ., The second part of this section presents analytical results about the time needed to reach elimination and about the screening frequency requirements for reaching elimination within a given time frame ., As our models consider expected prevalence instead of incidence , we redefine elimination as \u201creaching an expected prevalence level of one case per 10000\u201d ., We argue that the times and efforts required to reach this elimination target are practically suitable lower bounds on the times and efforts needed to reach the WHO\u2019s targets ., First , incidence and prevalence levels are argued to be \u201ccomparable\u201d for HAT if mobile units visit afflicted areas infrequently 16 ., If mobile teams visit the areas more frequently , incidence will only become larger compared to prevalence and the prevalence level target will be easier to achieve than the incidence level target ( e . g . , under the assumption that the fraction of flies infected is proportional to the fraction of humans infected , this follows directly from the epidemic model presented by Rogers 22 ) ., Second , even if the expected prevalence level is below the defined threshold level , the intrinsic variability of the HAT epidemic may induce an actual prevalence level that exceeds this threshold ., Throughout this section , we consider an imaginary village with a constant carrying capacity K . 
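The fixed-interval setup described above can be sketched numerically: between rounds the expected prevalence grows logistically toward the carrying capacity K , and each screening round removes the detected fraction of cases ( participation times test sensitivity ) . The growth rate and all numerical values below are illustrative assumptions for the sketch , not the parameters fitted in this paper .

```python
import math

def simulate_prevalence(f0, K, r, d, tau, n_rounds):
    """Expected prevalence under a fixed-frequency screening policy.

    Between rounds, f follows logistic growth with rate r toward carrying
    capacity K (closed-form solution over an interval of length tau);
    each round removes a fraction d of cases (the case detection fraction,
    i.e., participation level times test sensitivity).
    """
    f = f0
    trajectory = [f]
    for _ in range(n_rounds):
        growth = math.exp(r * tau)
        f = K * f * growth / (K + f * (growth - 1.0))  # logistic growth
        f *= (1.0 - d)                                  # screening knockdown
        trajectory.append(f)
    return trajectory

def critical_interval(r, d):
    """Largest screening interval that still drives f toward zero.

    Linearizing near f = 0, eradication requires (1 - d) * exp(r * tau) < 1,
    i.e., tau < ln(1 / (1 - d)) / r.
    """
    return math.log(1.0 / (1.0 - d)) / r

# Illustrative growth rate, chosen so that the critical interval is about
# 15 months (1.25 years) when 55% of cases are detected, matching the
# orders of magnitude quoted in the text.
r = math.log(1.0 / (1.0 - 0.55)) / 1.25  # per year (assumed, not fitted)
print(round(critical_interval(r, 0.55), 2))  # prints 1.25 (years)
```

With these assumed values , annual screening ( tau = 1 < 1 . 25 ) drives the expected prevalence down over successive rounds , while a two-year interval lets it settle at a positive level , mirroring the threshold behavior analyzed in this section .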
( For the sake of conciseness we omit the subscript v in this section ) ., Furthermore , we assume a constant participation level pvn = p , 0 < p < 1 , and a fixed screening interval \u03c4 ., The expected prevalence level at the beginning of the time horizon is denoted by f ( 0 ) , f ( 0 ) > 0 ., Finally , recall that s , 0 < s < 1 , denotes the sensitivity level ., This paper introduces and analyzes five models for predicting HAT prevalence in a given village based on past observed prevalence levels and past screening activities in that village ., Based on the quality of prevalence level predictions in 143 villages in Kwamouth ( DRC ) , and based on the theoretical foundation underlying the models , we conclude that variants of the logistic model\u2014a model inspired by the SIS model\u2014are most practically suitable for predicting HAT prevalence levels ., Sensitivity analyses show that this conclusion is very robust when assumptions about participation levels , the sensitivity of the diagnostic test , or the initialization value of the prevalence curves are violated ., Second , we demonstrate the applicability of one variant of the logistic model to analyze the effectiveness of the fixed frequency screening policy , which assigns to each village a fixed time interval for consecutive screening rounds ., Due to the intrinsic variability of the HAT epidemic , observed prevalence levels will generally deviate significantly from predicted prevalence levels ., We strongly believe , however , that this does not render predictions worthless in the context of planning decisions ., Rather , a major contribution of our models is that they indicate the expected disease burden in different villages and can hence be applied to develop planning policies that aim to minimize the total expected disease burden for the villages considered ., Our analysis of the fixed frequency screening policy reveals that eradication of HAT is to be expected in the long term when
the screening interval is smaller than a given threshold ., This threshold strongly depends on the case detection fraction: the fraction of cases who participate in the screening rounds and are detected by the diagnostic tests ., Under current conditions , we estimate the threshold to be approximately 15 months ., This suggests that annual screening , as recommended by the WHO for endemic areas , will eventually lead to eradication ., More specifically , our model predicts that annual screening will lead to eradication if the case detection fraction exceeds 55% ., The logistic model also yields expressions for the time needed to reach the shorter-term target of eliminating HAT and for the screening interval required to eliminate HAT within a given time frame ., These suggest that it takes 10 years to eliminate HAT in a village or focus with a prevalence of 5\/1000 ( under current conditions and annual screening ) ., Furthermore , we estimate that it is only feasible to reach elimination within five years if the case detection fraction is very high\u2014roughly above 75%\u2014or if the current prevalence level is very low\u2014roughly below 1\/1000 ., We argue that these figures are practically suitable lower bounds on the time or efforts needed to reach the WHO\u2019s targets for elimination ., Our results on requirements for eradication or elimination are based on a deterministic model , which calls into question their validity in reality , where events are stochastic ., We note , however , that we model the expected behavior of a stochastic system , and hence that our results also hold in expectation for the stochastic system ., On the other hand , we acknowledge that our models are not perfect ., For example , we neglect interaction effects between neighboring villages ., It would therefore be interesting and relevant to investigate whether our results can be reproduced by a validated simulation model ., A necessary condition for the applicability of our
prediction models is that data about possibl","headings":"Introduction, Materials and Methods, Results, Discussion","abstract":"To eliminate and eradicate gambiense human African trypanosomiasis ( HAT ) , maximizing the effectiveness of active case finding is of key importance ., The progression of the epidemic is largely influenced by the planning of these operations ., This paper introduces and analyzes five models for predicting HAT prevalence in a given village based on past observed prevalence levels and past screening activities in that village ., Based on the quality of prevalence level predictions in 143 villages in Kwamouth ( DRC ) , and based on the theoretical foundation underlying the models , we consider variants of the Logistic Model\u2014a model inspired by the SIS epidemic model\u2014to be most suitable for predicting HAT prevalence levels ., Furthermore , we demonstrate the applicability of this model to predict the effects of planning policies for screening operations ., Our analysis yields an analytical expression for the screening frequency required to reach eradication ( zero prevalence ) and a simple approach for determining the frequency required to reach elimination within a given time frame ( one case per 10000 ) ., Furthermore , the model predictions suggest that annual screening is only expected to lead to eradication if at least half of the cases are detected during the screening rounds ., This paper extends knowledge on control strategies for HAT and serves as a basis for further modeling and optimization studies .","summary":"The primary strategy to fight gambiense human African trypanosomiasis ( HAT ) is to perform extensive population screening operations among endemic villages ., Since the progression of the epidemic is largely influenced by the planning of these operations , it is crucial to develop adequate models on this relation and to employ these for the development of effective planning policies ., We introduce and test five 
models that describe the expected development of the HAT prevalence in a given village based on historical information ., Next , we demonstrate the applicability of one of these models to evaluate planning policies , presenting mathematical expressions for the relationship between participation in screening rounds , sensitivity of the diagnostic test , endemicity level in the village considered , and the screening frequency required to reach eradication ( zero prevalence ) or elimination ( one case per 10000 ) within a given time-frame ., Applying these expressions to the Kwamouth health zone ( DRC ) yields estimates of the maximum screening interval that leads to eradication , the expected time to elimination , and the case detection fraction needed to reach elimination within five years ., This paper serves as a basis for further modeling and optimization studies .","keywords":"medicine and health sciences, infectious disease epidemiology, african trypanosomiasis, tropical diseases, parasitic diseases, health care, mathematics, forecasting, statistics (mathematics), screening guidelines, neglected tropical diseases, infectious disease control, research and analysis methods, public and occupational health, infectious diseases, zoonoses, epidemiology, mathematical and statistical techniques, protozoan infections, trypanosomiasis, differential equations, health care policy, physical sciences, statistical methods","toc":null} +{"Unnamed: 0":1569,"id":"journal.pcbi.1005763","year":2017,"title":"Probabilistic models for neural populations that naturally capture global coupling and criticality","sections":"We represent the response of a neural population with a binary vector s = {s1 , s2 , \u2026 , sN} \u2208 {0 , 1}N identifying which of the N neurons elicited at least one action potential ( \u20181\u2019 ) and which stayed silent ( \u20180\u2019 ) during a short time window ., Our goal is to build a model for the probability distribution of activity patterns , p, ( s 
) , given a limited number M of samples , D = { s ( 1 ) , \u2026 , s ( M ) } , observed in a typical recording session ., The regime we are mainly interested in is the one where the dimensionality of the problem is sufficiently high that the distribution p cannot be directly sampled from data , i . e . , when 2N \u226b M . Note that we are looking to infer models for the unconditional distribution over neural activity patterns ( i . e . , the population \u201cvocabulary\u201d ) , explored in a number of recent papers 8 , 9 , 11 , 13\u201318 , 24 , 34 , rather than to construct stimulus-conditional models ( i . e . , the \u201cencoding models\u201d , which have a long tradition in computational neuroscience 1\u20133 ) ., Previous approaches to modeling globally coupled populations focused on the total network activity , also known as synchrony , K ( s ) = \u2211 i = 1 N s i ., The importance of this quantity was first analyzed in the context of probabilistic models in Ref 11 where the authors showed that a K-pairwise model , which generalizes a pairwise maximum entropy model by placing constraints on the statistics of K, ( s ) , is much better at explaining the observed population responses of 100+ salamander retinal ganglion cells than a pairwise model ., Specifically , a pairwise model assumes that the covariance matrix between single neuron responses , Cij = \u2329sisj\u232a , which can be determined empirically from data D , is sufficient to estimate the probability of any population activity pattern ., In the maximum entropy framework , this probability is given by the most unstructured ( or random ) distribution that reproduces exactly the measured Cij:, p ( s ; J ) = 1 Z ( J ) exp ( \u2211 i , j = 1 N J i j s i s j ) , ( 1 ), where Z ( J ) is a normalization constant , and J is a coupling matrix which is chosen so that samples from the model have the same covariance matrix as data ., Note that because s i 2 = s i , the diagonal terms Jii of the coupling 
matrix correspond to single neuron biases , i . e . firing probabilities in the absence of spikes from other neurons ( previous work 11 used a representation si \u2208 {\u22121 , 1} for which the single neuron biases need to be included as separate parameters and where Jii are all 0 ) ., A K-pairwise model generalizes the pairwise model and has the form, p ( s ; J , \u03d5 ) = 1 Z ( J , \u03d5 ) exp ( \u2211 i , j = 1 N J i j s i s j + \u2211 k = 0 N \u03d5 k \u03b4 k , K ( s ) ) ., ( 2 ), The coupling matrix J has the same role as in a pairwise model while the additional parameters \u03d5 are chosen to match the probability distribution of K, ( s ) under the model to that estimated from data ., The \u201cpotentials\u201d \u03d5k introduced into the K-pairwise probabilistic model , Eq ( 2 ) , globally couple the population , and cannot be reduced to low-order interactions between , e . g . , pairs or triplets , of neurons , except in very special cases ., We will generically refer to probabilistic models that impose non-trivial constraints on population-level statistics ( of which the distribution of total network activity K is one particular example ) as \u201cglobally coupled\u201d models ., Here we introduce new semiparametric energy-based models that extend the notion of global coupling ., These models are defined as follows:, p ( s ; \u03b1 , V ) = e - V ( E ( s ; \u03b1 ) ) Z ( \u03b1 , V ) , ( 3 ), where E ( s; \u03b1 ) is some energy function parametrized by \u03b1 , and V is an arbitrary increasing differentiable function which we will refer to simply as the \u201cnonlinearity . 
\u201d, The parametrization of the energy function should be chosen so as to reflect local interactions among neurons ., Crucially , while it is necessary to choose a specific parametrization of the energy function , we do not make any assumptions on the shape of the nonlinearity\u2014we let the shape be determined nonparametrically from data ., Fig 1 schematically displays the relationship between the previously studied probabilistic models of population activity and two semiparametric energy-based models that we focus on in this paper , the semiparametric independent model ( which we also refer to as \u201cV ( independent ) \u201d ) and the semiparametric pairwise model ( which we also refer to as \u201cV ( pairwise ) \u201d ) ., Our motivation for introducing the global coupling via the nonlinearity V traces back to the argument made in Ref 11 for choosing to constrain the statistics of synchrony , K, ( s ) ; in short , the key intuition in earlier work has been that K, ( s ) is a biologically relevant quantity which encodes information about the global state of a population ., There are , however , many other quantities whose distributions could contain signatures of global coupling in a population ., In particular , while most energy functions\u2014e . g . 
, the pairwise energy function , E ( s; J ) = \u2212\u2211i , j Jijsisj\u2014are defined solely in terms of local interactions between small groups of neurons , the statistics of these same energy functions ( for instance , their moments ) are strongly shaped by global effects ., Specifically , we show in Methods that the role of the nonlinearity in Eq ( 3 ) is precisely to match the probability density of the energy under the model to that estimated from data ., In other words , once any energy function for Eq ( 3 ) has been chosen , the nonlinearity V will ensure that the distributions of that particular energy in the model and over data samples agree ., Constraining the statistics of the energy E ( s; \u03b1 ) is different from constraining the statistics of K, ( s ) , used in previous work ., First , the energy depends on a priori unknown parameters \u03b1 which must be learned from data ., Second , while K, ( s ) is always an integer between 0 and N , the energy can take up to 2^N distinct values; this allows for extra richness but also requires us to constrain the ( smoothed ) histogram of energy rather than the probability of every possible energy value , to prevent overfitting ., As we discuss next , the statistics of the energy are also closely related to criticality , a formal , model-free property distinguishing large , globally-coupled neural populations ., The notion of criticality originates in thermodynamics where it encompasses several different properties of systems undergoing a second-order phase transition 35 ., Today , many other phenomena , such as power-law distributed sizes of \u201cavalanches\u201d in neural activity , have been termed critical 20 ., Our definition , which we discuss below , is a restricted version of thermodynamic criticality ., We consider a sequence of probability distributions { p N } N = 1 \u221e over the responses of neural populations of increasing sizes , N . 
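The energy-matching role of the nonlinearity described above can be checked exactly in a small worked example. The sketch below is a toy model (not the authors' fitting procedure): it takes the energy to be the total activity, E ( s ) = K ( s ), so that the density of states at energy level k is simply the binomial coefficient C ( N , k ); under the model of Eq ( 3 ) the probability of an energy level is then proportional to rho ( E ) exp ( - V ( E ) ), and subtracting log-probabilities from log-counts recovers V up to an additive constant. The quadratic form of V is a hypothetical choice for illustration only.

```python
import math
import numpy as np

N = 10

def V(E):
    # Hypothetical increasing nonlinearity, chosen only for illustration
    return 0.4 * E + 0.05 * E ** 2

# With E(s) = K(s), the energy levels are the integers 0..N and the density
# of states rho(k) counts the C(N, k) patterns with exactly k spikes.
ks = np.arange(N + 1)
rho = np.array([math.comb(N, k) for k in ks], dtype=float)

# Distribution of the energy under Eq (3): p(E) proportional to rho(E) e^{-V(E)}
pE = rho * np.exp(-V(ks))
pE /= pE.sum()

# Recover the nonlinearity: log rho(E) - log p(E) equals V(E) + log Z,
# i.e. V is determined up to an additive constant
V_recovered = np.log(rho) - np.log(pE)
offset = V_recovered - V(ks)
assert np.allclose(offset, offset[0])
```

The same bookkeeping underlies the histogram constraint mentioned above: matching the (smoothed) distribution of energies is equivalent to fixing V once the energy function is chosen.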
These probability distributions define the discrete random variable s ( the population response ) , but they can also be thought of simply as functions which map a population response to a number between 0 and 1 ., Combining these two viewpoints , we can consider a real-valued random variable pN, ( s ) \u2208 ( 0 , 1 ) which is constructed by applying the function pN to the random variable s ., The behavior of this random variable as N \u2192 \u221e is often universal , meaning that some of its features are independent of the precise form of pN ., As is conventional , we work with the logarithm of pN, ( s ) instead of the actual distribution ., We call a population \u201ccritical\u201d if the standard deviation of the random variable log pN, ( s ) \/N does not vanish as the population size becomes large , i . e ., ( 1 \/ N ) \u03c3 ( log p N ( s ) ) \u219b 0 as N \u2192 \u221e ., ( 4 ), ( For completeness , we further exclude some degenerate cases such as when the probability density of log pN, ( s ) \/N converges to two equally sized delta functions ., ) The above definition is related to criticality as studied in statistical physics ., In thermodynamics , \u03c3 ( log p N ( s ) ) \/ N is proportional to the square root of the specific heat , which diverges in systems undergoing a second-order phase transition ., While at a thermodynamical critical point \u03c3 ( log pN, ( s ) ) \/N scales as N^\u2212\u03b3 with \u03b3 \u2208 ( 0 , 1\/2 ) , here we are concerned with the extreme case of \u03b3 = 0 ., Rather than being related to second-order phase transitions , this definition of criticality is related to the so-called Zipf law 23 ., A pattern s can be assigned a rank by counting how many other patterns have a higher probability ., In its original form , a probability distribution is said to satisfy Zipf law if the probability of a pattern is inversely proportional to its rank ., No real probability distribution is actually expected to satisfy this definition precisely , 
but there is a weaker form of Zipf law which concerns very large populations , and which is much less restrictive ., This weaker form can be stated as a smoothed version of the original Zipf law ., Consider patterns whose rank is in some small interval r , r + \u0394N , and denote by pN, ( r ) the average probability of these patterns ., We generalize the notion of Zipf law to mean that for very large populations pN, ( r ) \u221d r^\u22121 ( \u0394N is assumed to go to zero sufficiently quickly with N ) ., As shown in Ref 23 , a system is critical in the sense of Eq ( 4 ) precisely when it follows this generalized Zipf law ., Practically speaking , no experimentally studied population ever has an infinite size , and a typical way to check for signs of criticality is to see if a log-log plot of a pattern probability versus its rank resembles a straight line with slope \u22121 ., Most systems are not expected to be critical ., The simplest example is a population of identical and independent neurons ,, p N ( s ) = q^( \u2211 i = 1 N s i ) ( 1 - q )^( N - \u2211 i = 1 N s i ) , ( 5 ), where q is the probability of eliciting a spike ., For such a population ,, ( 1 \/ N ) \u03c3 ( log p N ( s ) ) = \u221a ( q ( 1 - q ) \/ N ) | log ( q \/ ( 1 - q ) ) | , ( 6 ), which vanishes for a very large number of neurons , and so the system is not critical ., More generally , if pN, ( s ) can be factorized into a product of probability distributions over smaller subpopulations which are independent of each other and whose number is proportional to N , then log pN, ( s ) \/N turns into an empirical average whose standard deviation is expected to vanish in the large N limit , and the population is not critical ., Reversing this argument , signatures of criticality can be interpreted as evidence that the population is globally coupled , i . e . 
that it cannot be decomposed into independent parts ., These preliminaries establish a direct link between criticality and semiparametric energy models of Eq ( 3 ) ., The nonlinearity in semiparametric energy models ensures that the statistics of the energy E ( s; \u03b1 ) , and , since V ( E ) is monotone , also the statistics of log p ( s; \u03b1 , V ) are modeled accurately ( see Methods ) ., Because the behavior of log probability is crucial for criticality , as argued above , semiparametric energy models can capture accurately and efficiently the relevant statistical structure of any system that exhibits signs of criticality and\/or global coupling ., To fully specify semiparametric energy models , we need a procedure for constructing the nonlinearity V ( E ) ., We cannot let this function be arbitrary because then the model could learn to assign nonzero probabilities only to the samples in the dataset , and hence it would overfit ., To avoid such scenarios , we will restrict ourselves to functions which are increasing ., We also require V ( E ) to be differentiable so that we can utilize its derivatives when fitting the model to data ., The class of increasing differentiable functions is very large ., It includes functions as diverse as the sigmoid , 1\/ ( 1 + exp ( \u2212E ) ) , and the square root , \u221aE ( for positive E ) , but we do not want to restrict ourselves to any such particular form\u2014we want to estimate V ( E ) nonparametrically ., Nonparametric estimation of monotone differentiable functions is a nontrivial yet very useful task ( for example , consider tracking the height of a child over time\u2014the child is highly unlikely to shrink at any given time ) ., We follow Ref 36 and restrict ourselves to the class of strictly monotone twice differentiable functions for which V\u2032\u2032\/V\u2032 is square-integrable ., Any such function can be represented in terms of a square-integrable function W and two constants \u03b31 and \u03b32 as, V ( E ) = 
\u03b3 1 + \u03b3 2 \u222b E 0 E exp ( \u222b E 0 E \u2032 W ( E \u2032 \u2032 ) d E \u2032 \u2032 ) d E \u2032 , ( 7 ), where E0 is arbitrary and sets the constants to \u03b31 = V ( E0 ) , \u03b32 = V\u2032 ( E0 ) ., The function is either everywhere increasing or everywhere decreasing ( depending on the sign of \u03b32 ) because the exponential is always positive ., Eq ( 7 ) is easier to understand by noting that V ( E ) is a solution to the differential equation V\u2032\u2032 = WV\u2032 ., This means , for example , that on any interval on which W = 0 , the equation reduces to V\u2032\u2032 = 0 , and so V ( E ) is a linear function on this interval ., If V ( E ) is increasing ( V\u2032 > 0 ) , it also shows that the sign of W at a given point determines the sign of the second derivative of V at that point ., An advantage of writing the nonlinearity in the form of Eq ( 7 ) is that we can parametrize it by expanding W in an arbitrary basis without imposing any constraints on the coefficients of the basis vectors , yet V ( E ) is still guaranteed to be monotone and smooth ., In particular , we will use piecewise-constant functions for W . This allows us to use unconstrained optimization techniques for fitting our models to data ., We start by considering one of the simplest models of the form Eq ( 3 ) , the semiparametric independent model:, p ( s ; \u03b1 , V ) = e^( - V ( - \u2211 i = 1 N \u03b1 i s i ) ) \/ Z ( \u03b1 , V ) ., ( 8 ), If V were a linear function , the model would reduce to an independent model , i . e . 
a population of independent neurons with diverse firing rates ., In general , however , V introduces interactions between the neurons that may not have a straightforward low-order representation ., When fitted to our data , the nonlinearity V turns out to be a concave function ( see later sections on more complex models for a detailed discussion of the shape of the nonlinearity ) ., Note that if V had a simple functional form such as a low-order polynomial , then the model Eq ( 8 ) would be closely related to mean field models of ferromagnetism with heterogeneous local magnetic field studied in physics ., Our first goal is to use this simple model to verify our intuition that the nonlinearity helps to capture criticality ., Many population patterns are observed several times during the course of the experiment , and so it is possible to estimate their probability simply by counting how often they occur in the data 19 ., Given this empirical distribution , we construct a corresponding Zipf plot\u2014a scatter plot of the frequency of a pattern vs . its rank ., For systems which are close to critical , this should yield a straight line with slope close to \u22121 on a log-log scale ., We repeat the same procedure with samples generated from a semiparametric independent model as well as an independent model , which were both fitted to the responses of all 160 neurons ., Fig 2 shows all three scatter plots ., The independent model vastly deviates from the empirical Zipf plot; specifically , it greatly underestimates the probabilities of the most likely states ., In contrast , the learned semiparametric independent model follows a similar trend to that observed in data ., This does not mean that the semiparametric independent model itself is an excellent model for the detailed structure in the data , but it is one of the simplest possible extensions of the trivial independent model that qualitatively captures both global coupling and the signatures of criticality ., Since 
the semiparametric independent model is able to capture the criticality of the data distribution , we also expect it to accurately model other features of the data which are related to the globally coupled nature of the population ., To verify this , Fig 3A compares the empirical probability distribution of the total activity of the population K ( s ) = \u2211i si to that predicted by the semiparametric independent model ., The match is very accurate , especially when compared to the same distribution predicted by the independent model ., This result goes hand in hand with the analysis in 39 which showed that interactions of all orders ( in our case mediated by the nonlinearity ) are necessary to model the wide-spread distribution of the total activity ., The independent model is a maximum entropy model which constrains the mean responses , \u2329si\u232a , of all neurons ., In other words , neurons sampled from the model would have the same firing rates as those in the data ( up to sampling noise ) ., Even though the semiparametric independent model is strictly more general , it does not retain this property when the parameters \u03b1 and the nonlinearity V are learned by maximizing the likelihood of data ., Fig 3B demonstrates this point: although the predicted firing rates are approximately correct , there are slight deviations ., On the other hand , the nonlinearity induces pairwise correlations between neurons which is something the independent model by construction cannot do ., Fig 3C compares these predicted pairwise correlations to their data estimates ., While there is some correlation between the predicted and observed covariances , the semiparametric independent model often underestimates the magnitude of the covariances and does not capture the fine details of their structure ( e . g . 
the largest covariance predicted by the semiparametric independent model is about 5\u00d7 smaller than the largest covariance observed in the data ) ., This is because a combination of independent terms and a single nonlinearity does not have sufficient expressive power , motivating us to look for a richer model ., One way to augment the power of the semiparametric independent model that permits a clear comparison to previous work is by means of the semiparametric pairwise model:, p ( s ; J , V ) = ( 1 \/ Z ( J , V ) ) exp ( - V ( - \u2211 i , j = 1 N J i j s i s j ) ) ., ( 9 ), We fit this model to the responses of the various subpopulations of the 160 neurons , and we compare the resulting goodness-of-fit to that of a pairwise ( Eq ( 1 ) ) , K-pairwise ( Eq ( 2 ) ) , and semiparametric independent model ( Eq ( 8 ) ) ., We measure goodness-of-fit as the improvement of the log-likelihood of data per neuron under the model relative to the pairwise model , as shown in Fig 4A ., This measure reflects differences among models rather than differences among various subpopulations ., The semiparametric pairwise model consistently outperforms the other models and this difference grows with the population size ., To make sure that this improvement is not specific to this particular experiment , we also fitted the models to two additional recordings from the salamander retina which were also collected as part of the study 11 ., One consists of 120 neurons responding to 69 repeats of a 30 second random checkerboard stimulus , and the other of 111 neurons responding to 98 repeats of a 10 second random full-field flicker stimulus ., As shown in Fig 4B , the improvements of individual models on these datasets are consistent with the ones observed for the population stimulated with a natural movie ., The advantage of using likelihood as a goodness-of-fit measure is its universal applicability , which , however , comes hand-in-hand with the difficulty of interpreting the quantitative 
likelihood differences between various models ., An alternative comparison measure that has more direct relevance to neuroscience asks about how well the activity of a single chosen neuron can be predicted from the activities of other neurons in the population ., Given any probabilistic model for the population response , we use Bayes\u2019 rule to calculate the probability of the ith neuron spiking ( si = 1 ) or being silent ( si = 0 ) conditioned on the activity of the rest of the population ( s\u2212i ) as, p ( s i | s - i ; \u03b1 ) = p ( s ; \u03b1 ) \/ ( p ( s i = 1 , s - i ; \u03b1 ) + p ( s i = 0 , s - i ; \u03b1 ) ) ., ( 10 ), We turn this probabilistic prediction into a nonrandom one by choosing whether the neuron is more likely to spike or be silent given the rest of the population , i . e ., s i ( s - i ; \u03b1 ) = argmax s i \u2208 { 0 , 1 } p ( s i | s - i ; \u03b1 ) ., ( 11 ), In Fig 4C and 4D we compare such predictive single neuron models constructed from semiparametric pairwise , K-pairwise , pairwise , and semiparametric independent models learned from the data for populations of various sizes ., Specifically , we ask how often these models would make a mistake in predicting whether a chosen single neuron has fired or not ., Every population response in our dataset corresponds to 20 ms of an experiment and so we can report this accuracy as the number of errors per unit of time ., Predictions based on the semiparametric pairwise model are consistently the most accurate ., Fig 5A shows the nonlinearities of the semiparametric pairwise models that we learned from data ., In order to compare the nonlinearities inferred from populations of various sizes , we normalize the domain of the nonlinearity as well as its range by the number of neurons ., Even though the nonlinearities could have turned out to have e . g . 
a sigmoidal shape , the general trend is that they are concave functions whose curvature\u2014and thus departure from the linear V that signifies no global coupling\u2014grows with the population size ., The shape of these nonlinearities is reproducible over different subnetworks of the same size with very little variability ., To further visualize the increasing curvature , we extrapolated what these nonlinearities might look like if the size of the population were very large ( the black curve in Fig 5A ) ., This extrapolation was done by subtracting an offset from each curve so that V ( 0 ) = 0 , and then fitting a straight line to a plot of 1\/N vs . the value of V at points uniformly spaced in the function\u2019s domain ., The plots of 1\/N vs . V are only linear for N \u2265 80 , and so we only used these points for the extrapolation which is read out as the value of the fit when 1\/N = 0 ., To quantify the increasing curvature , Fig 5B shows the average absolute value of the second derivative of V across the function\u2019s domain ., The coupling matrix J of both the pairwise and the semiparametric pairwise models describes effective interactions between neurons , and so it is interesting to ask how the couplings predicted by these two models are related ., While Fig 5C shows a strong dependency between the couplings in a network of N = 160 neurons , the dependency is not deterministic and , moreover , negative couplings tend to be amplified in the semiparametric pairwise model as compared to the pairwise model ., Similarly to the semiparametric independent model , there is no guarantee that the semiparametric pairwise model will reproduce observed pairwise correlations among neurons exactly , even though the pairwise model has this guarantee by virtue of being a maximum entropy model ., Fig 5D shows that despite the lack of such a guarantee , the semiparametric pairwise model predicts a large majority of the correlations accurately , with the possible exception of 
several very strongly correlated pairs ., This is simply because the semiparametric pairwise model is very accurate: the inset of Fig 5D shows that it can also reproduce third moments of the responses ., A K-pairwise model also has this capability but , as shown in Ref 11 , a pairwise model systematically mispredicts moments higher than second order ., Suppose we use the semiparametric pairwise model to analyze a very large population which is not globally coupled and can be divided into independent subpopulations ., The only way the model in Eq ( 9 ) can be factorized into a product of probability distributions over the subpopulations is if the function V is linear ., Therefore , the prior knowledge that the population is not globally coupled immediately implies the shape of the nonlinearity ., Similarly , prior knowledge that the population is critical also carries a lot of information about the shape of the nonlinearity ., We show in Methods that if the parameters \u03b1 are known , then the optimal nonlinearity in Eq ( 3 ) can be explicitly written as, V ( E ) = log \u03c1 \u00af ( E ; \u03b1 ) - log p \u00af ^ ( E ; \u03b1 ) , ( 12 ), where \u03c1 \u00af ( E ; \u03b1 ) is the density of states which counts the number of patterns s whose energy is within some narrow range E , E + \u0394 ., The density of states is a central quantity in statistical physics that can also be estimated for neural activity patterns either directly from data or from inferred models 19 ., Similarly , p \u00af ^ ( E ; \u03b1 ) is the empirical probability density of the energy E ( s; \u03b1 ) smoothed over the same scale \u0394 ., Eq ( 12 ) follows from the relation p \u00af ^ ( E ; \u03b1 ) \u221d \u03c1 \u00af ( E ; \u03b1 ) exp ( - V ( E ) ) , i . e . 
the probability of some energy level is just the number of states with this energy times the probability of each of these states ( see Methods ) ., We would like to establish a prior expectation on what the large N limit of the nonlinearities in Fig 5A is ., Adopting the same normalization as in the figure , we denote \u03f5 ( s; \u03b1 ) = E ( s; \u03b1 ) \/N ., Changing variables and rewriting Eq ( 12 ) in terms of the empirical probability density of the normalized energy p \u03f5 \u00af ^ ( \u03f5 ) = N p \u00af ^ ( \u03f5 N ; \u03b1 ) yields, V ( \u03f5 N ) = log \u03c1 \u00af ( \u03f5 N ; \u03b1 ) - log p \u03f5 \u00af ^ ( \u03f5 ) + log N ., ( 13 ), For a system where si can take on two states , the total number of possible activity patterns is 2^N , and so we expect the log of the density of states to be proportional to N . If the system is critical , then by virtue of Eq ( 4 ) \u03c3 ( log pN ( s ) ) is proportional to N , and similarly we also expect \u03c3 ( E ( s; \u03b1 ) ) \u221d N . This means that \u03c3 ( \u03f5 ( s; \u03b1 ) ) = \u03c3 ( E ( s; \u03b1 ) ) \/N converges to some finite , nonzero number , and therefore log p \u03f5 \u00af ^ ( \u03f5 ) also stays finite no matter how large the population is ., Taken together , for large critical populations , the first term on the right hand side of Eq ( 13 ) is the only one which scales linearly with the population size , and hence it dominates the other terms:, V ( E ) \u2248 log \u03c1 \u00af ( E ; \u03b1 ) ., ( 14 ), One of our important results is thus that for large critical populations , the nonlinearity should converge to the logarithm of the density of states of the inferred energy model ., In other words , for critical systems as defined in Eq ( 4 ) , there is a precise matching relation between the nonlinearity V ( E ) and the energy function E ( s; \u03b1 ) ; in theory this is exact as N \u2192 \u221e , but may hold approximately already at finite N . 
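The scaling argument above can be contrasted with the non-critical case of Eqs ( 5 ) and ( 6 ). A minimal Monte Carlo sketch (illustrative parameters, not the recorded populations) estimates the scaled standard deviation of log pN ( s ) for independent neurons and checks that it decays as 1 / sqrt( N ), so for such a population the left-hand side of Eq ( 4 ) does vanish:

```python
import numpy as np

def scaled_logp_std(N, q, n_samples=200_000, seed=0):
    """Monte Carlo estimate of sigma(log p_N(s)) / N for N independent
    neurons that each spike with probability q, as in Eq (5)."""
    rng = np.random.default_rng(seed)
    K = rng.binomial(N, q, size=n_samples)          # total activity K(s)
    logp = K * np.log(q) + (N - K) * np.log(1 - q)  # log of Eq (5)
    return logp.std() / N

q = 0.1  # hypothetical per-neuron spike probability
vals = [scaled_logp_std(N, q) for N in (50, 200, 800)]

# Eq (6): sigma(log p_N)/N = sqrt(q(1-q)/N) * |log(q/(1-q))|, which
# shrinks with N, so an independent population is not critical (Eq (4)).
assert vals[0] > vals[1] > vals[2]
exact = np.sqrt(q * (1 - q) / 800) * abs(np.log(q / (1 - q)))
assert abs(vals[2] - exact) / exact < 0.05
```

For a critical population, by contrast, the same scaled standard deviation would approach a nonzero constant, which is exactly the regime in which the log density of states dominates Eq ( 13 ).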
To verify that this is the case for our neural population that has previously been reported to be critical , we compare in Fig 6A the nonlinearity inferred with the semiparametric pairwise model ( Fig 5A ) to the logarithm of the density of states estimated using a Wang and Landau Monte Carlo algorithm 40 for a sequence of subpopulations of increasing size ., As the population size increases , the nonlinearity indeed approaches the regime in which our prediction in Eq ( 14 ) holds ., This convergence is further quantified in Fig 6B which shows the average squared distance between the log density of states and the nonlinearity ., The average is taken over the range of observed energies ., The nonlinearities are only specified up to an additive constant which we chose so as to minimize the squared distance between the log density of states and the nonlinearity ., The link between global coupling and criticality is related to recent theoretical suggestions 28 , 29 , where global coupling between the neurons in the population emerges as a result of shared latent ( fluctuating ) variables that simultaneously act on extensive subsets of neurons ., In particular , Ref 28 theoretically analyzed models with a multivariate continuous latent variable h distributed according to some probability density q ( h ) , whose influence on the population is described by the conditional probability distribution, p N ( s | h ) = e^( - \u2211 j h j O j ( N ) ( s ) ) \/ Z N ( h ) , ( 15 ), where ZN ( h ) is a normalization constant , and O j ( N ) ( s ) are global quantities which sum over the whole population ., The authors showed that under mild conditions on the probability density q ( h ) of h , and the scaling of O j ( N ) ( s ) with N , the sequence of models, p N ( s ) = \u222b q ( h ) p N ( s | h ) d h ( 16 ), is critical in the sense of Eq ( 4 ) ., If the latent variable is one-dimensional , i . e . 
h = h , then the models in Eq ( 16 ) have exactly the form of models in Eq ( 3 ) with E ( s; \u03b1 ) = O ( s ) , i . e . given a probability density q ( h ) of the latent variable , we can always find a nonlinearity V ( E ) such that, ( 1 \/ Z ( \u03b1 ) ) e^( - V ( E ( s ; \u03b1 ) ) ) = \u222b 0 \u221e q ( h ) e^( - h E ( s ; \u03b1 ) ) \/ Z ( h ; \u03b1 ) d h ., ( 17 ), The reverse problem of finding a latent variable for a given function V ( E ) such that this equation is satisfied does not always have a solution ., The condition for this mapping to exist is that the function exp ( \u2212V ( E ) ) is totally monotone 41 , which , among other things , requires that it is convex ., While our models allow for more general nonlinearities , we showed in Fig 5A that the inferred functions V ( E ) are concave and so we expect this mapping to be at least approximately possible ( see below ) ., The mapping in Eq ( 17 ) is based on a Laplace transformation , a technique commonly used for example in the study of differential equations ., Laplace transformations are also often used in statistical physics where they relate the partition function of a system to its density of states ., While the mathematics of Laplace transformations yields conditions on the function V ( E ) so that it is possible to map it to a latent variable ( i . e . 
, exp ( \u2212V ( E ) ) must be totally monotone ) , analytically constructing this mapping is possible only in very special cases ., We can gain a limited amount of intuition for this mapping by considering the case when the latent variable h is a narrow gaussian with mean h0 and variance \u03c32 ., For small \u03c32 , one can show that, V ( E ) \u2248 h 0 E - \u03c3 2 ( E - E 0 ) 2 , ( 18 ), where E0 is the average energy if \u03c32 = 0 , and the approximation holds only in a small neighborhood of E0 ( |E \u2212 E0| \u226a \u03c3 ) ., This approximation shows that the curvature of V ( E ) is proportional to the size of the fluctuations of the latent variable which , in turn , is expected to correlate with the amount of global coupling among neurons ., This relationship to global coupling can be understood from the right hand side of Eq ( 17 ) ., When the energy function is , for example , a weighted sum of individual neurons as in the semiparametric independent model of Eq ( 8 ) , then we can think of Eq ( 17 ) as a latent variable h ( perhaps reflecting the stimulus ) coupled to every neuron , and hence inducing a coupling between the whole population ., A non-neuroscience example is that of a scene with s representing the luminance in each pixel , and the latent h representing the lighting conditions which influence all the pixels simultaneously ., We used the right hand side of Eq ( 17 ) ( see Methods ) to infer the shapes of the probability densities of the latent variables which correspond to the nonlinearities in the semiparametric pairwise models learned from data ., These probability densities are shown in Fig 6C ., A notable difference to the formulation in Eq ( 16 ) is that the inferred latent variables scale with the population size; in particular , the inset to Fig 6C shows that the entropy of the inferred latent variable increases with the population size ., Entropy is a more appropriate measure of the \u201cbroadness\u201d of a probability density 
than standard deviation when the density is multimodal ., Taken together with the results in Fig 4A , this suggests that global coupling is especially important for larger populations ., However , it is also possi","headings":"Introduction, Results, Discussion, Methods","abstract":"Advances in multi-unit recordings pave the way for statistical modeling of activity patterns in large neural populations ., Recent studies have shown that the summed activity of all neurons strongly shapes the population response ., A separate recent finding has been that neural populations also exhibit criticality , an anomalously large dynamic range for the probabilities of different population activity patterns ., Motivated by these two observations , we introduce a class of probabilistic models which takes into account the prior knowledge that the neural population could be globally coupled and close to critical ., These models consist of an energy function which parametrizes interactions between small groups of neurons , and an arbitrary positive , strictly increasing , and twice differentiable function which maps the energy of a population pattern to its probability ., We show that:, 1 ) augmenting a pairwise Ising model with a nonlinearity yields an accurate description of the activity of retinal ganglion cells which outperforms previous models based on the summed activity of neurons;, 2 ) prior knowledge that the population is critical translates to prior expectations about the shape of the nonlinearity;, 3 ) the nonlinearity admits an interpretation in terms of a continuous latent variable globally coupling the system whose distribution we can infer from data ., Our method is independent of the underlying system\u2019s state space; hence , it can be applied to other systems such as natural scenes or amino acid sequences of proteins which are also known to exhibit criticality .","summary":"Populations of sensory neurons represent information about the outside environment in a 
collective fashion ., A salient property of this distributed neural code is criticality ., Yet most models used to date to analyze recordings from large neural populations do not take this observation explicitly into account ., Here we aim to bridge this gap by designing probabilistic models whose structure reflects the expectation that the population is close to critical ., We show that such principled approach improves previously considered models , and we demonstrate a connection between our models and the presence of continuous latent variables which is a recently proposed mechanism underlying criticality in many natural systems .","keywords":"linguistics, social sciences, random variables, neuroscience, covariance, probability distribution, mathematics, statistics (mathematics), thermodynamics, entropy, animal cells, probability density, probability theory, physics, statistical models, cellular neuroscience, cell biology, neurons, biology and life sciences, cellular types, physical sciences, computational linguistics","toc":null} +{"Unnamed: 0":2198,"id":"journal.pcbi.1003011","year":2013,"title":"Data-Driven Modeling of Src Control on the Mitochondrial Pathway of Apoptosis: Implication for Anticancer Therapy Optimization","sections":"Protein tyrosine kinases of the Src family are involved in multiple facets of cell physiology including survival , proliferation , motility and adhesion 1 ., Their deregulation has been described in numerous malignancies such as colorectal , breast , melanoma , prostate , lung or pancreatic cancers and is known to favor tumorigenesis and tumor progression 2\u20134 ., Modulation of apoptosis sensitivity by Src deregulation is more controversial ., We recently described that Src activation promotes resistance to the mitochondrial pathway of apoptosis in mouse and human cancer cell lines 5 ., The molecular mechanism underlying such resistance involved the accelerated degradation of the proapoptotic BH3-only protein Bik ., Indeed , 
in Src-transformed NIH 3T3 mouse fibroblasts , Bik was found to be phosphorylated by activated Erk1\/2 , which was followed by its subsequent polyubiquitylation and proteasomal degradation 5 ., Thus in Src-transformed cells , Bik downregulation compromised Bax activation and mitochondrial outer membrane ( MOM ) permeabilization upon an apoptotic stress 5 ., That observation might be of importance since MOM permeabilization is the key step that commits cells to apoptosis ., Indeed , MOM permeabilization leads to the irreversible release of cytochrome c and other cytotoxic molecules from the mitochondrial inter-membrane space into the cytosol 6 , 7 ., Once released , cytochrome c induces the formation of the apoptosome complex , which triggers caspase activation , these molecules being the main executioners of the apoptotic program ., MOM permeabilization is triggered by the insertion and oligomerization of the pro-apoptotic effector Bax into the membrane 8\u201311 ., Antiapoptotic proteins such as Bcl-2 or Bcl-xL prevent this process , whereas pro-apoptotic BH3-only proteins contribute to Bax activation 6 , 11\u201316 ., Using western blotting and specific shRNAs , the respective contribution of the different Bcl-2 family members to the cell response triggered by various death-inducing agents was assessed in parental and Src-transformed NIH-3T3 fibroblasts 5 ., Experimentally and mathematically investigating the cell response to death-inducing agents might be of interest since it has long been postulated that restoration of apoptosis might be an effective way to selectively kill cancer cells ., The rationale of this assumption is that cancer cells need to counteract the pro-apoptotic effect of oncogenes such as Myc or E2F-1 that stimulate cell proliferation as well 17 ., Moreover , Src deregulation has specifically been associated with resistance to treatment in a number of cancers 18 , 19 ., Therefore , a critical clinical concern lies in the design of therapeutic 
strategies that would circumvent resistance to apoptosis of cells with deregulated Src activity ., To this end , several classes of therapeutic agents might be a priori considered ., Inhibitors of Src tyrosine kinases , such as dasatinib , are currently widely used in the clinic 20\u201323 ., Other anticancer therapeutic strategies aim at restoring apoptosis in cancer cells 24 ., In particular , inhibitors of antiapoptotic proteins such as ABT-737 or the Oblimersen Bcl-2 antisense oligodeoxyribonucleotide are currently evaluated in clinical trials 25\u201328 ., Apoptosis may also be restored by increasing the expression of pro-apoptotic proteins such as Bax , Bik or p53 29\u201332 ., Here we propose a systems biology approach for optimizing potential anticancer therapeutic strategies using parental and Src-transformed NIH 3T3 fibroblasts as a biological model ., To this end , molecular mathematical models of Bik kinetics and of the mitochondrial pathway of apoptosis were built and fitted to available experimental data ., They guided further experimental investigation in parental and Src-transformed cells which allowed their refinement ., Then , those models were used to generate predictions which were validated by subsequent specifically-designed experiments ., Finally , we theoretically explored different drug combinations involving the kinase inhibitor staurosporine , Src inhibitors , and activators or inhibitors of the Bcl-2 protein family , in order to design optimal anticancer strategies for this biological system ., Optimal strategies were defined as those which maximized the efficacy on Src-transformed cells considered as cancer cells under the constraint of toxicity remaining under a tolerable threshold in parental cells ., We recently provided evidence that Bik , a BH3-only protein , is a key regulator of apoptosis in the considered biological system 5 ., Therefore we first built a mathematical model to investigate Bik kinetics in non-apoptotic conditions 
., Temporal variations in Bik concentration were assumed to result from two processes: protein formation and protein polyubiquitylation , which eventually leads to its degradation ., Let us denote and the intracellular concentration of Bik and polyubiquitylated Bik proteins respectively , expressed in nM ., Bik protein was assumed to be synthesized at a constant rate in both Src-transformed and parental cells as suggested by similar Bik mRNA levels in both cell types 5 ., Concerning Bik ubiquitylation , we considered that it occurred either spontaneously at the rate , or after Bik phosphorylation by activated Erk1\/2 downstream of SRC activation , as demonstrated in Src-transformed fibroblasts ( 5 , Figure 1 ) ., In those cells , this prior phosphorylation increased the Bik ubiquitylation rate and further proteasomal degradation ., This Src-dependent pathway was modeled by Michaelis-Menten kinetics with parameters and ., In the model , we assumed that spontaneous and Src-mediated ubiquitylation could occur in both transformed and parental cells ., Ubiquitin molecules were assumed to be in large excess compared to the Bik amount ., Therefore ubiquitin concentration was considered as constant and implicitly included in and ., Poly-ubiquitylated molecules were then assumed to be degraded by the proteasome at a constant rate in both cell types ., was arbitrarily set to 1 as it does not influence kinetics , and only acts on ., The model of Bik kinetics can be written as follows: ( 1 ) ( 2 ) Parameters were then estimated for parental and Src-transformed cells by fitting experimental results on Bik protein degradation in both cell types ( 5 , reprinted with permission in Figure 2B ) ., We assumed that the spontaneous phosphorylation occurred at the same rate in parental and Src-transformed fibroblasts and therefore looked for a unique ., Parameters of Src-dependent Bik degradation were denoted and in parental cells and and in Src-transformed 3T3 cells ., Inhibition of Src kinase 
activity by herbimycin was experimentally monitored in Src-transformed cells ( Figure 2A , reprinted with permission from 5 ) ., Herbimycin exposure achieved a decrease of 98% in phosphorylated Y416 amount ., Therefore , we modeled herbimycin exposure as a decrease of 98% in values ., See Text S1 for details on the parameter estimation procedure ., The best-fit parameter value for the spontaneous ubiquitylation was ., Src-dependent ubiquitylation was predicted to be inactive in normal fibroblasts as , which was in agreement with experimental results 5 ., On the contrary , the Src pathway was predominant in transformed cells as and which leads to ., The dynamical system 1\u20132 admits a unique steady state: where ., For parental cells , in which is equal to zero , the steady state becomes ., Bik steady-state concentration in parental cells was assumed to be equal to 50 nM which is in the physiological range of BH3-only protein intracellular levels 33\u201337 ., This allowed us to deduce ., We then computed Bik steady-state concentration in Src-transformed cells , which was equal to nM ., Thus , the simulated ratio of Bik concentration in Src-transformed cells over that in parental cells was equal to 0 . 18 which is similar to the experimentally-observed value quantified to 0 . 
2 ( Figure 3 , Table 1 ) ., This constitutes a partial validation of the model since the data of Figure 3 was not used in Bik kinetics model design and calibration ., In the following , these steady state concentrations were used as Bik initial condition since cells were assumed to be in non-apoptotic conditions prior to the death stimulus ., We then investigated Bik kinetics in parental and Src-transformed NIH-3T3 cells in response to an apoptotic stress that consisted of an 8-hour-long exposure to staurosporine ( 2 \u03bcM ) ., As demonstrated by knockdown experiments ( Figure 2b in 5 ) , Bik was required for apoptosis induction ., Bik was present in non-transformed cells with no sign of apoptosis in normal conditions , which suggested either that Bik concentration was not large enough to trigger apoptosis in these conditions , or that Bik was activated upon apoptotic stress ., The first assumption to be mathematically investigated was that Bik protein amount might increase upon staurosporine treatment as a result of the turning-off of the degradation processes , Bik synthesis rate remaining unchanged under staurosporine treatment ., Thus , if the Bik ubiquitylation process is turned off in the model , only the formation term remains in equation 1 , which is now the same for parental and transformed cells , with different initial conditions ., This equation can be solved analytically: ( 3 ) where stands for Bik initial concentration taken equal to Bik steady state concentration in parental and transformed fibroblasts ., We did not observe any significant apoptosis either in normal or Src-transformed cells in the first six hours of staurosporine treatment ( data not shown ) ., In non-transformed cells , setting t\\u200a=\\u200a360 min in equation 3 gave which meant that Bik concentration would only double in six hours if this hypothesis was right ., This was tested by measuring Bik protein level during staurosporine exposure in parental cells ., However , no significant 
increase in Bik levels upon a 6-hour-long staurosporine treatment was observed , which ruled out that the induction of apoptosis could depend on Bik accumulation ( Figure 4 A ) ., Therefore , we investigated a second hypothesis that consisted of an activation of Bik upon apoptosis induction ., Such a possibility might rely on a release of Bik from a protein complex upon apoptotic stress as observed with other BH3-only proteins such as Bad , Bim or Bmf 38 ., To investigate the likelihood of this hypothesis , we performed the immunostaining of endogenous Bik in parental NIH-3T3 cells upon staurosporine exposure ., Our data was in agreement with a relocation of Bik from its known location at the ER to the mitochondria within 2 h of treatment ( 39 , 40 , Figure 4 B ) ., This relocation might correspond to Bik release from a binding protein at the ER as previously observed 41 ., We modeled this relocation by the equations 4 and 5 in which stands for Bik protein that had been activated possibly through this relocation and represents inactive Bik molecules ., This translocation occurred at the rate ., Colocalization between Bik fluorescence and mitotracker staining showed that 45\u00b113% of Bik molecules were located at the mitochondria within 2 h of treatment which led to the estimated value ( Figure 4 B ) ., We then investigated the mitochondrial pathway of apoptosis in NIH-3T3 parental and Src-transformed cells ., We only considered the Bcl-2 members that were experimentally detected in this biological model 5 ., The only pro-apoptotic multidomain effector was Bax , whereas the multidomain antiapoptotic protein family was represented by Bcl-2 , Bcl-xL and Mcl-1 5 ., Five BH3-only proteins were present: three BH3-only activators ( i . e . able to directly bind and activate Bax ) Puma , Bim and tBid and two BH3-only sensitizers ( i . e . 
able to bind Bcl-2 and related apoptosis inhibitors , but unable to bind and activate Bax ) Bad and Bik ., The respective role of present BH3-only proteins in apoptosis induction was assessed by a shRNA-mediated approach ., Bim , which was expressed at very low level , could be neglected in the onset of apoptosis , since its downregulation induced no significant increase in apoptosis resistance upon staurosporine , thapsigargin or etoposide ., In contrast , PUMA had a prominent role for apoptosis induced by genotoxic stresses ( UV or etoposide ) but displayed no significant role in staurosporine- and thapsigargin-induced apoptosis 5 ., As we focused here on staurosporine-induced apoptosis , the only BH3-only activator that we considered was tBid ., Concerning BH3-only sensitizers , Bad could be neglected as its silencing by shRNA did not significantly modify cell response to staurosporine ., Therefore the only sensitizer to be considered was Bik ., Bax , Bik and tBid were described to bind all the antiapoptotic proteins expressed in our biological model , namely Bcl2 , Bcl-xL and Mcl-1 ., Thus , for the sake of simplicity , we denoted by the cumulative concentration of those three antiapoptotic proteins ., We then modeled Bax activation ., In non-apoptotic conditions , Bax spontaneously adopts a closed 3D-conformation that does not bind antiapoptotic proteins 10 ., This conformation was denoted ., During apoptosis , Bax transforms into an opened 3D-conformation ( ) and inserts strongly into the MOM ., molecules can be inhibited by antiapoptotic proteins which trap them into dimers ., Moreover , they may spontaneously transform back into their closed conformation 42 ., If they are not inhibited , molecules may oligomerize into molecules and create pores in the MOM which correlates with the release into the cytosol of apoptogenic factors , including cytochrome C 8\u201310 ., We considered that was inefficient at binding oligomerized Bax 7 ., In the model , Bax 
oligomerization happens either by the oligomerization of two molecules or by a much faster autocatalytic process in which a molecule recruits a molecule to create two molecules ., Those two processes occurred at the respective rates and which were chosen such that to account for the preponderance of the autocatalytic pathway ., Bax activation from into isoforms was assumed to be catalyzed by the BH3-only activator ., We assumed that this reaction occurred in a \u201ckiss and run\u201d manner and therefore follows Michaelis-Menten kinetics ., resulted from activation by truncation which occurred at the rate 43 ., BH3-only activator can also be inhibited by which trap it into complexes ., Those complexes may be dissociated by active Bik molecules which bind to and release 13 ., Finally , we also considered that antiapoptotic proteins directly inhibit active Bik molecules and associate into complexes ., The above-mentioned chemical reactions that occur spontaneously were assumed to follow the law of mass action ., All protein concentrations are expressed in nM in the mathematical model ., This mathematical model is recapitulated in Figure 5 and Table 2 ., It can be written as follows: ( 4 ) ( 5 ) ( 6 ) ( 7 ) ( 8 ) ( 9 ) ( 10 ) ( 11 ) ( 12 ) ( 13 ) ( 14 ) Bik total protein amount was assumed to be constant during apoptosis as experimentally demonstrated ( Figure 4 A ) ., We also assumed that , and total amounts remained constant following the death stimulus ., However , the apoptotic stress may induce Bax transcription and repress that of Bcl2 , in particular through the activation of p53 44 ., Four conservation laws hold: Only seven of the eleven equations of the mathematical model 4\u201314 need to be solved as the four remaining variables can be computed using those conservation laws ., We subsequently modeled the cell population behavior ., Let us denote by the percentage of surviving cells at time t ., No cell division was assumed to occur in the presence of staurosporine 
as the very first effect of most cytotoxic drugs consists of stopping the cell cycle 45 ., Natural cell death was neglected as almost no apoptosis was observed in either parental or Src-transformed cells in the absence of death stimuli 5 ., We considered that apoptosis is irreversibly activated when concentration reaches the threshold which corresponds to the minimal amount of oligomerized Bax molecules required to trigger the cytochrome C release into the cytosol ., This assumption was modeled in equation 15 by an S-shaped function which also ensures that the death rate does not grow to infinity ., Below is the equation for the percentage of surviving cells: ( 15 ) Parameters , a and were assumed to be the same for the two cell populations ., At the initial time just before the apoptotic stress , cells were assumed to be in steady state conditions ., The initial percentage of surviving cells is ., Bik initial concentrations were set to steady-state values computed using equations 1\u20132 ., Moreover , we assumed that Bik was entirely in its inactive form so that: , , ., All Bax molecules are assumed to be inactive: , and ., All existing molecules are trapped in complexes with : and ., Initial protein concentration of Bid and Bcl2 can be computed using the conservation laws: and ., For the sake of simplicity , we considered that no complexes were present at the initial time ( ) as dimers do not play any part in the overall dynamics since we assumed that they do not dissociate ., As previously stated , the considered apoptotic stress consists of an 8-hour-long exposure to staurosporine ( 2 \u03bcM ) which starts at time t\\u200a=\\u200a0 ., It triggers two molecular events: activation into and formation representing truncation into ., Mathematically , and are set to non-zero values at the initial time ., Parameters of this model of mitochondrial apoptosis were estimated by fitting experimental data in parental and Src-transformed cells from 5 and integrating biological 
results from literature ., First , we assessed quantitative molar values of considered Bcl-2 family proteins in non-apoptotic conditions as follows ., We set Bax total concentration in Src-transformed cells to 100 nM according to 46 in which the authors stated that this was a physiological level in tumor cells ., This value is also in agreement with concentration ranges found in the literature 33 , 34 , 36 , 47 , 48 ., Then , in 46 , they found that anti-apoptotic total concentration had to be 6 times higher than that of Bax in order to prevent apoptosis ., Therefore , we set in Src-transformed cells ., Concerning Bid total concentration , we assumed which is in agreement with experimental results from the literature 33 , 34 , 36 , 47 ., Finally , tBid initial concentration was set to 1 nM since this band was hardly detectable by western-blot ( Figure 3 ) ., Moreover , this value was consistent with previous modeling results 47 ., We then computed protein ratios between parental and Src-transformed cells using immunoblotting data of Figure 3 ., We experimentally determined that there was a 9-fold higher amount of proteins in the cytosol fraction compared to the mitochondria compartment which allowed us to compute protein ratios of total intracellular quantities ( Table 1 ) ., As previously stated , Bik protein amount was reduced in Src-transformed cells by a factor of 0 . 2 compared to parental fibroblasts ( Figure 3 ) ., This dramatic decrease was the result of the Src-dependent activation of Erk1\/2 kinases , leading to Bik phosphorylation , polyubiquitylation and subsequent degradation by the proteasome 5 ., Bax steady-state level in non-apoptotic conditions was increased by a factor of 2 . 1 in Src-transformed cells compared to normal ones and that of Bid was decreased by a factor of 0 . 77 ., Concerning antiapoptotic molecules , the sum of Bcl2 , Bcl-xL and Mcl-1 quantities was slightly increased in Src-transformed cells by a factor of 1 . 
1 compared to parental ones ., Those protein ratios were used to compute molar quantities of the considered Bcl-2 family protein total concentrations ( Table 1 ) ., Then , we estimated the apoptotic threshold as follows ., Quantification of Figure S1d in 5 showed that 38% of BAX molecules at the mitochondria were activated during apoptosis ., Previously-described quantification of Figure 3 showed that 33% of BAX total amount were located at the mitochondria , the remaining part being in the cytosol ., Therefore , the percentage of activated BAX was set to 33% * 38%\/100 \u2248 13% ., This percentage is in agreement with previous experimental data which suggests that approximately 10\u201320% of Bax total amount is actually activated during apoptosis 46 ., The high intensity of the bands corresponding to Bcl-xL expression in Figure 3 suggested that it might be the predominant antiapoptotic protein in our biological model ., The dissociation constants between Bcl-xL and Bik , Bid and Bax , respectively , were experimentally found to be equal to nM , nM and nM 49 , 50 ., Therefore , we set and and only estimated ., At this point , 10 kinetic parameters still needed to be estimated ., In order to determine those 10 parameters , we fitted experimental data from Figure 6 under constraints inferred from experimental results ., We used the three experimental data points of Figure 6 corresponding to exposure to staurosporine as a single agent or combined with herbimycin ., We modeled the administration of staurosporine after an exposure to the Src tyrosine kinase inhibitor herbimycin as follows ., We assumed that herbimycin was administered before staurosporine exposure such that the system had time to reach steady state ., As previously described , herbimycin exposure was modeled by decreasing ( the maximal velocity of Src-induced Bik ubiquitylation ) by 98% of its original value ., Then , we set constraints on state variables as follows ., We assumed that did not decrease below 20% ( i . e . 
the apoptotic threshold ) of its initial value within 6 h of staurosporine exposure as approximately 20% of Bax total quantity is activated during apoptosis 9 , 46 ., Moreover , we ensured that reached the apoptotic threshold in parental cells after 6 to 8 h of exposure to staurosporine as biological experiments showed ., Moreover , as Bax oligomerization was assumed to be an autocatalytic process , we expected to obtain ., Therefore , in the parameter estimation procedure , we set initial search values for and such that ., Finally , molecule association rates were searched between and , which is a realistic range with respect to the diffusion limit 48 ., Estimated parameter values are shown in Table 2 ., The data-fitted mathematical model allowed the investigation of the dynamical molecular response to staurosporine exposure ( Figure 7 ) ., As expected , the higher Bik concentration in normal fibroblasts led to a higher concentration of and of free compared to transformed cells ., molecules then activated into which oligomerized until reaching the apoptotic threshold in parental cells ., On the contrary , could efficiently be sequestered in complexes with antiapoptotic proteins in Src-transformed cells as a result of the lower level of Bik protein ., This perfectly fit the described function of Bik as a sensitizer 51 ., Concerning co-administration of staurosporine and herbimycin , the model predicted that this drug combination circumvents the resistance of the cancer cell population in which 99% of cells are apoptotic after 8 hours of exposure to staurosporine ( Figure 6 ) ., This model behavior was in agreement with experimental data which showed 98% of apoptotic cells in the Src-transformed population ., Moreover , the model predicted that an exposure to staurosporine as a single agent or combined with herbimycin led to the same activity of 80% of apoptotic cells in the parental fibroblasts population ., We intended to determine optimal therapeutic strategies for 
our particular biological system in which parental and Src-transformed NIH-3T3 fibroblasts stand for healthy and cancer cells respectively ., In the following , both cell populations are exposed simultaneously to the same drugs , mimicking the in vivo situation in which healthy and tumor tissues are a priori exposed to the same blood concentrations of chemotherapy agents ., From a numerical point of view , identical parameter changes were applied to normal and cancer cells ., First , we investigated the combination of staurosporine with ABT-737 , a competitive inhibitor of Bcl-2 and Bcl-xL that were the main antiapoptotic proteins in our cellular model ., ABT-737 inhibits free antiapoptotic proteins but also dissociates complexes of anti- and pro-apoptotic proteins ., As for herbimycin , we assumed that ABT-737 was administered before staurosporine such that the system had time to reach steady state ., ABT-737 pre-incubation was thus modeled by decreasing Bcl-2 total amount in proportion to ABT-737 concentration and by setting and at the initial time ., Interestingly , ABT-737 exposure in the absence of staurosporine ( i . e . 
) did not result in cell death induction for any dose of ABT-737 in the mathematical model , as experimentally demonstrated 5 ., Indeed , in the absence of staurosporine , Bid was not activated into tBid and the low quantity of tBid present in the cells at steady state was not sufficient to trigger Bax oligomerization , even when ABT-737 inhibited all anti-apoptotic proteins ., This confirmed that the mathematical model correctly described this cell model , which does not behave as a \u201cprimed for death model\u201d in which inhibition of anti-death proteins results in death , even in the absence of apoptosis induction ., As a reminder , in the primed for death situation , incubation with ABT-737 led to cell death as a consequence of the release of the BH3-only pro-apoptotic proteins that were therefore able to activate Bax ., The main difference between the primed for death situation and our model is that apoptosis resistance in the primed for death model primarily comes from the overexpression of anti-apoptotic proteins such as Bcl-xL or Bcl2 that are efficiently inhibited by ABT-737 whereas here it comes from the decrease of a pro-death protein in the Src-transformed model ., The combination of staurosporine and ABT-737 at any concentration , i . e . 
for any decrease in Bcl2 total protein amount , was predicted by the model to induce much more apoptosis in parental cells compared to Src-transformed cells and thus to fail in circumventing cancer cell resistance ( Figure 8 ) ., To experimentally confirm this model prediction , we pre-incubated parental and Src-transformed cells with ABT-737 prior to staurosporine exposure ., The resulting death-inducing effect on Src-transformed cells was significantly increased compared to staurosporine alone ( Figure 6 ) ., However , as anticipated by the model , this drug combination resulted in an extremely high toxicity of 99% of apoptotic cells in the parental fibroblasts population ( Figure 6 ) ., Those data points were reproduced by the calibrated mathematical model with a predicted decrease of 182 nM in Bcl2 total concentrations in both cell types ., After that , we looked for theoretically optimal therapeutic strategies by applying optimization procedures on the calibrated model of mitochondrial apoptosis ., Optimal strategies were defined as those which maximized efficacy in cancer cells under the toxicity constraint that less than 1% of healthy cells die during drug exposure ., We investigated drug combinations that consisted of the exposure to staurosporine after treatment with Src inhibitors , or up- or down-regulators of BCL-2 family proteins ., Pre-incubation with inhibitors or activators aimed at modifying the equilibrium of the biological system before exposure to the cytotoxic drug ., Src inhibition was simulated by a decrease in value whereas up- or down-regulation of Bcl-2 family proteins was modeled by modifying the total concentration of the targeted proteins ., The theoretically-optimal drug combination would consist of administering staurosporine combined with inhibitors of Src , Bax and Bcl2 , together with an upregulator ., The concentration of the Bax inhibitor should be set such that Bax total concentration decreases below the apoptotic threshold in 
healthy cells thus protecting them from apoptosis ., As Bax total amount was higher in cancer cells , it would remain high enough to allow these cells to undergo apoptosis ., Once healthy cells are sheltered from apoptosis , Bcl2 amount could be decreased , using for instance ABT-737 , and amount increased at the same time without risking any severe toxicity ., As expected , the optimal therapeutic strategy also included the suppression of the Src-dependent phosphorylation of Bik in cancer cells , using for instance herbimycin ., This drug combination led to 99% of apoptotic cells in the cancer cell population and less than 1% in the parental one where Bax was hardly present ( Figure 9 , Text S1 ) ., This theoretically optimal strategy involved the administration of a cytotoxic agent combined with four other chemicals , which may not be realistic from the perspective of clinical application ., Therefore we hierarchically ranked the considered therapeutic agents by searching for optimal strategies consisting of the combination of staurosporine with only one or two agents ., Strategies which satisfied the tolerability constraint ( i . e . 
less than 1% of apoptotic parental cells ) and reached an efficacy value of 99% of apoptotic cells all involved Bax downregulation in addition to a second agent among Bcl2 downregulator , upregulator and Src inhibitor ( See Text S1 for more details ) ., Of note , an isolated decrease of Bax total amount fulfilled the tolerability constraint but resulted in less than 1% of apoptotic cancer cells ., Finally , we experimentally validated the feasibility of this counterintuitive theoretical strategy ., We selected two siRNAs that fully downregulated Bax in parental cells but not in Src-transformed ones ( Figure 10 A ) ., Bax knockdown protected parental cells from treatment by staurosporine and ABT737 or staurosporine and herbimycin but not Src-transformed cells ( Figure 10 B ) ., Therefore by downregulating Bax in our biological model , we were capable of selectively killing Src-transformed cells ., A combined mathematical and experimental approach was undertaken to study the mitochondrial pathway of apoptosis in parental and Src-transformed NIH-3T3 cells ., First , a mathematical model for Bik kinetics in normal and apoptotic conditions was built ., It took into account Bik ubiquitylation and further proteasomal degradation that Src-dependent Bik phosphorylation stimulated in Src-transformed cells ., Then , we designed a mathematical model of the mitochondrial pathway of apoptosis which only involved the proteins that participated in apoptosis induction in the studied biological model ., Interestingly , this mathematical model was quite simple , with only one effector , Bax , two BH3-only proteins , Bik ( a sensitizer ) and tBid ( a direct Bax activator ) , and a pool of antiapoptotic proteins which were all described as behaving identically toward Bax , Bik and tBid 38 ., Several published works propose mathematical modeling of apoptosis ., Some of them model all pathways to apoptosis from the death stimulus to the actual cell death 52\u201354 , others focus on the caspase 
cascade leading to apoptosis 55 ., Molecular modeling of the mitochondrial pathway was achieved in several works 47 , 48 , 56\u201359 ., As those models were conceived to address other biological issues , we had to build a new mathematical model that was tailored to our particular problem and aimed at optimizing anticancer therapies in the specific case of Src transformation ., Exploring Bik kinetics upon apoptosis induction led to the interesting prediction that the inhibition of Bik degradation might not allow its accumulation above a threshold that would induce apoptosis in the experimentally-demonstrated time range ., This was validated by immunoblotting that established that Bik concentration was not changed upon apoptosis induction by staurosporine ., Therefore , we looked for another explanation that might support these observat","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Src tyrosine kinases are deregulated in numerous cancers and may favor tumorigenesis and tumor progression ., We previously described that Src activation in NIH-3T3 mouse fibroblasts promoted cell resistance to apoptosis ., Indeed , Src was found to accelerate the degradation of the pro-apoptotic BH3-only protein Bik and compromised Bax activation as well as subsequent mitochondrial outer membrane permeabilization ., The present study undertook a systems biomedicine approach to design optimal anticancer therapeutic strategies using Src-transformed and parental fibroblasts as a biological model ., First , a mathematical model of Bik kinetics was designed and fitted to biological data ., It guided further experimental investigation that showed that Bik total amount remained constant during staurosporine exposure , and suggested that Bik protein might undergo activation to induce apoptosis ., Then , a mathematical model of the mitochondrial pathway of apoptosis was designed and fitted to experimental results ., It showed that Src inhibitors could 
circumvent resistance to apoptosis in Src-transformed cells but gave no specific advantage to parental cells ., In addition , it predicted that inhibitors of Bcl-2 antiapoptotic proteins such as ABT-737 should not be used in this biological system in which apoptosis resistance relied on the deficiency of an apoptosis accelerator but not on the overexpression of an apoptosis inhibitor , which was experimentally verified ., Finally , we designed theoretically optimal therapeutic strategies using the data-calibrated model ., All of them relied on the observed Bax overexpression in Src-transformed cells compared to parental fibroblasts ., Indeed , they all involved Bax downregulation such that Bax levels would still be high enough to induce apoptosis in Src-transformed cells but not in parental ones ., Efficacy of this counterintuitive therapeutic strategy was further experimentally validated ., Thus , the use of Bax inhibitors might be an unexpected way to specifically target cancer cells with deregulated Src tyrosine kinase activity .","summary":"Personalizing medicine on a molecular basis has proven its clinical benefits ., The molecular study of the patients tumor and healthy tissues allowed the identification of determinant mutations and the subsequent optimization of healthy and cancer cells specific response to treatments ., Here , we propose a combined mathematical and experimental approach for the design of optimal therapeutics strategies tailored to the patient molecular profile ., As an in vitro proof of concept , we used parental and Src-transformed NIH-3T3 fibroblasts as a biological model ., Experimental study at a molecular level of those two cell populations demonstrated differences in the gene expression of key-controllers of the mitochondrial pathway of apoptosis thus suggesting potential therapeutic targets ., Molecular mathematical models were built and fitted to existing experimental data ., They guided further experimental investigation of the 
kinetics of the mitochondrial pathway of apoptosis which allowed their refinement ., Finally , optimization procedures were applied to those data-calibrated models to determine theoretically optimal therapeutic strategies that would maximize the anticancer efficacy on Src-transformed cells under the constraint of a maximal allowed toxicity on parental cells .","keywords":"oncology, systems biology, cell death, medicine, mathematics, theoretical biology, applied mathematics, cancer treatment, chemotherapy and drug treatment, signaling networks, biology, molecular cell biology, computational biology","toc":null} +{"Unnamed: 0":0,"id":"journal.pcbi.1000510","year":2009,"title":"Predicting the Evolution of Sex on Complex Fitness Landscapes","sections":"Sexual reproduction is widespread among multi-cellular organisms 1 ., However , the ubiquity of sex in the natural world is in stark contrast to its perceived costs , such as the recombination load 2 or the two-fold cost of producing males 3 , 4 ., Given these disadvantages it is puzzling that sexual reproduction has evolved and is maintained so commonly in nature ., The \u201cparadox of sex\u201d has been one of the central questions in evolutionary biology and a large number of theories have been proposed to explain the evolution and maintenance of sexual reproduction 5 ., Currently , the most prominent theories include, ( i ) the Hill-Robertson effect 6\u20138 ,, ( ii ) Mullers ratchet 9 ,, ( iii ) the Red Queen hypothesis 10 , 11 , and, ( iv ) the Mutational Deterministic hypothesis 12 , 13 ., While originally described in various different ways , the underlying benefit of sex can always be related to the role of recombination in breaking up detrimental statistical associations between alleles at different loci in the genome ., What fundamentally differentiates the theories is the proposed cause of these statistical associations , assigned to either the interactions between drift and selection ( Fisher-Muller effect 
, Mullers ratchet , and Hill-Robertson effect ) or gene interactions and epistatic effects ( Red Queen hypothesis and Mutational Deterministic hypothesis ) ., The present list of hypotheses is certainly not exhaustive , with new ones continuously being proposed , complementing or replacing the existing ones 14 ., However , it is not new hypotheses that are most needed , but the real-world evidence that allows us to distinguish between them ., The major question that still remains is whether the assumptions and requirements of different theories are fulfilled in the natural world ., Accordingly , there has been considerable effort to experimentally test these assumptions , mainly for the epistasis-based theories ( reviewed in 15\u201317 ) ., However , an even more basic and crucial problem underlies all work on evolution of sex: how does one choose , measure , and interpret appropriate population properties that relate to different theories 17\u201319 ., The difficulty stems from the often large divide between the theoretical and experimental research: theories are frequently formulated as mathematical models and rely on simplistic fitness landscapes or small genome size ( e . g . 
two locus , two allele models ) 13 , 20\u201325 ., As a result , it may be unclear how a property established based on these simplified assumptions relates to actual properties of natural populations ., In this study we attempt to bridge the gap between the theoretical and experimental work and to identify which population measures are predictive of the evolution of sexual reproduction by simulating the evolution of both sexual and asexual populations on fitness landscapes with different degrees of complexity and epistasis ., The measures we use are the change of mean fitness , of additive genetic variance , or of variance in Hamming distance as well as four epistasis-based measures , physiological , population , mean pairwise , and weighted mean pairwise epistasis ., While this certainly is not an exhaustive list , we took care to include major quantities previously considered in theoretical and experimental literature ( e . g . 26\u201328 ) ., With some exceptions 29\u201332 , earlier work generally focused on the smooth , single peaked landscapes , while here we also use random landscapes and NK landscapes ( random landscapes with tunable ruggedness ) ., Some studies of more complex rugged landscapes tested whether they would select for sex but have not found a simple and unique answer , even in models with only two-dimensional epistasis 33 , 34 ., A recent paper , which uniquely combines experimental and theoretical approaches and simulates evolution of sex on empirical landscapes , also finds that landscape properties greatly affect the outcome of evolution , sometimes selecting for but more often against sex 35 ., However , what specifically distinguishes our study is the goal of not only determining when sex evolves but also of quantifying our ability to detect and predict such outcome in scenarios where we know how the evolution proceeds ., Whether the more complex landscapes we are using here are indeed also more biologically realistic is open to debate as 
currently little is known about the shape and the properties of real fitness landscapes ( for an exception see for example 35 , 36 ) ., Our goal is to move the research focus away from the simple landscapes mostly investigated so far to landscapes with various higher degrees of complexity and epistasis , and to probe our general understanding of the evolution of sexual reproduction on more complex fitness landscapes ., Notably , we find that some of the measures routinely used in the evolution of sex literature perform poorly at predicting whether sex evolves on complex landscapes ., Moreover , we find that genetic neutrality lowers the predictive power of those measures that are typically robust across different landscape types , but not of those measures that perform well only on simple landscapes ., The difficulty of predicting sex even under the ideal conditions of computer simulations , where in principle any detail of a population can be measured with perfect accuracy , may be somewhat sobering for experimentalists working on the evolution of sex ., We hope , however , that this study will evoke interest among theoreticians to tackle the challenge and develop more reliable predictors of sex that experimentalists can use to study the evolution of sex in natural populations ., We investigated the evolution of sex in simulations on three types of fitness landscapes with varying complexity ( smooth , random and NK landscapes ) and used seven population genetic quantities ( \u0394VarHD , \u0394Varadd , \u0394Meanfit , Ephys , Epop , EMP , and EWP , Table, 1 ) as predictors of change in frequency of the recombination allele ( see Methods for more details ) ., We calculated predictor accuracy ( the sum of true positives and true negatives divided by the total number of tests ) and used it to assess their quality on 110 smooth landscapes with varying selection coefficients and epistasis , 100 random landscapes , and 100 NK landscapes each for K\\u200a=\\u200a0 , 
\u2026 , 5 ., All landscapes are based on 6 biallelic loci and they were generated such that an equal number of landscapes of each type select for versus against sex in deterministic simulations with infinite population size ., Hence , random prediction by coin flipping is expected to have an accuracy of 0 . 5 ., Figure 1 shows the accuracy of the predictors for the different landscape types ., Increasing levels of blue indicate greater accuracy of prediction ., For the simulations with infinite population size ( deterministic simulations ) we ran a single competition between sexual and asexual populations to assess whether sex was selected for ., For simulations with finite population size ( stochastic simulations ) , we ran 100 simulations of the competition phase and assessed whether the predictor accurately predicts the evolution of sex in the majority of these simulations ., Focusing on the top left panel we find that for deterministic simulations most predictors are only highly accurate in predicting evolutionary outcomes for the smooth landscapes ., The exception is the poor performance of \u0394Meanfit , which is not surprising , as theory has shown that for populations in mutation-selection balance \u0394Meanfit is typically negative 2 ., According to our use of \u0394Meanfit as a predictor , it always predicts no selection for sex when negative and thus is correct in 50% of cases , due to the way the landscapes were constructed ., For the NK0 landscapes , all predictors perform poorly , because such NK landscapes have no epistasis by definition ( see Methods ) ., For infinite population size , theory has established that in absence of epistasis there is no selection for or against sex ., Indeed , in our simulations the increase or decrease in the frequency of sexual individuals is generally so small ( of order 10\u221215 and smaller ) that any change in frequency can be attributed to issues of numerical precision ., Generally , the accuracy of most 
predictors is much weaker for complex landscapes ( NK and random landscapes ) than for the simpler , smooth landscapes ., The predictors that have highest accuracy across different landscape types are \u0394VarHD and Epop ., To test whether combinations of the predictors could increase the accuracy of prediction of the evolution of sex we plot for each landscape the value of the predictors \u0394VarHD , \u0394Varadd and \u0394Meanfit against each other and color code whether the number of sexual individuals increased ( red ) or decreased ( blue ) during deterministic competition phase ( see Figure 2 ) ., If the blue and red points are best separated by a vertical or a horizontal line , then we conclude that little can be gained by combining two predictors ., If , however , the points can be separated by a different linear ( or more complex ) function of the two predictors , then combining these predictors would indeed lead to an improved prediction ., Figure 2 shows the corresponding plots for the smooth , the random , and the NK2 landscapes ., For the smooth landscapes the criterion \u0394VarHD>0 or \u0394Varadd>0 are both equally good in separating cases where sex evolved from those where it did not ., As already shown in Figure 1 , \u0394VarHD is generally a more reliable predictor of the evolution of sex than \u0394Varadd in the more complex random or NK landscapes ., Epistasis-based theories suggest that the selection for sex is related to a detrimental short-term effect ( reduction in mean fitness ) and a possibly beneficial long-term effect ( increase in additive genetic variance ) 28 ., The plots of \u0394Varadd against \u0394Meanfit , however , do not indicate that combining them would allow a more reliable prediction of the evolution of sex ., Generally , the plots show that blue and red points either tend to overlap ( in the more complex landscapes ) or can be well separated using horizontal or vertical lines ( in the smooth landscapes ) such that 
combining predictors will not allow a substantial increase in the accuracy of prediction ., This is also the case for all other landscapes and all other pairwise combinations of predictors ( data not shown ) ., It is possible that some of the effects described in 28 and expected here are too small to be detected with the level of replication in our study ., However , as the level of replication used in this computational study goes way beyond what can be realistically achieved in experimental settings we expect that these effects would also not be detected in experimental studies ., We also used a linear and quadratic discriminant analysis to construct functions to predict the outcome of competitions between the two modes of reproduction ., For these purposes , half of the data set was used for training and the other half for testing of the discriminant functions , and the procedure was repeated separately for each of the three population sizes ( 1 , 000 , 10 , 000 , and 100 , 000 ) and the deterministic case ., In no case did these methods improve the accuracy of predictions ( data not shown ) ., While there certainly are other , potentially more sophisticated techniques that could be used here , our analysis indicates that there may not be much additional information in our metrics that could be extracted and used to increase the accuracy of the predictions ., All predictors performed much worse for simulations with finite population size ( Figure 1 ) , most likely because the selection coefficient for sex is weak 19 , 20 ., To further examine the effect of finite population size on the evolution of sex on different landscape types we analyzed 100 independent simulations of the competition phase starting from the genotype frequencies obtained from the burn-in phase on each landscape ., Figure 3 shows the fraction of cases in which the frequency of sexual individuals increased for three population sizes ( 1 , 000 , 10 , 000 , and 100 , 000 ) , plotted separately 
for those landscapes in which frequency of the recombination modifier increased or decreased in deterministic simulations ., For almost all landscapes the fraction of cases in which sex evolves is close to 50% , indicating that selection for sexual reproduction is indeed extremely weak , and can thus easily be overwhelmed by stochastic effects ( in contrast to simulations with infinite populations where selection coefficients of any size will always produce a consistent observable effect ) ., As a consequence , even for relatively large population sizes the outcome of the competition between sexual and asexual populations is largely determined by drift ., Such weak selection may in part be due to the small number of loci used for these simulations and stochastic simulations with larger genomes have indeed been shown to result in stronger selection for or against sex 37 , 38 ., However , accurate deterministic simulations are computationally not feasible for large genome sizes , because of the need to account for the frequency of all possible genotypes in deterministic simulations ( see Supporting Information ( Text S1 ) for more details ) ., According to the Hill-Robertson effect ( HRE ) 8 , 21 selection for recombination or sex may be stronger in populations of limited size , because in such populations the interplay between drift and selection can generate negative linkage disequilibria , which in turn select for increased sexual reproduction ., The strength of HRE vanishes for very small populations and for populations of infinite size 21 ., In an intermediate range of population sizes , the HRE increases with increasing number of loci ( as does the range of population sizes in which the effect can be observed ) 38 and for large genome size it can be strong enough to override the effect of weak epistasis 37 ., In our simulations , however , HRE is weak , as is evidenced by the fact that , in the NK0 landscapes , which by definition have no epistasis , the fraction 
of runs in which sex evolves is only very marginally above 50% ( Figure 3 ) ., Our results indicate that for finite population size the predictors generally perform poorly ., Of course this does not imply that they could not be better than a simple coin toss ., However , the results suggest that these predictors will likely be of limited use , as any experiment will have difficulties to reach even the replicate number that we have used to generate Figure 1 ., We also examined additional fitness landscapes , characterized by increased neutrality ( for full details and figures see Text S1 ) ., We found that the allelic diversity at neutral loci both decreases the accuracy and generates a systematic bias in the previously best performing predictors , Epop and \u0394VarHD ., In contrast , other predictors investigated here , \u0394Varadd , \u0394Meanfit , Ephys , EMP , and EWP are not affected by including neutral loci , but still have poor accuracy of prediction on more complex fitness landscapes ., The central message of our study is that the prediction of the evolution of sex is difficult for complex fitness landscapes , even in the idealized world of computer simulations where in principle one can measure any detail of a given population and fitness landscape ., Here we put the emphasis on predictors that are experimentally measurable and are based on conditions for the evolution of sex established in the population genetic literature using simple fitness landscapes ., We have however included EMP and EWP , two predictors which would be more difficult to measure experimentally , but are based on the most fundamental and general theoretical treatment of the evolution of sex 28 ., Of course , while our choice of predictors , landscapes and selection regimes is comprehensive , we are aware that it can never be exhaustive or complete \u2013 there will always be other options to try out and test ., Future work will have to focus on identifying more reliable predictors 
of the evolution of sex that can be used in conjunction with experimental data ., Additionally , a better characterization of properties of natural fitness landscapes is badly needed to improve our understanding of the forces selecting for the evolution of sex ., As it stands , \u0394VarHD , our best candidate for a predictor of the evolution of sex , has nevertheless important shortcomings ., In particular , it never reaches high levels of accuracy on many of the landscapes ., Still , \u0394VarHD at least suggests a potential direction for future research: a focus on predictors that would take advantage of the rapidly increasing number of fully or partially sequenced genomes and allow us to determine the advantage of sex in large numbers of taxa , bringing us closer to fully understanding the evolution of sex ., All simulations of the evolution of a haploid population on a given fitness landscape are divided into a \u201cburn-in\u201d and a \u201ccompetition\u201d phase ., In the burn-in phase an asexually reproducing population is allowed to equilibrate on the landscape starting from random initial genotype frequencies ., In the competition phase we determine whether the frequency of an allele coding for increased recombination increases in the population ., The burn-in phase consists of repeated cycles of mutation and selection ., Genotype frequencies after selection are given by the product of their frequency and relative fitness before selection ., In all simulations mutations occur independently at each locus with a mutation rate \u03bc\\u200a=\\u200a0 . 
01 per replication cycle ., This high mutation rate was chosen in order to obtain sufficient levels of genetic diversity ., However , we also tested mutation rates up to 10 times lower and found no qualitative differences in the results ( data not shown ) ., In the competition phase the population undergoes recombination in addition to mutation and selection in each reproduction cycle ., To this end a recombination modifier locus is added to one end of the genome , with two alleles m and M , each present in exactly half of the population ., Recombination between two genotypes depends on the modifier allele in both genotypes , with the corresponding recombination rates denoted by rmm , rmM , and rMM ., For the simulations discussed in the main text we used rmm\\u200a=\\u200armM\\u200a=\\u200a0 and rMM\\u200a=\\u200a0 . 1 ., For this parameter choice individuals carrying distinct modifier alleles cannot exchange genetic material and thus any effect of increased recombination remains linked to the M allele ., Sexual and asexual individuals compete directly with each other , and we refer to this scenario as the evolution of sex ., In contrast , if rmm3-fold faster in activated rabbit psoas fibers than it does in the same fibers when they are inactive 7 ., One possible explanation for this effect is that the properties of molecules within each half-sarcomere change when a muscle is stretched while it is activated ., For example , titin filaments could become stiffer , or the cross-bridge populations could fail to reach steady-state during a prolonged movement ., Both of these effects could potentially reflect force-dependent protein-protein interactions 8 ., A second possible explanation is that the half-sarcomeres continue to operate as they did before the stretch and that the measured experimental behavior is an emergent property of a collection of heterogeneous half-sarcomeres ., These explanations are not mutually exclusive so it is also possible that both effects 
contribute to the activation dependence of the latter stages of the stretch response ., An argument against variable titin properties being the sole explanation is that the magnitude of the Ca2+-dependent stiffening required to explain the behavior observed in psoas fibers ( \u223c300% increase in titin stiffness ) is much larger than that ( \u223c30% increase in stiffness ) observed in experiments that have specifically investigated titins Ca2+-sensitivity 9 ., The idea that the activation-dependence of the latter stages of the stretch response could reflect emergent behavior of a collection of half-sarcomeres might be inferred from a number of previous reports 10\u201312 but it does not seem to have been explicitly stated or analyzed in quantitative detail before ., This paper presents a mathematical model that was developed to investigate the potential emergence of new mechanical behavior in a system composed of multiple half-sarcomeres ., Detailed computer simulations show that the model can reproduce the activation dependence of the latter stages of the stretch response without requiring that titin filaments stiffen when the Ca2+ concentration rises ., The stretch response of a fast mammalian muscle fiber may therefore be an irreducible property of the complete cell ., Fig 1 shows experimental force records for a chemically permeabilized rabbit psoas fiber subjected to a ramp lengthening followed by a ramp shortening in four different pCa solutions ., The rate at which force rose during the latter stages of the stretch increased with the level of Ca2+ activation ., Data from 5 fibers showed that the slope ( estimated by linear regression ) of the tension response during the last one-third of the stretch was 3 . 26\u00b10 . 87 ( SD ) times greater ( t-test for value greater than unity , p<0 . 001 ) in pCa ( =\\u200a\u2212log10Ca2+ ) 4 . 5 solution ( maximal Ca2+ activation ) than it was in pCa 9 . 
0 solution ( minimal Ca2+ activation ) ., As discussed in the Introduction , the increased slope in the pCa 4 . 5 condition is not consistent with the expected behavior of a single population of cycling cross-bridges arranged in parallel with an elastic component that has properties that are independent of the level of activation ., Computer simulations were performed to test the hypothesis that the activation dependence of the latter stages of the force response may be an emergent property of a collection of half-sarcomeres ., The model is summarized in Fig 2 and explained in detail in Materials and Methods ., Parameters defining the passive mechanical properties of the half-sarcomeres ( Table 1 , Column, 3 ) were determined by fitting Eq 8 to an experimental record measured in pCa 9 . 0 solution ., Multidimensional optimization procedures were then used to adjust the other parameters defining the models behavior in an attempt to fit the simulated force response to the experimental record measured in pCa 4 . 5 solution ., The best-fitting force response obtained in this manner is shown in red in the top panel in Fig 3 ., The corresponding model parameters are listed in Table 2 ( Column 3 ) ., The blue lines in the top panel in Fig 3 show the force responses produced by a single half-sarcomere framework with the same model parameters ., The simulated force records for the single and multi-half-sarcomere frameworks are the same for the pCa 9 . 0 condition ( where there are no attached cross-bridges ) but different for the pCa 4 . 
5 condition ., Note in particular that the multi-half-sarcomere framework predicts a smaller short-range force response and a tension that rises more steeply during the latter stages of the stretch ., This progressively increasing tension is not a property of a single activated half-sarcomere in these simulations and therefore reflects interactions that occur between half-sarcomeres; it is an emergent property of the multi-half-sarcomere framework ., The red lines in the bottom panel in Fig 3 show the length traces for the 300 half-sarcomeres in the larger framework superposed ., ( The traces are shown in more detail in Supporting Information Figure S1 . ), Although individual half-sarcomeres followed length trajectories defined by Eq 2 the behavior of the overall system is chaotic ., During the stretch , for example , some half-sarcomeres are lengthening , some are shortening , and some remain nearly isometric ., The behavior of each pair of half-sarcomeres on the other hand is more orderly ., Indeed , at any given time-point in the simulation , all the full sarcomeres had virtually the same length ., This is because the inter-myofibrillar links ( Fig, 2 ) were sufficiently stiff to keep the Z-disks in register during the activation ., The effect is demonstrated in Fig 4B where the computer-rendered striation patterns show that the Z-disks ( drawn in magenta ) are always aligned whereas the M-lines ( drawn in yellow ) are frequently displaced from the middle of the sarcomere ., Z-disk alignment is no longer maintained in the simulations if the inter-myofibrillar links are ablated in silico by setting kim equal to zero ( Fig 4C ) ., In this situation , mean sarcomere length averaged perpendicular to the filaments for the different half-sarcomere pairs ( green lines in Fig 4A ) is no longer constant although mean sarcomere length averaged parallel to the filaments is always the same in the different myofibrils ., ( This has to be the case because all the myofibrils 
have the same length and contain the same number of sarcomeres . ), A movie showing how the computer-generated striation patterns change during the length perturbations is provided as Supporting Information Video S1 ., Interestingly , the predicted isometric force value is lower for the simulations with kim equal to zero ., The area under an xy-plot of force against length during the stretch ( not shown ) is also lower indicating that the framework simulated without inter-myofibrillar links would absorb less energy during an eccentric contraction ., This mimics experimental results obtained by Sam et al . 13 using muscles from desmin-null mice ., Fig 5 shows the effects of changing the size of the model framework and the numerical value of a key model parameter ., All simulations were performed with the parameters listed in the third columns of Tables 1 and 2 except for Fig 5C where \u03b1 ( Eq 7 ) was varied as shown ., Increasing nhs ( the number of half-sarcomeres in each myofibril ) from 1 to 10 in a framework with 6 myofibrils markedly improved the fit to the experimental record ., The additional improvement gained by further increasing nhs to 50 was more modest ., When there were already 50 half-sarcomeres in each myofibril , increasing the number of myofibrils did not dramatically improve the fit during the stretch response ( Fig 5B ) but it did help to stabilize isometric force before the stretch ., This is at least partly because the presence of inter-myofibrillar links stabilized sarcomere ( but not half-sarcomere ) lengths ( Fig 4B and C ) ., The effects of varying \u03b1 to alter the amount of half-sarcomere heterogeneity in the largest framework are summarized in Fig 5C ., Note that increasing \u03b1 beyond 0 . 
1 did not substantially change the fit to the experimental data and that the simulated response for the framework with 300 half-sarcomeres and \u03b1 equal to zero was not different from that of the single half-sarcomere framework with the same model parameters ., This second point demonstrates that a fiber system does not exhibit emergent properties if the half-sarcomeres of which it is composed are all identical ., This informal sensitivity analysis suggests that the activation dependence of the latter stages of the stretch response is more likely to reflect inhomogeneity between half-sarcomeres along a myofibril than inhomogeneity between different myofibrils ., This prediction is based on the computed results shown in Fig 5A and B . Increasing the number of half-sarcomeres from 1 to 50 in a framework with 6 myofibrils markedly changed the slope of the force response during the second half of the stretch ( Fig 5A ) ., In contrast , increasing the number of myofibrils in a framework with 50 half-sarcomeres ( Fig 5B ) reduced the magnitude of oscillations in the computed force records but did not substantially alter the underlying trend of the responses ., The values of the parameters defining Fpas ( Table 1 , Column, 3 ) were determined by fitting Eq 8 to force records measured for a fiber in pCa 9 . 0 solution during small dynamic stretches ( 4% muscle length ) imposed from a starting sarcomere length of \u223c2600 nm ., It is therefore possible that the calculated parameters overestimate the isometric passive tension that would have been measured if the half-sarcomeres were stretched more than 4% ., ( The passive length tension relationship was not measured in the original experiments 7 so the relevant experimental data were not available for comparison . 
), To eliminate any possibility that the tension response during the latter stages of an imposed stretch is only activation-dependent in the current simulations because the titin filaments are unrealistically stiff at long lengths , additional calculations were performed with a linear passive component ., The parameters defining Fpas in this case ( Table 1 , Column, 4 ) were determined by fitting Eq 9 to the same pCa 9 . 0 force record ., Passive force calculated in this way did not reach the maximal Ca2+-activated value until the sarcomeres were stretched beyond 3500 nm ., The best-fitting force simulations deduced by multi-dimensional optimization with the linear titin component are shown in red in Fig 6A ., While the simulation of the active fiber does not match the experimental data as well as the simulations ( Fig, 3 ) performed with the non-linear titin component ( r2\\u200a=\\u200a0 . 93 as opposed to r2\\u200a=\\u200a0 . 98 ) it does reproduce the activation-dependence of the slope of the force response during the latter stages of the stretch ., Rat soleus fibers exhibit a stretch response that is qualitatively different from that produced by rabbit psoas fibers 14 ., Instead of force rising during the latter stages of the movement , force tends to peak and then fall slightly to a plateau that is maintained as long as the stretch persists ., ( A similar plateau is observed when frog tibialis anterior fibers are stretched 15 ) ., Although the shape of the response seems to imply that passive titin properties are less important in rat soleus fibers than they are in rabbit psoas fibers , Campbell & Moss 14 showed that a single half-sarcomere model produced the best-fit to the real Ca2+-activated data when the cross-bridges were arranged in parallel with a titin spring that was \u223c3 times stiffer than that measured experimentally in pCa 9 . 
0 solution ., The behavior of the soleus fibers was thus very similar to that described here for psoas preparations ., This suggests that simulations performed with a multi-half-sarcomere framework might also produce a better fit to the mechanical data from soleus fibers than a model based on a single half-sarcomere ., Fig 6B shows the results of calculations performed to test this hypothesis ., Parameter values for the simulations are listed in Tables 1 and 2 ( Column 5 in both cases ) ., The predictions for the multi-half-sarcomere framework fit the experimental data well ( r2\\u200a=\\u200a0 . 97 ) and , as in the case of the simulations of psoas fiber data , predict a lower isometric force and a less prominent short-range response than the simulations performed with a single half-sarcomere framework and otherwise identical model parameters ., This work provides important new insights and introduces novel simulation techniques but the idea that the mechanical properties of a muscle fiber might be influenced by individual half-sarcomeres behaving in different ways is not new 15 , 20\u201322 ., One of the controversies in the field is whether sarcomeres \u2018pop\u2019 , that is , extend rapidly to beyond filament overlap 12 ., This behavior can be predicted from an analysis of the steady-state active and passive length tension relationships but it has not been observed in some experiments that have specifically investigated the issue in small myofibrillar preparations 23 , 24 ., Other data 25 suggest that some sarcomeres in a sub-maximally activated myofibril \u2018yield\u2019 and others \u2018resist\u2019 during a stretch ., The present simulations suggest that there are at least two mechanisms that may reduce the likelihood of ( but perhaps not entirely eliminate ) popping under normal physiological conditions ., First , attached cross-bridges in half-sarcomeres that are starting to elongate will be stretched thereby producing increased force ., If the total 
length of the muscle fiber is fixed , other half-sarcomeres in the same myofibril will have to shorten and force will therefore drop in these structures ., The changes in the forces produced by cross-bridges in the half-sarcomeres that moved are transient because they will dissipate as the myosin heads progress through their normal cycle ., However , while they exist , they act in such a way as to reduce the development of additional heterogeneity ., In vivo , this effect could be enough to prevent the cell from being structurally damaged before it relaxes at the end of the contraction and passive mechanical properties are able to restore the fiber's prior arrangement ., Second , forces in molecules that link half-sarcomeres will help to preserve sarcomere length uniformity ., In the current simulations , some of these molecules are represented mathematically by linear springs that connect Z-disks in adjacent myofibrils ., It was particularly interesting to discover that the in silico \u2018knock-out\u2019 of inter-myofibrillar connections ( kim\\u200a=\\u200a0 , Fig 4 ) reproduced the functional effects observed in muscles from desmin-null mice - lower isometric force and decreased energy absorption during imposed stretches 13 ., One of the many interesting features of the second phase of the stretch response of activated muscle fibers is that it can be quite variable ., Fig 6 , for example , shows that it is markedly different in fast and slow mammalian fibers under very similar experimental conditions ., Getz et al . 11 noted that differences can also be observed within fast fibers from rabbit psoas muscle ., Their manuscript notes that the \u201ccontinued force rise after the critical stretch was sometimes but not always present in our data\u201d ., ( It is important to note that the stretches used by Getz et al . were up to 25 times faster than the ones simulated in the present work .
A slow rise in force during the latter stages of the stretch was always observed in the experiments with psoas fibers that are simulated here 7 . ), Getz et al . suggested that the variable nature of their measured responses might reflect different amounts of half-sarcomere heterogeneity in their preparations ., Their conclusion is supported by the present simulations ., Half-sarcomere heterogeneity has also been suggested as a potential explanation for residual force enhancement - the augmented force that persists long after a stretch and hold imposed during a maximal contraction 26 ., The current simulations support this hypothesis too because Edman & Tsuchiya 10 showed that the size of the enhancement correlates with the magnitude of the second phase of the force response in the stretch that produces it ., However , half-sarcomere heterogeneity may not be the only mechanism responsible for residual force enhancement because Edman & Tsuchiya 10 also showed that there could be a small residual enhancement when the conditioning stretch did not produce a measurable second phase force response ., Precise measurements of the mechanical properties of single muscle fibers are often performed using a technique known as sarcomere length control 27 , 28 ., This is an important experimental approach but it should be made clear that the technique does not eliminate the potential emergence of new properties due to the collective behavior of half-sarcomeres ., This is because sarcomere length control dictates the mean sarcomere length in a selected region of the muscle fiber rather than the lengths of the individual half-sarcomeres ., It is thus the in vitro equivalent of the computer simulations discussed in this work in which xT , the total length of a defined group of half-sarcomeres , is the controlled variable ., Many biologists probably regard it as axiomatic that the properties of a muscle fiber vary along its length ., After all , organelles , such as nuclei and
mitochondria , are localized structures that are not uniformly \u2018smeared\u2019 throughout the cell ., There are , of course , other sorts of non-uniformity in muscle cells as well ., There is good evidence to suggest , for example , that eye muscle fibers express different myosin isoforms along their length 29 and that sarcomeres near the end of a fiber are shorter than those near the middle 30 ., Many quantitative models of muscle , on the other hand , overlook variability within muscle fibers and attribute the mechanical properties of an experimental preparation to the scaled behavior of a single population of cycling cross-bridges that is sometimes arranged in parallel with a passive mechanical component ., These reductionist theories have been outstandingly successful at explaining the behavior observed in some specific experiments 31 but the simulations presented in this work suggest that more realistic multi-scale modeling may be required to fully reproduce the behavior of whole muscle fibers ., Multi-scale modeling may be particularly helpful in studies of muscle disease ., It is well known , for example , that muscle function is compromised in muscular dystrophy where the primary defect occurs in a large structural protein 32 ., Defects in such proteins will affect the way that forces are transmitted between and around myofibrils which , as shown in Fig 4 , may significantly alter a muscle's mechanical behavior ., This concept is also supported by experimental data ., Shimamoto et al . 25 recently showed , for example , that modifying Z-disk structure with antibodies can influence the emergent properties of a myofibrillar preparation by altering the way that half-sarcomeres interact ., Finally , the simulations shown in Fig 5C demonstrate that the relatively small amount of half-sarcomere heterogeneity produced by increasing \u03b1 from 0 . 0 to 0 .
1 dramatically alters the mechanical properties of the muscle framework ., Further increases in \u03b1 produce more half-sarcomere heterogeneity but do not substantially alter the predicted force response ., This is a very interesting finding because it implies that the mechanical properties of a muscle that was originally perfectly uniform would change markedly if localized structural and\/or proteomic abnormalities developed as a result of a disease process and\/or unusual mechanical stress ., The mechanical properties of a muscle cell that was already slightly heterogeneous , on the other hand , would not be substantially altered by additional irregularities ., This could be a significant advantage for a living cell that is continually repairing itself and which is potentially subject to damaging stimuli and large external forces ., Muscle cells may have evolved to become fault-tolerant systems ., The mathematical modeling presented in this work suggests that muscle fibers may exhibit emergent mechanical properties that reflect interactions between half-sarcomeres ., If this is indeed the case , systems-level approaches will be required to explain how known proteomic and structural heterogeneities influence function in normal and diseased tissue ., Animal use was approved by the University of Wisconsin-Madison Institutional Animal Care and Use Committee ., All of the experimental records shown in this work were collected by the author in Dr .
Richard Moss's laboratory at the University of Wisconsin-Madison ., Full details of the experimental procedures and some of the records have already been published 7 , 14 ., The structural framework studied in this work ( Fig 2 ) consisted of nm parallel chains of myofibrils , each of which was itself composed of nhs half-sarcomeres arranged in series ., Every second Z-line was linked to the corresponding Z-line in each of the other myofibrils by a linear elastic spring of stiffness kim ., These connections simulated the mechanical effects of proteins such as desmin that connect myofibrils at Z-disks 33 ., The force within each half-sarcomere ( Fhs ) was the sum of Fpas , a \u2018passive\u2019 elastic force due to the mechanical elongation of structural molecules such as titin , and Fact , an \u2018active\u2019 force produced by ATP-dependent cross-bridge cycling 34 , 35 ., ( 1 ) Fpas was a single-valued function of the length ( xhs ) of each half-sarcomere ., Fact was more complicated and depended on the half-sarcomere's preceding motion ., Both force components are described in more detail below ., The mechanical behavior of the multi-half-sarcomere framework was simulated by assuming that ( 1 ) the force in a given myofibril was the same at every point along its length and ( 2 ) the sum of the lengths of the half-sarcomeres in each myofibril was equal to the total muscle length ., These assumptions lead to a set of functions ( 2 ) where Fhs , i , j and xhs , i , j respectively describe the force developed by and the length of half-sarcomere i in myofibril j , Fm , j is the force in myofibril j and xT is the total length of the framework ( Fig 2 ) ., These functions can be solved using a root-finding method ( see Numerical Methods section below ) to yield the lengths of each half-sarcomere and thus the mechanical state of the framework ., Fact values for each half-sarcomere in
the framework were calculated using techniques previously described for a single half-sarcomere model by Campbell & Moss 7 ., Myosin heads were assumed to cycle through the 3-state kinetic scheme shown in Fig 7 ., The proportion p ( xhs ) of cross-bridges participating in the kinetic scheme in each half-sarcomere was set to zero for all xhs during simulations of passive muscle ( pCa 9 . 0 conditions ) ., In simulations of active muscle ( pCa 4 . 5 conditions ) , p ( xhs ) was assumed to scale with the number of myosin heads overlapping the thin filament ( Fig 8A ) so that ( 3 ) where xoverlap is lthin+lthick\u2212xhs , xmaxoverlap is lthick\u2212lbare , and lthin , lthick , and lbare are the lengths of the thin filaments ( 1120 nm ) , thick filaments ( 815 nm ) and thick filament bare zone ( 80 nm ) respectively and \u03bbfalloff is a model parameter arbitrarily set to 0 . 005 nm\u22121 ., The rate constants defining the probability of a cross-bridge moving to a different biochemical state depended on the length x of the cross-bridge link and twelve model parameters ( Table 2 ) that were determined by fitting the simulated force values to representative data records using multidimensional optimization techniques ( see below ) ., The spring constant kcb for an individual cross-bridge link was defined as 0 . 0016 N m\u22121 in close agreement with recent experimental estimates for this parameter 36 , 37 ., Energies for the cross-bridge states ( Fig 8B ) were defined as ( 4 ) where x is the length of the cross-bridge link , xps is the length of the force-generating power-stroke and A1 , base and A2 , base define the minimum energy of cross-bridge links bound in the A1 and A2 states respectively ., The energy difference between the ED and ED\u2032 states ( Fig 8B ) was 25 kBT where kB is Boltzmann's constant ( 1 . 381\u00d710\u221223 J K\u22121 ) and T was 288 K .
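The energy landscape described above can be sketched in a few lines of Python. This is a schematic under stated assumptions, not the paper's Eq 4: a harmonic spring of stiffness kcb is assumed for the bound states, the A2 minimum is assumed to sit one power-stroke distance away from the A1 minimum, and the numeric values of A1_BASE, A2_BASE and X_PS are illustrative placeholders rather than the fitted model parameters.

```python
KB = 1.381e-23   # Boltzmann's constant, J/K (as in the text)
T = 288.0        # simulation temperature, K (matches the 15 degC experiments)
KCB = 0.0016     # cross-bridge link stiffness, N/m (as in the text)

# Placeholder minimum energies in units of kB*T; the fitted values are
# model parameters in the original work and are not reproduced here.
A1_BASE = -5.0
A2_BASE = -12.0
X_PS = 5e-9      # assumed power-stroke distance, m (illustrative)

def energy_A1(x):
    """Energy (in kB*T) of a link bound in the pre-power-stroke A1 state,
    assuming a harmonic spring stretched by x metres."""
    return A1_BASE + 0.5 * KCB * x ** 2 / (KB * T)

def energy_A2(x):
    """Energy of a post-power-stroke A2 link; the spring is assumed to be
    stretched by x + X_PS, so the minimum sits at x = -X_PS."""
    return A2_BASE + 0.5 * KCB * (x + X_PS) ** 2 / (KB * T)
```

With these placeholder values a 5 nm extension costs roughly 5 kBT, which is the right order of magnitude for a single cross-bridge.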
( The original experiments were performed at 15\u00b0C 7 , 37 ) ., Strain-dependent rate functions f12 ( x ) , f23 ( x ) and f31 ( x ) for the forward transitions ( Fig 7 ) were defined as ( 5 ) Reverse rate functions g21 ( x ) , g32 ( x ) and g13 ( x ) were defined in terms of the forward rate functions and the energy difference between the relevant states 38 as ( 6 ) Panels B , C and D in Fig 8 show the strain-dependence of the free energy diagram for the cross-bridge scheme , the forward rate functions and the reverse rate functions used in the simulations shown in Fig 3 ., The numerical values of the relevant parameters are listed in the third column in Table 2 ., The number of myosin heads per unit cross-sectional area in a single half-sarcomere framework was always N0 ( defined in this work as 1 . 15\u00d71017 m\u22122 36 ) ., Half-sarcomere heterogeneity was incorporated into the simulations of multiple half-sarcomere frameworks by assuming that the number of myosin heads per half-sarcomere was a normally distributed variable ., Thus the actual number ( Ni ) of myosin heads participating in the cross-bridge cycle in half-sarcomere i at half-sarcomere length xhs was equal to ( 7 ) where Gi ( \u03b1 ) was a variable randomly selected from a Gaussian distribution with mean of unity and a variance of \u03b1 ., The passive force Fpas increased in a non-linear manner as ( 8 ) where \u03c3 , xoffset and L were determined by curve-fitting 7 , 14 , with the exception of one set of simulations ., Fig 6A shows force records simulated with a passive force that increased linearly with half-sarcomere length as ( 9 ) where kpas defines the stiffness of the passive elastic spring and xslack is the half-sarcomere length at which the spring falls slack ., Filament compliance effects 39 , 40 were incorporated by assuming that if a half-sarcomere changed length by \u0394xhs in a given time-step each cross-bridge link in the half-sarcomere changed length by \u00bd\u0394xhs 11 ., 
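Two of the rules just described can be sketched in Python. This is a minimal illustration, not the author's C++ code: `p_overlap` stands for the overlap-dependent proportion p(xhs) from Eq 3, and Eq 7 is assumed to multiply the baseline head count by the Gaussian factor Gi(\u03b1).

```python
import random

N0 = 1.15e17  # myosin heads per unit cross-sectional area, m^-2 (as in the text)

def heads_in_half_sarcomere(p_overlap, alpha, rng=random):
    """Eq 7 sketch: scale the participating head count by a Gaussian
    factor G_i with mean 1 and variance alpha.  alpha = 0 reproduces a
    framework of identical half-sarcomeres."""
    g_i = rng.gauss(1.0, alpha ** 0.5)  # std dev = sqrt(variance)
    return p_overlap * N0 * g_i

def apply_filament_compliance(link_lengths, delta_x_hs):
    """Compliance rule sketch: when a half-sarcomere changes length by
    delta_x_hs in a time-step, each cross-bridge link in it is assumed
    to change length by half that amount."""
    shift = 0.5 * delta_x_hs
    return [x + shift for x in link_lengths]
```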
This over-simplifies the realignment of actin binding sites and myosin heads that occurs in real muscle fibers but the finite availability of computing power means that it is not yet practical to implement more realistic simulations of filament compliance effects 41\u201344 with a framework containing 300 half-sarcomeres ., The mathematical model was implemented as a multi-threaded console application ( Visual Studio 2005 , Microsoft , Redmond , WA ) written in C++ ., Equation 2 was solved using the newt ( ) function described by Press et al . 45 which invokes Newton's method to solve non-linear sets of functions ., \u0394x for cross-bridge populations 7 was set to 0 . 5 nm ., The time-step was set to 1 ms . Reducing these parameters by 50% did not materially change the results of the calculations ., Calculated rate constants ( Eqs 5 and 6 ) were constrained to a maximum value of 500 s\u22121 ., Rate constants were set to zero if the calculated value was less than 0 . 01 s\u22121 ., This simplified the numerical procedures used to solve the evolution of the cross-bridge populations ., Randomly-distributed double-precision numbers were generated using the Mersenne Twister Algorithm 46 ., Post-processing of simulation output files and subsequent figure development was performed using custom-written MATLAB ( The Mathworks , Natick , MA ) software ., Particle swarm optimization routines 47 were used to fit the force traces predicted by the simulations to selected experimental records ., This was done by searching for the lowest attainable value of an error function defined as ( 10 ) where Fexpt , i is the experimentally-recorded force value at time-point i and Fpredict , i ( \u03a6 ) is the corresponding prediction for parameter set \u03a6 ., Solving Eq 2 for a framework with nm\\u200a=\\u200a6 and nhs\\u200a=\\u200a50 took \u223c0 . 25 s on a quad-core 2 .
5 GHz personal computer ., Each simulated force response ( of order 10\u00b3 time-steps with 1 ms resolution ) therefore required \u223c5 minutes to compute ., To reduce the wall-time required for the parameter estimation procedures , the calculations were performed using spare screen-saver processing time on \u223c30 computers running DEngine ( for Distributed computing ENGINE ) software developed by the author ( http:\/\/www . dengine . org ) ., This arrangement allowed typical optimization tasks to be completed using a particle swarm algorithm 47 in \u223c2 days ( \u223c10 times faster ","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Most reductionist theories of muscle attribute a fiber's mechanical properties to the scaled behavior of a single half-sarcomere ., Mathematical models of this type can explain many of the known mechanical properties of muscle but have to incorporate a passive mechanical component that becomes \u223c300% stiffer in activating conditions to reproduce the force response elicited by stretching a fast mammalian muscle fiber ., The available experimental data suggests that titin filaments , which are the most likely source of the passive component , become at most \u223c30% stiffer in saturating Ca2+ solutions ., The work described in this manuscript used computer modeling to test an alternative systems theory that attributes the stretch response of a mammalian fiber to the composite behavior of a collection of half-sarcomeres ., The principal finding was that the stretch response of a chemically permeabilized rabbit psoas fiber could be reproduced with a framework consisting of 300 half-sarcomeres arranged in 6 parallel myofibrils without requiring titin filaments to stiffen in activating solutions ., Ablation of inter-myofibrillar links in the computer simulations lowered isometric force values and lowered energy absorption during a stretch ., This computed behavior mimics effects previously observed in
experiments using muscles from desmin-deficient mice in which the connections between Z-disks in adjacent myofibrils are presumably compromised ., The current simulations suggest that muscle fibers exhibit emergent properties that reflect interactions between half-sarcomeres and are not properties of a single half-sarcomere in isolation ., It is therefore likely that full quantitative understanding of a fiber's mechanical properties requires detailed analysis of a complete fiber system and cannot be achieved by focusing solely on the properties of a single half-sarcomere .","summary":"Quantitative muscle biophysics has been dominated for the last 60 years by reductionist theories that try to explain the mechanical properties of an entire muscle fiber as the scaled behavior of a single half-sarcomere ( typical muscle fibers contain \u223c10\u2076 such structures ) ., This work tests the hypothesis that a fiber's mechanical properties are irreducible , meaning that the fiber exhibits more complex behavior than the half-sarcomeres do ., The key finding is that a system composed of many interacting half-sarcomeres has mechanical properties that are very different from those of a single half-sarcomere ., This conclusion is based on the results of extensive computer modeling that reproduces the mechanical behavior of a fast mammalian muscle fiber during an imposed stretch without requiring that titin filaments become more than 3-fold stiffer in an activated muscle ., This work is significant because it shows that it is probably not sufficient to attribute functional properties of whole muscle fibers solely to the behavior of a single half-sarcomere ., Systems-level approaches are therefore likely to be required to explain how known structural and biochemical heterogeneities influence function in normal and diseased muscle tissue .","keywords":"mathematics, physiology\/muscle and connective tissue, biophysics\/theory and simulation, physiology\/motor systems, computational
biology\/systems biology","toc":null} +{"Unnamed: 0":525,"id":"journal.pcbi.1003123","year":2013,"title":"Predicting Network Activity from High Throughput Metabolomics","sections":"Knowledge of many metabolic pathways has accumulated over the past century ., For instance , glycolysis , citric acid cycle and oxidative phosphorylation fuel cellular processes through the generation of adenosine triphosphate; glycans and cholesterols not only serve as structural blocks but also mediate intercellular communication ., In fact , metabolites pervade every aspect of life 1 , 2 ., Their roles are increasingly appreciated , as advancing tools allow deeper scientific investigations ., The most notable progress in recent years has come from metabolomics and genome-scale metabolic models ., Metabolomics is the emerging field of comprehensive profiling of metabolites ., As metabolites are the direct readout of functional activity , metabolomics fills in a critical gap in the realm of systems biology , complementing genomics and proteomics 3\u20136 ., The technical platforms of metabolomics are mainly based on mass spectrometry and nuclear magnetic resonance 4 , 7 ., Among them , untargeted LC\/MS ( liquid chromatography coupled mass spectrometry ) , especially on high resolution spectrometers , produces unparalleled throughput , measuring thousands of metabolite features simultaneously 5 , 8\u201310 ., On the other hand , genome-scale metabolic models have been largely driven by genomics , as the total list of metabolic enzymes of a species can be derived from its genome sequence 11 , 12 ., The reconstruction of microbial metabolic network models is an established process 13 , 14 ., Intense manual curation , however , was required in the building of two high-quality human models 15 , 16 , which were followed by a number of derivatives 17\u201320 ., The coverage of these metabolic models greatly exceeds the conventional pathways ., Even though they are a perfect match in theory ,
metabolomics and genome-scale metabolic models have had little overlap so far ., The use of metabolomics in building metabolic models is rare 21 , due to the scarcity of well annotated metabolomics data ., The application of genome-scale metabolic models to metabolomics data is not common either 22 ., The limited throughput of targeted metabolomics usually does not motivate large scale network analysis ., Untargeted metabolomics cannot move on to pathway and network analysis without knowing the identity of metabolites ., A typical work flow of untargeted metabolomics is illustrated in Figure 1A ., After ionized molecules are scanned in the spectrometer , the spectral peaks are extracted , quantified and aligned into a feature table ., At this point , each feature is identified by a mass-to-charge ratio ( m\/z ) and retention time in chromatography , but its chemical identity is not known ., Assigning a spectral feature to a bona fide metabolite usually involves tandem mass spectrometry to examine the fragmentation pattern of a specific feature , or coelution of isotopically labeled known references - both are inherently low throughput ., Considerable effort is needed to build a spectral library , which is often of limited size and interoperability ., Thus , metabolite identification forms the bottleneck of untargeted metabolomics 23 ., A number of informatics tools have been developed for LC\/MS metabolomics , ranging from feature extraction 24\u201326 , pathway analysis and visualization 27\u201329 to work flow automation 30\u201332 ., Yet , where pathway and network analysis is concerned , the existing tools require identified metabolites to start with ., Computational prediction of metabolite identity , based on m\/z alone , is deemed inadequate as a single m\/z feature can match multiple metabolites even with high instrumental accuracy 33 , 34 , and multiple forms of the same metabolite often exist in the mass spectra 35 ., Although automated MS\/MS ( 
tandem mass spectrometry ) search in databases is improving the efficiency of metabolite identification 36 , 37 , this requires additional targeted experiments and relies on extensive databases , where data from different platforms often do not match ., How to bring untargeted metabolomics data to biological interpretation remains a great challenge ., In this paper , we report a novel approach of predicting network activity from untargeted metabolomics without upfront identification of metabolites , thus greatly accelerating the work flow ., This is possible because the collective power in metabolic networks helps resolve the ambiguity in metabolite prediction ., We will describe the computational algorithms , and demonstrate their application to the activation of innate immune cells ., The genome-scale human metabolic network in mummichog is based on KEGG 38 , UCSD Recon1 15 and Edinburgh human metabolic network 16 ., The integration process was described previously 39 ., The organization of metabolic networks has been described as hierarchical and modular 40 ., When we perform a hierarchical clustering on the metabolic reactions in our network , its modular structure is clear ( Figure 2A ) ., This modular organization , as reported previously 41 , often but not always correlates with conventional pathways ( Figure 2B ) ., The module definition in this work is adopted from Newman and Girvan 42 , 43 , where a module is a subnetwork that shows more internal connections than expected randomly in the whole network ., Modules are less biased than pathways , which are defined by human knowledge and conventions , and outgrown by genome-scale metabolic networks ., Activity of modules may exist within and in between pathways ., Deo et al 22 convincingly demonstrated the advantage of unbiased module analysis over pathways ., On the other hand , pathways have built-in human knowledge , which may be more sensitive under certain scenarios ., Pathway analysis and module 
analysis are rather complementary , and both are included in mummichog ., The reference metabolic network model contains both metabolites and enzymes ., Since metabolomics only measures metabolites , the model is converted to a metabolite-centric network for statistical analysis ., Enzymes are only added later in the visualization step to aid user interpretation ., Within the predefined reference metabolic network model , mummichog searches for all modules that can be built on user input data , and computes their activity scores ., This process is repeated many times for the permutation data to estimate the background null distribution ., Finally , the statistical significance of modules based on user data is calculated on the null distribution ., The specific steps are as follows: The basic test for pathway enrichment here is Fisher's exact test ( FET ) , which is widely used in transcriptomic analysis ., The concept of FET is that , when a number of features are selected from the total feature list and some of them are found on a pathway of a given size , the probability of that overlap arising by chance can be estimated by enumerating the possible combinations ., To apply FET to an enrichment test of metabolic features on pathways , we need to understand the additional layer of complexity ., Our metabolic features can be enumerated either in the m\/z feature space or in the metabolite ( true compound ) space ., Since metabolic pathways are defined in the metabolite space , either way needs to factor in the many-to-many mapping between m\/z features and metabolites ( Figure S1 ) ., This mapping is effectively covered in our permutation procedure , which starts from the m\/z features and reruns the mapping every time ., The overall significance of a pathway enrichment is estimated based on a method by Berriz et al 44 , which ranks the p-value from real data among the p-values from permutation data to adjust for type I error ., Finally , a more conservative version of FET , EASE , is adopted to
increase the robustness 45 ., The key idea of EASE is to take out one hit from each pathway , thus preferentially penalizing pathways with fewer hits ., The specific steps are as follows: Both the module analysis and pathway analysis above serve as a framework to estimate the significance of functional activities ., In return , the predicted metabolites in significant activities are more likely to be real ., Mummichog collects these metabolites , and looks up all their isotopic derivatives and adducts ., A confidence rating system is applied to filter for qualified metabolites ., For instance , if both the singly charged form M+H1+ and the form M ( C13 ) +H1+ are present , this metabolite prediction carries a high confidence ., All the qualified metabolites carry over their connections in the reference metabolic network , and form the \u201cactivity network\u201d for this specific experiment ( e . g . Figure 3 ) ., The activity network is geared towards a clear , high-quality view of user data , as modules and pathways can be redundant and fragmented ., It also accentuates the activity in a specific experimental context , in contrast to the generic nature of the reference metabolic network ., We next illustrate the application of these algorithms to a novel set of immune cell activation data , and two published data sets on human urinary samples and yeast mutants ., Innate immunity plays a critical role in regulating adaptive immunity , and the field was recognized by the 2011 Nobel Prize in Physiology or Medicine 46 ., According to the nature of stimuli , innate immune cells direct different downstream molecular programs , which are still under intense scientific investigation 47 , 48 ., In this study , we examine the metabolome of human monocyte-derived dendritic cells ( moDC ) under the stimulation of yellow fever virus ( YF17D , a vaccine strain ) ., We have shown previously that yellow fever virus activates multiple toll-like receptors , and induces cellular
stress responses 49\u201351 ., This data set is , to our knowledge , the first high throughput metabolomics study on any immune cells ( macrophages were previously studied with limited-throughput methods ) ., The cell extracts from our activation experiment were analyzed by LC\/MS metabolomics , and yielded 7 , 995 spectral features after quality control ., Among them , 601 features were significantly different between the infected samples and both the baseline and time-matched mock controls ( Student's t-test ) ., Using the full and the significant feature lists , mummichog computes significant pathways and modules and the activity network ., Viral infection induced a massive shift of metabolic programs in moDCs ( pathways in Table S1 , modules in Figure S2 ) ., The predicted activity network is shown in Figure 3A , and we will focus our investigation on a small subnetwork ( Figure 3B ) , which includes the metabolism of nucleotides , glutathione\/glutathione disulfide and arginine\/citrulline ., Nucleotides are required for viral replication , and the hijacking of host nucleotide metabolism by virus has been well described 52\u201354 ., Glutathione is best known as an intracellular antioxidant , where it is oxidized to glutathione disulfide ( GSSG ) ., However , our data show that both glutathione and GSSG are depleted in activated moDCs , departing from this conventional wisdom ., The across-the-board depletion is consistent with the down-regulation of genes for glutathione synthesis ( Figure 4B ) ., Our data support the notion that glutathione is released by dendritic cells and conditions the extracellular microenvironment during their interaction with T cells 55\u201357 ., Arginine is known to be an important regulator in innate immune response 58 , 59 ., Arginine metabolism can lead to two pathways: to ornithine ( catalyzed by arginase ) or to citrulline ( catalyzed by nitric oxide synthase ) ., The decrease of arginine and increase of citrulline suggest the latter pathway , which is the
main reaction of producing intracellular nitric oxide ., We indeed detected the inhibition of eNOS and iNOS expression later ( Figure 4C ) , a well-documented feedback effect of nitric oxide 60 ., We also performed tandem mass spectrometry on the metabolites in Figure 3B , using authentic chemicals as references ., All the metabolites , except glutamylcysteine and thyroxine , were confirmed ( Figure 5 , Figure S3 ) ., The depletion of arginine and accumulation of citrulline in moDC was also triggered by dengue virus but not by lipopolysaccharide ( LPS , Figure S4 ) ., It is known that iNOS is induced in dendritic cells by LPS but not by virus 47 , 61 ., Our data suggest a different nitric oxide pathway in viral infection , driven by constitutive nitric oxide synthases ., Intracellular nitric oxide has a fast turnover and we did not detect its accumulation by fluorometric assays ( data not shown ) ., We previously demonstrated that the phosphorylation of EIF2A was induced by YF17D 50 ., An upstream mechanism is now suggested by this metabolomic experiment , as both the production of nitric oxide and the depletion of arginine induce the activity of EIF2A kinases 62 ., The nature of metabolomics data often varies by platform and sample type ., We thus extended our mummichog approach to two published data sets on human urinary samples 63 and on yeast cell extracts 64 ., Both data sets carry metabolite annotation by the original authors , which can be used to evaluate the prediction by mummichog ., The human urinary data contained both formal identification by matching to a local library of chemical references and putative identification by combining multiple public resources 63 ., We used mummichog to investigate the gender difference in this data set , and predicted an activity network of 45 metabolites ., Among them , 13 were not found in the original annotation ., For the remaining metabolites , 97% ( 31\/32 ) agreed between mummichog and the original annotation
( Figure 6 ) ., There is an option in mummichog to enforce the presence of the M+H+ form ( for positive mode , M\u2212H\u2212 for negative mode ) in metabolite prediction ., With this option , 3 out of 44 metabolites were not in the original annotation , and the remaining 41 metabolites were in 100% agreement ., The mummichog algorithms are not tied to a specific metabolic model ., We adopted the yeast metabolic model from the BioCyc database 11 for the yeast data 64 , to predict an activity network contrasting mutant and wild-type strains ., This data set was only annotated for 101 metabolites through the authors' local library ., As a result , the majority of metabolites in the predicted network by mummichog were not found in the original annotation ., Of the remaining 28 metabolites , 24 ( 86% ) agreed between mummichog and the original annotation ( Figure 6 ) ., Enforcing the presence of the primary ion M\u2212H\u2212 ( data collected in negative ion mode ) had little effect on the result , since the original annotation was already biased toward metabolites that ionize easily ., These results show that the prediction by mummichog is robust across platforms and sample types ., Critical to the success of mummichog is the integration of genome-scale metabolic models ., In this study , we used a recent human metabolic model ., An alternative human model from BioCyc 11 produced comparable results ( Figure S6 ) ., The coverage of the models in all three case studies is shown in Table 1 ., These genome-scale metabolic models are more extensive than conventional pathways , and were shown to capture activities in between pathways 22 ., The pathway organizations differ between the two human models , as the BioCyc model tends to use smaller pathways ., This creates some model dependency in the pathway analysis , but has little effect on the \u201cactivity network\u201d , as mummichog is more network-centric ., The two test cases in Figure 6 also indicate that these models tend
to capture more information than conventional annotations ., However , as mentioned earlier , the new data from metabolomic studies are yet to be integrated into these genome-scale metabolic models ., For example , a number of metabolites in metabolomics databases 36 , 65 , 66 are not in any of these metabolic models ., In general , the features from a high-resolution profiling experiment by far exceed the current annotations in metabolite databases ., This leads to a de facto filtering when data are run on mummichog ( a similar situation arises in database searches ) ., Meanwhile , the features that can be mapped to the current metabolic model are more likely to be biologically relevant ., This \u201cfiltering\u201d is pertinent to the metabolic model , not to the mummichog algorithms - mummichog still has to choose the true metabolites from multiple possible candidates ( Figure S1B ) ., It will be an important future direction to advance metabolic modeling with the chemical data ., We also expect the metabolic models to improve on lipid annotation , physiological context and tissue specificity ., As learned from transcriptomics , pathway and network analysis not only provides functional context , but also the robustness to counteract noise at the individual feature level , which is commonly seen in omics experiments ., Similarly , the prediction of activity by mummichog is tolerant to errors at the individual feature level ., In the moDC data , we chose by a cutoff value ., When we varied this cutoff from to , the program returned networks of a stable set of metabolites ( Figure S7 ) ., The module-finding procedure in the program was designed to extensively sample subnetwork structures ., There will be many variations among the modules , but the subsequent \u201cactivity network\u201d will collapse onto stable results ., Indeed , we tested an alternative modularization algorithm 67 , and it returned almost identical predicted networks , in spite of moderately different
intermediate modules ( Figure S8 ) ., In theory , there is merit in incorporating a statistical matrix from the feature selection step prior to mummichog's analysis , and in the mass flow balance of metabolic reactions 22 , 68 ., While these are appealing directions for future research , the current version of mummichog confers some practical robustness , such as tolerance to technological noise and biological sampling limitations ., For example , mass balance is almost impossible within serum or urine samples , because the reactions producing these metabolites are likely to occur in other tissues ., The number of overlap metabolites is used in the enrichment calculation in both module analysis and pathway analysis ., Sometimes , a single m\/z feature may match several metabolites in the same module\/pathway , inflating the overlap number ., Thus , mummichog always compares the number of overlap metabolites and the number of corresponding m\/z features , and uses the smaller number for the enrichment calculation , since the smaller number is more likely to be true ., The size of each metabolic pathway is defined by the number of metabolites in the pathway ., mummichog uses only the metabolites that can be matched in to define a pathway size , because this reflects the analytical coverage of the experiment and is confined by the same coverage ., Overall , mummichog uses the whole feature list from an experiment for resampling , so the computation of statistical significance effectively circumvents analytical biases ., In spite of the fantastic progress in mass spectrometry , these are the early days of metabolomics ., Effective computational integration of resources , combining cheminformatics and bioinformatics , will greatly benefit the field 69 , 70 ., As data accumulate , further method refinement will become possible ., Mummichog presents a practical solution for one-step functional analysis , bypassing the bottleneck of upfront metabolite identification ., 
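The conservative overlap rule described above can be sketched in a few lines ( an illustrative Python sketch only , not mummichog's actual implementation; the function names and the hypergeometric formulation are our assumptions , and mummichog itself computes significance by resampling the whole feature list ):

```python
from math import comb

def conservative_overlap(overlap_metabolites, overlap_features):
    # A single m/z feature may match several metabolites in one
    # module/pathway; taking the smaller count avoids inflated enrichment.
    return min(overlap_metabolites, overlap_features)

def enrichment_pvalue(pathway_size, overlap, n_sig, n_total):
    """One-sided hypergeometric p-value: the chance of observing at least
    `overlap` significant metabolites in a pathway of `pathway_size`, given
    `n_sig` significant hits among `n_total` matched metabolites."""
    return sum(
        comb(pathway_size, k) * comb(n_total - pathway_size, n_sig - k)
        for k in range(overlap, min(pathway_size, n_sig) + 1)
    ) / comb(n_total, n_sig)
```

For example , a pathway covering 5 of 10 matched metabolites , with all 5 overlapping the significant list , gives enrichment_pvalue(5, 5, 5, 10) = 1\/252 , about 0 . 004 ., Note that the pathway size passed in should count only metabolites within the analytical coverage of the experiment , as the text specifies .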
It trades some of the sensitivity of the conventional approach for the much accelerated workflow of high-throughput LC\/MS metabolomics ., Mummichog is not designed to replace tandem mass spectrometry in metabolite identification ., It is the biological activity , not the metabolites per se , that mummichog predicts ., Even with some errors on individual metabolites , as long as the biology is pinpointed to a subnetwork structure , investigators can focus on a handful of validations , steering away from the lengthy conventional workflow ., In conclusion , we have demonstrated that mummichog can successfully predict functional activity directly from a spectral feature table ., This benefits from the convergence of genome-scale metabolic models and metabolomics ., Mummichog will continue to improve as the metabolic network models evolve ., We expect this method to greatly accelerate the application of high-throughput metabolomics ., The mummichog software is available at http:\/\/atcg.googlecode.com ., Human peripheral blood mononuclear cells ( PBMCs ) were isolated from buffy coats by separation over a Lymphoprep gradient ., CD14+ monocytes were isolated from the PBMCs with MACS beads ( Miltenyi Biotec , Auburn , CA ) and cultured for 7 days with 20 ng\/ml GM-CSF and 40 ng\/ml IL-4 ( Peprotech , Rocky Hill , NJ ) ., MoDCs were then harvested , washed twice and resuspended in serum-free medium ., MoDCs ( ) were stimulated in triplicate in 48-well plates in a 200 \u00b5L volume with Yellow Fever virus ( M . O . I . 
of 1 ) , or mock infected ., After 2 hrs , 800 \u00b5L of 10% FBS-RPMI was added to all wells ., MoDCs were harvested at 6 hr or 24 hr after infection and centrifuged ., Supernatants were aspirated , and dry cell pellets were frozen at \u221280\u00b0C ., Supernatants of moDC cultures were assayed for the concentration of IL-6 and TNF using ELISA kits ( BD , San Diego , CA ) ., Three biological replicates were used for LC\/MS and QPCR ., Full scan LC\/MS ( m\/z range 85\u20132000 ) was performed essentially as previously described 8 ., Cell extracts or supernatants were treated with acetonitrile ( 2\u22361 , v\/v ) and centrifuged at 14 , 000\u00d7 g for 5 min at 4\u00b0C to remove proteins ., Samples were maintained at 4\u00b0C in an autosampler until injection ., A Thermo Orbitrap-Velos mass spectrometer ( Thermo Fisher , San Diego , CA ) coupled with anion exchange chromatography was used for data collection , via positive-ion electrospray ionization ( ESI ) ., Metabolites of interest were identified by tandem mass spectrometry on an LTQ-FTMS , where the biological sample , the biological sample spiked with the authentic chemical , and the authentic chemical reference were run sequentially ., The and were done in the ion trap of the LTQ-FTMS , with an isolation width of 1 amu and a normalized collision energy of 35 eV ., The LC\/MS data were processed with the apLCMS program 25 for feature extraction and quantification ., Significant features were also verified by inspecting the raw data ( Figure S5 ) ., Features were removed if their intensity was below 10 , 000 in every sample class ., Missing intensity values were imputed to 1000 ., The intensities were log2 transformed ., Low-quality features were further filtered out if their averaged in-class coefficient of variation was greater than 0 . 
2 ., Averaged ion intensity over three machine replicates was used for subsequent analysis ., These 7 , 995 features constituted the reference list ., No normalization was used because total ion counts in all samples were very similar ., Student's t-test was used to compare infected samples ( at 6 hr ) versus mock infections ( at 6 hr ) , and infected samples ( at 6 hr ) versus baseline controls ( at 0 hr ) ., Features with in both tests were included in the significant list ., The feature table , , and predictions are given in Dataset S1 ., For gene expression quantification , mRNA was extracted with the RNeasy Mini Kit ( Qiagen ) according to the manufacturer's protocol , where the cell lysate was homogenized by QIAshredder spin columns ., Reverse transcription was performed with SuperScript III reverse transcriptase and oligo-dT ( Invitrogen ) according to the manufacturer's recommendations ., Real-time PCR was performed on a MyiQ Icycler ( BioRad ) , using SYBR Green SuperMix ( Quanta Biosciences ) ., The PCR protocol used 95\u00b0C for 3 min; 40 cycles of 95\u00b0C for 30 seconds and 60\u00b0C for 60 seconds ., The amplicons were verified by melting curves ., Quantification was performed by the method , normalized by microglobulin levels ., The primer sequences are given in Table S2 ., Data on human urinary samples 63 were retrieved from the MetaboLights server 71 ., The positive ion feature table for study \u201c439020\u201d contained 14 , 720 features ., A feature was only included if its ion intensity was above 100 , 000 in 5 or more samples ., This left 11 , 086 features , which consist for this study ., Data were normalized by total ion counts ., We next compared the metabolite difference between females ( 8 samples of low testosterone glucuronide level ) and males ( 11 samples of high testosterone glucuronide level ) ., consisted of 524 features ( by Student's t-test ) ., The original authors annotated 3 , 689 metabolite features , and their annotation was used to compare with the
prediction by mummichog ., The yeast data 64 were downloaded from the MAVEN website 32 in mzXML format ., Feature extraction was performed in MAVEN through two approaches: a targeted approach and an untargeted approach ., The targeted approach used a chemical library from the same lab and produced 177 annotated features , which corresponded to 101 metabolites ., The untargeted approach extracted 6318 features without annotation ., After the same processing procedure as in our moDC data , contained 5707 features ., We thus used mummichog to predict on the untargeted data , and compared the result to the MAVEN annotation ., The consisted of 426 features that were significantly different between the prototrophic wild type and the auxotrophic mutant ( by Student's t-test ) ., The yeast metabolic model was compiled from BioCyc data 11 .","headings":"Introduction, Results, Discussion, Methods","abstract":"The functional interpretation of high-throughput metabolomics by mass spectrometry is hindered by the identification of metabolites , a tedious and challenging task ., We present a set of computational algorithms which , by leveraging the collective power of metabolic pathways and networks , predict functional activity directly from spectral feature tables without a priori identification of metabolites ., The algorithms were experimentally validated on the activation of innate immune cells .","summary":"Mass spectrometry-based untargeted metabolomics can now profile several thousand metabolites simultaneously ., However , these metabolites have to be identified before any biological meaning can be drawn from the data ., Metabolite identification is a challenging and low-throughput process , and therefore becomes the bottleneck of the field ., We report here a novel approach to predict biological activity directly from mass spectrometry data without a priori identification of metabolites ., By unifying network analysis and metabolite prediction under the same computational framework , the
organization of metabolic networks and pathways helps resolve the ambiguity in metabolite prediction to a large extent ., We validated our algorithms on a set of activation experiment of innate immune cells ., The predicted activities were confirmed by both gene expression and metabolite identification ., This method shall greatly accelerate the application of high throughput metabolomics , as the tedious task of identifying hundreds of metabolites upfront can be shifted to a handful of validation experiments after our computational prediction .","keywords":"systems biology, metabolic networks, biology, computational biology","toc":null} +{"Unnamed: 0":814,"id":"journal.pcbi.1003595","year":2014,"title":"Agent-Based Modeling of Oxygen-Responsive Transcription Factors in Escherichia coli","sections":"The bacterium Escherichia coli is a widely used model organism to study bacterial adaptation to environmental change ., As an enteric bacterium , E . coli has to cope with an O2-starved niche in the host and an O2-rich environment when excreted ., In order to exploit the energetic benefits that are conferred by aerobic respiration , E . 
coli has two major terminal oxidases: cytochrome bd-I ( Cyd ) and cytochrome bo\u2032 ( Cyo ) that are encoded by the cydAB and cyoABCDE operons , respectively 1 , 2 ., Cyd has a high affinity for O2 and is induced at low O2 concentrations ( micro-aerobic conditions ) , whereas Cyo has a relatively low affinity for O2 and is predominant at high O2 concentrations ( aerobic conditions ) 3 ., These two terminal oxidases contribute differentially to energy conservation because Cyo is a proton pump , whereas Cyd is not 1 , 2; however , the very high affinity of Cyd for O2 allows the bacterium to maintain aerobic respiration at nanomolar concentrations of O2 , thereby maintaining aerobic respiratory activity rather than other , less favorable , metabolic modes 4\u20136 ., The transcription factors , ArcA and FNR , regulate cydAB and cyoABCDE expression in response to O2 supply 7 ., FNR is an iron-sulfur protein that senses O2 in the cytoplasm 8 , 9 ., In the absence of O2 the FNR iron-sulfur cluster is stable and the protein forms dimers that are competent for site-specific DNA-binding and regulation of gene expression 10 ., The FNR iron-sulfur cluster reacts with O2 in such a way that the DNA-binding dimeric form of FNR is converted into a non-DNA-binding monomeric species 10 ., Under anaerobic conditions , FNR acts as a global regulator in E . 
coli 11\u201313 , including the cydAB and cyoABCDE operons , which are repressed by FNR when the O2 supply is restricted 7 ., Under aerobic conditions , repression of cydAB and cyoABCDE is relieved and Cyd and Cyo proteins are synthesized 3 ., In contrast , ArcA responds to O2 availability indirectly via the membrane-bound sensor ArcB ., In the absence of O2 , ArcB responds to changes in the redox state of the electron transport chain and the presence of fermentation products by autophosphorylating 14\u201316 ., Phosphorylated ArcB is then able to transfer phosphate to the cytoplasmic ArcA regulator ( ArcA\u223cP ) , which then undergoes oligomerization to form a tetra-phosphorylated octamer that is capable of binding at multiple sites in the E . coli genome 17 , 18 , including those in the promoter regions of cydAB and cyoABCDE to enhance synthesis of Cyd and inhibit production of Cyo 7 , 17 ., Because the terminal oxidases ( Cyd and Cyo ) consume O2 at the cell membrane , a feedback loop is formed that links the activities of the oxidases to the regulatory activities of ArcA and FNR ( Figure 1 ) ., These features of the system - combining direct and indirect O2 sensing with ArcA\u223cP and FNR repression of cyoABCDE , and ArcA\u223cP activation and FNR repression of cydAB - result in maximal Cyd production when the O2 supply is limited ( micro-aerobic conditions ) and maximal Cyo content when O2 is abundant ( aerobic conditions ) 3 ., Although the cellular locations of the relevant genes ( cydAB and cyoABCDE ) , the regulators ( ArcBA and FNR ) and the oxidases ( Cyd and Cyo ) are likely to be fundamentally important in the regulation of this system , the potential significance of this spatial organization has not been investigated ., Therefore , a detailed agent-based model was developed to simulate the interaction between O2 molecules and the electron transport chain components , Cyd and Cyo , and the regulators , FNR and ArcBA , to shed new light on individual
events within local spatial regions that could prove to be important in regulating this core component of the E . coli respiratory process ., The dynamics of the system were investigated by running the simulation through two cycles of transitions between 0% and 217% AU ., Figure 3a shows a top view of a 3-D E . coli cell at 0% AU ( steady-state anaerobic conditions ) ., Under these conditions , the FNR molecules are present as dimers , all ArcB molecules are phosphorylated and ArcA is octameric ., The DNA binding sites for ArcA ( 120 in the model ) and FNR ( 350 in the model ) in the nucleoid are fully occupied ., The number of ArcA sites was chosen from the data reported by Liu and De Wulf 18 ., The model must include a mechanism for ArcA\u223cP to leave regulated promoters ., Upon introduction of O2 into anaerobic steady-state chemostat cultures , \u223c5 min was required to inactivate ArcA-mediated transcription 15 ., In the agent-based model presented here , each iteration represents 0 . 2 sec ., Therefore , assuming that ArcA\u223cP leaving the 120 DNA sites is a first-order process , then t\u00bd is \u223c45 sec , which is equivalent to \u223c0 . 
3% ArcA\u223cP leaving the DNA per iteration ( Table 3 ) ., The number of FNR binding sites was based on ChIP-seq and ChIP-Chip measurements , which detected \u223c220 FNR sites , and a genome sequence analysis that predicted \u223c450 FNR sites; thus a mid-range value of 350 was chosen 23\u201325 ., Interaction with O2 causes FNR to dissociate from the DNA ( Table 3 ) ., Under fully aerobic conditions ( 217% AU ) the FNR dimers are disassembled to monomers , and the different forms of ArcA coexist ( Figure 3b ) ., The ArcA- and FNR-DNA binding sites in the nucleoid are mostly unoccupied due to the lower concentrations of FNR dimers and ArcA octamers ., Examination of the system as it transits from 0% to 217% AU showed that the DNA-bound , transcriptionally active FNR was initially protected from inactivation by consumption of O2 at the cell membrane by the terminal oxidases and by reaction of O2 with the iron-sulfur clusters of FNR dimers in the bacterial cytoplasm - the progress of this simulation is shown in Video S1 ., This new insight into the buffering of the FNR response could serve a useful biological purpose by preventing premature switching off of anaerobic genes when the bacteria are exposed to low-concentration O2 pulses in the environment ., In the various niches occupied by E . coli , the bacterium can experience the full range of O2 concentrations from zero , in the anaerobic regions of a host alimentary tract , to full O2 saturation ( \u223c200 \u00b5M , equivalent to \u223c120 , 000 O2 molecules per cell ) , but fully aerobic metabolism is supported when the O2 supply exceeds 1 , 000 O2 molecules per cell ., The profiles of five repetitive simulations for each agent in the model are presented in Figure 4 . 
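The ~0.3% per-iteration figure quoted above follows directly from first-order kinetics and can be checked with a few lines of arithmetic ( a minimal sketch; the constants come from the model description , and the exponential-decay form mirrors the first-order process assumed in the text ):

```python
DT = 0.2        # seconds represented by one model iteration (from the text)
T_HALF = 45.0   # half-life (s) of ArcA~P occupancy on DNA, inferred in the text

# First-order unbinding: probability that a bound ArcA~P molecule
# leaves its DNA site during one iteration.
p_leave = 1.0 - 2.0 ** (-DT / T_HALF)
print(f"fraction leaving per iteration: {100 * p_leave:.2f}%")  # ~0.3%

# Consistency check against the ~5 min needed to inactivate
# ArcA-mediated transcription: only ~1% of sites remain bound.
iterations_5min = round(5 * 60 / DT)
remaining = (1.0 - p_leave) ** iterations_5min
print(f"bound fraction after 5 min: {100 * remaining:.1f}%")
```

The same decay constant thus reproduces both the per-iteration rate in Table 3 and the ~5 min inactivation time observed in chemostat cultures .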
From iteration 1 to 5000 and iteration 15000 to 20000 , O2 was supplied at a constant value of \u223c6 , 500 molecules per cell such that the total number of O2 molecules entering the cell increased linearly; when the O2 supply was stopped ( 5000 to 15000 and 20000 to 30000 iterations ) no more O2 entered the cell and thus the number of O2 molecules that had entered the cell remained unchanged during these periods ( Figure 4a ) ., When O2 became available to the cell ( from iteration 1 ) , the sensor ArcB was de-phosphorylated and started to de-phosphorylate ArcA ., Consequently , the number of ArcA octamers bound at their cognate sites in the nucleoid decreased rapidly ., The ArcA tetramers and dimers produced during de-phosphorylation of the ArcA octamer were transformed to inactive ( de-phosphorylated ) ArcA dimers ( Figure 4d\u2013f ) ., Under aerobic conditions ( iteration 5000 ) all the ArcA was decomposed to inactive ArcA dimers ., When the O2 supply was stopped ( from iteration 5001 ) , the number of inactive ArcA dimers decreased rapidly as shown in Figure 4f , being transformed into phosphorylated ArcA dimers , tetramers and octamers ( Figure 4c\u2013e ) ., Because phosphorylated ArcA dimers and tetramers combine to form ArcA octamers , their numbers dropped after an initial increase ., The rate at which the ArcA octamers accumulated ( ArcA activation ) after O2 withdrawal was slower than the rate of ArcA inactivation ( Figures 4b and c ) ., In this implementation of the modeled transition cycle , the numbers of ArcA octamers in the cytoplasm and bound to DNA did not reach those observed in the initial state before the second cycle of O2 supply began , indicating that a longer period is required to return to the fermentation state ., The numbers of FNR dimers bound to binding sites and of free ( cytoplasmic ) FNR dimers decreased when O2 was supplied to the system ( Figures 4g\u2013h ) , but the rate was slower than that for ArcA inactivation
, consistent with O2 consumption at the membrane , which can be sensed by ArcB to initiate inactivation of ArcA , but lowers the signal for inactivation of FNR ., When O2 was removed from the system ( from iteration 5001 ) FNR was activated over a similar timeframe to ArcA ( Figures 4b and g ) , which was again consistent with previous observations 15 ., As with ArcA , free FNR dimers and FNR monomers did not fully return to their initial states after O2 supply was withdrawn in the model , indicating that further iterations are required to reach steady-state ( Figure 4h\u2013i ) ., These results clearly indicate that the model is self-adaptive to the changes in O2 availability , and the reproducible responses prove the reliability and robustness of the model ., The ArcBA system simulated in this model is based on a preliminary biological assumption , and the agent-based model presented here should prove a reliable and flexible platform for exploring the key components of the system and testing new experimental findings ., In order to validate the model with biological measurements of FNR DNA-binding activity estimated using an FNR-dependent lacZ reporter , the ArcBA system agents were removed from the model by setting their agent numbers to zero ., The ArcBA system is an indirect O2 sensor and does not consume O2 , hence the FNR system was not affected by withdrawing ArcBA from the model , but this simplification increased simulation speed ., The O2 step length and other model parameters were estimated using the experimental data obtained at 31% AU ., Using the estimated O2 step length at 31% AU and defining the step length of O2 molecule , , as 0 at 0% AU , a linear model , , was constructed to predict the step lengths of O2 at other AU levels , where k\\u200a=\\u200a2 . 
1 and represents the O2 concentration at different AU levels ( Table 4 ) ., The O2 step lengths predicted by this model were used to validate the model at 85% , 115% and 217% AU , and the accuracy of the linear model was shown by the good correlation between the model and experimental data ., Profiles of five repetitive simulations in which the simplified model was used to predict the numbers of active FNR dimers in steady-state cultures of bacteria grown at different AU values are presented in Figure 5 ., At 31% AU , the model implied that FNR-mediated gene expression is unaffected compared to an anaerobic culture ( 0% AU ) , i . e . the number of FNR binding sites occupied in the nucleoid remained unchanged ( Figures 5a and e ) ., Even at 85% AU , \u223c80% of the FNR-binding sites remained occupied ( Figures 5b and f ) ., It was only when the O2 supply was equivalent to >115% AU that occupation of the FNR-binding sites in the nucleoid decreased ( Figures 5c , d , g and h ) ., These outputs matched the FNR activities calculated from the measurements of an FNR-dependent reporter ( Table 5 ) and thus demonstrate the ability of the model to simulate the general behavior of FNR dimers in steady-state cultures of E . 
coli ., A second validation approach using two FNR variants that are compromised in their ability to undergo monomer-dimer transitions was adopted ., The FNR variant FNR I151A can acquire an iron-sulfur cluster in the absence of O2 , but subsequent dimerization is impaired 26 ., The FNR D154A variant can also acquire an iron-sulfur cluster under anaerobic conditions , but does not form monomers in the presence of O2 26 ., To mimic the behavior of these two FNR variants , the interaction radius for FNR dimer formation was changed in the model ., Thus , the interaction distance for wild-type FNR monomers , which was initially set at 6 nm ( r3 , Table 3 ) , was increased to 2000 nm for the FNR D154A variant , essentially fixing the protein as a dimer , or decreased to 2 . 5 nm for the FNR I151A variant , making this protein predominantly monomeric under anaerobic conditions ., The results of simulations run under aerobic ( 217% aerobiosis ) and anaerobic conditions ( 0% aerobiosis ) suggested that under aerobic conditions wild-type FNR and FNR I151A should be unable to inhibit transcription from an FNR-repressed promoter ( i . e . the output from the reporter system is 100% ) , whereas FNR D154A should retain \u223c50% activity ( Table 6 ) ., Under anaerobic conditions , wild-type FNR was predicted to exhibit maximum repressive activity ( i . e . 
0% reporter output ) , whereas FNR I151A and FNR D154A mediated slightly enhanced repression compared to the simulated aerobic conditions ( Table 6 ) ., To test the accuracy of these predictions , the ability of wild-type FNR , FNR I151A and FNR D154A to repress transcription of a synthetic FNR-regulated promoter ( FFgal\u03944 ) under aerobic and anaerobic conditions was tested 27 ., The choice of a synthetic FNR-repressed promoter was made to remove complications that might arise due to iron-sulfur cluster incorporation influencing the protein-protein interactions between FNR and RNA polymerase; in the reporter system chosen FNR simply occludes the promoter of the reporter gene and as such DNA-binding by FNR controls promoter activity ., The experimental data obtained matched the general response of the FNR variants in the simulation , but not very precisely for FNR D154A , with the experimental data indicating more severe repression by FNR D154A under both aerobic and anaerobic conditions than predicted ( Table 6 ) ., This suggested that the interaction radius ( r2\\u200a=\\u200a5 nm; Table 3 ) , which controls the binding of FNR to its DNA target required adjustment to enhance DNA-binding of the FNR D154A variant ., Therefore , the simulations were rerun after adjusting r2 to 7 nm for all the FNR proteins considered here ., The results of the simulations for both FNR variants now matched the experimental data well ( Table 6 ) ., However , it was essential to ensure that the adjustment to r2 did not significantly influence the model output for wild-type FNR ., Therefore , simulations of the behaviour of wild-type FNR at 31 , 85 , 115 and 217% aerobiosis were repeated using the adjusted r2 value of 7 nm ., The model output was very similar to those obtained when r2 was at the initial value of 5 nm ( Table 7 ) ., These analyses imply that for FNR D154A , which is essentially fixed in a dimeric state , the rate of binding to the target DNA governs transcriptional 
repression , but for wild-type FNR the upstream monomer-dimer transition is the primary determinant controlling the output from the reporter ., The FNR switch has been the subject of several attempts to integrate extensive experimental data into coherent models that account for changes in FNR activity and target gene regulation in response to O2 availability 15 , 28\u201331 ., These models have provided estimates of active and inactive FNR in E . coli cells exposed to different O2 concentrations and the dynamic behavior of the FNR switch ., The ability of FNR to switch rapidly between active and inactive forms is essential for it to fulfill its physiological role as a global regulator and the models are able to capture this dynamic behavior ., Thus , it is thought that the \u2018futile\u2019 cycling of FNR between inactive and active forms under aerobic conditions has evolved to facilitate rapid activation of FNR upon withdrawal of O2 and hence the physiological imperative for rapid activation has determined the structure of the FNR regulatory cycle 30 , 31 ., However , it is less clear from these approaches how the system avoids undesirable switching between active and inactive states at low O2 availabilities ( micro-aerobic conditions , >0%\u2013<100% AU ) ., To achieve rapid FNR response times it has been suggested that minimizing the range of O2 concentrations that constitute a micro-aerobic environment , from the viewpoint of FNR , is advantageous 31 ., Unlike previous models of the FNR switch , the agent-based model described here recognizes the importance of geometry and location in biology ., This new approach reveals that spatial effects play a role in controlling the inactivation of FNR in low O2 environments ., Consumption of O2 by terminal oxidases at the cytoplasmic membrane and reaction of O2 with the iron-sulfur clusters of FNR in the cytoplasm present two barriers to inactivation of FNR bound to DNA in the nucleoid , thereby minimizing exposure of 
FNR to micro-aerobic conditions by maintaining an essentially anaerobic cytoplasm for AU values up to \u223c85% ., It is suggested that this buffering of FNR response makes the regulatory system more robust by preventing large amplitude fluctuations in FNR activity when the bacteria are exposed to micro-aerobic conditions or experience environments in which they encounter short pulses of low O2 concentrations ., Furthermore , investigation of FNR variants with altered oligomerization properties suggested that the monomer-dimer transition , mediated by iron-sulfur cluster acquisition , is the primary regulatory step in FNR-mediated repression of gene expression ., It is expected that the current model will act as a foundation for future investigations , e . g . predicting the effects of adding or removing a class of agent to identify the significant regulatory components of the system ., Knowledge of the rate of O2 supply to the E . coli cells was required in order to simulate the response of the regulators of cydAB and cyoABCDE to different O2 availabilities ., Therefore , un-inoculated chemostat vessels were used to measure dissolved O2 concentrations as a function of the percentage O2 in the input gas , Pi , in the absence of bacteria ., This allowed the rate at which O2 dissolves in the culture medium to be calculated , yielding 5 . 898 \u00b5mol\/L\/min ., The number of O2 molecules distributed to a single bacterial cell was then calculated as the product of this rate with NA , Vcell and n , where NA is the Avogadro constant ( 6 . 022\u00d710^23 ) , Vcell is the volume of an E . coli cell ( 0 . 3925 \u00b5m^3 ) and , as a constant for this equation , n ( 3 . 3\u00d710^\u22129 ) includes the unit transformations , min to sec ( 60^\u22121 ) and \u00b5mol to mol ( 10^\u22126 ) , and the time unit represented by an iteration ( 0 .
2 sec ) ., In the model , the individual agents ( Cyd , Cyo , ArcB , ArcA , FNR and O2 ) are able to move and interact within the confines of their respective locations in a 3-D cylinder representing the E . coli cell ., To control the velocity of agents , the maximal distances they can move in 3-D space during one iteration ( step length ) were pre-defined ( Table 4 ) ., Thus , a step length is pre-defined in the program header file ( . h ) and for each movement , this is multiplied by a randomly generated value within [ 0 , 1 ] to obtain a random moving distance , which in turn is directed along a 3-D direction ( movement vector ) that was also randomly generated within defined spatial regions ., An example is shown in Figure 6 to illustrate the movements of an O2 molecule when it enters the cell ., Interactions between agents depend upon the biological rules governing their properties and being in close enough proximity to react ., The interaction radius of an agent encapsulates the 3-D space within which reactions occur ., As the interaction radii cannot be measured , they were first estimated on the basis of known biological properties ., For the radii r1\u2013r4 , r12 and r13 ( Table 3 ) , arbitrary values were initially set at 31% AU , and the model was then trained to match the experimental result for the number of FNR dimers at 31% AU ( Table 5 ) ., The modeled output of FNR dimer number at steady-state was compared with the experimental data , and the difference suggested re-adjustment of the interaction radii ., The adjusted radii were then tested against the FNR dimer numbers at 85% , 115% and 217% AU ( Table 5 ) during model validation , and the results indicate that the interaction radii values are capable of describing the behavior of the system ., The interaction radii of Cyd and Cyo with O2 reflect their relative affinities for O2 ( i . e .
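The per-cell O2 bookkeeping and the movement rule described above can be sketched as follows; the numerical constants for the O2 calculation are those quoted in the text, while the step length and the Gaussian-based choice of a random direction are illustrative assumptions, not values or procedures taken from Table 4 or the FLAME implementation:

```python
import math
import random

random.seed(1)

# --- O2 delivery per cell per iteration (constants quoted in the Methods text) ---
RATE = 5.898           # O2 dissolution rate, umol/L/min
N_A = 6.022e23         # Avogadro constant, 1/mol
V_CELL_L = 0.3925e-15  # cell volume: 0.3925 um^3 expressed in litres
N_CONST = 3.3e-9       # umol->mol (1e-6) x min->s (1/60) x 0.2 s per iteration

o2_per_iteration = RATE * N_CONST * N_A * V_CELL_L
print(round(o2_per_iteration, 1))  # ~4.6 O2 molecules per cell per 0.2 s iteration

# --- random agent movement (step length here is an invented illustrative value) ---
STEP_LENGTH_NM = 40.0

def random_step(pos, step_length=STEP_LENGTH_NM):
    """Scale the pre-defined step length by a uniform value in [0, 1] and
    move along a randomly generated 3-D direction (movement vector)."""
    v = [random.gauss(0.0, 1.0) for _ in range(3)]   # random direction on the sphere
    norm = math.sqrt(sum(c * c for c in v)) or 1.0
    dist = random.random() * step_length             # random moving distance
    return tuple(p + dist * c / norm for p, c in zip(pos, v))

def can_react(a, b, radius_nm):
    """Agents may interact only when separated by less than the interaction radius."""
    return math.dist(a, b) <= radius_nm

o2_pos = random_step((0.0, 0.0, 0.0))
print(can_react(o2_pos, (0.0, 0.0, 0.0), STEP_LENGTH_NM))  # True: one step <= step length
```

The first part simply multiplies the quoted rate, conversion constant, Avogadro constant and cell volume, which is consistent with the unit transformations listed for n.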
Cyd has a high O2 affinity and thus reacts more readily , 7 nm interaction radius , than Cyo , which has a lower affinity for O2 , 3 nm interaction radius ) ., As , thus far , no accurate biological data are available for the ArcBA system , the radii r5\u2013r11 were arbitrarily defined and were refined by training the model to match current biological expectations ., The rod-shaped E . coli cell was modeled as a cylinder ( 500 nm\u00d72000 nm ) 32 with the nucleoid represented as a sphere with a diameter of 250 nm at the centre of the cell ., The experimentally-based parameters and locations of the agents in their initial state are listed in Table 2 ., As the number of ArcB molecules has not been determined experimentally , this value was arbitrarily assigned ( see above ) ., The interaction rules for the agents are shown in Table 3 ( additional descriptions of an exemplar agent ( O2 ) and the rules for ArcBA and FNR are provided in Table S1 and Text S1 ) ., These rules , combined with the interaction radii , determine the final status of the system ., The scale of the model is such that high performance computers are required to implement it , and the flexible agent-based supercomputing framework FLAME ( http:\/\/www . flame . ac . uk ) was used to enable the simulation 33 , 34 ., For more information on FLAME see Figure S2 and Text S2 ., Plasmids encoding the FNR variants were constructed by site-directed mutagenesis ( QuikChange , Agilent ) of pGS196 , which contains a 5 . 65 kb fragment of wild-type fnr ligated into pBR322 35 ., The three isogenic plasmids pGS196 ( FNR ) , pGS2483 ( FNR I151A ) and pGS2405 ( FNR D154A ) were used to transform E . coli JRG4642 ( an fnr lac mutant strain ) containing a pRW50-based reporter plasmid carrying the lac-operon under the control of the FFgal\u03944 promoter 27 ., \u03b2-Galactosidase assays were carried out as described previously on strains grown in LBK medium at pH 7 .
2 containing 20 mM glucose 36 , 37 ., Cultures were grown either aerobically ( 25 ml culture in a 250 ml flask at 250 rpm agitation with 1\u2236100 inoculation ) or anaerobically ( statically in a fully sealed 17 ml tube with 1\u223650 inoculation ) ., Cultures ( three biological replicates ) were grown until mid-exponential phase ( OD600 = 0 . 35 ) before assaying for \u03b2-galactosidase activity .","headings":"Introduction, Results\/Discussion, Methods","abstract":"In the presence of oxygen ( O2 ) the model bacterium Escherichia coli is able to conserve energy by aerobic respiration ., Two major terminal oxidases are involved in this process - Cyo has a relatively low affinity for O2 but is able to pump protons and hence is energetically efficient; Cyd has a high affinity for O2 but does not pump protons ., When E . coli encounters environments with different O2 availabilities , the expression of the genes encoding the alternative terminal oxidases , the cydAB and cyoABCDE operons , is regulated by two O2-responsive transcription factors , ArcA ( an indirect O2 sensor ) and FNR ( a direct O2 sensor ) ., It has been suggested that O2-consumption by the terminal oxidases located at the cytoplasmic membrane significantly affects the activities of ArcA and FNR in the bacterial nucleoid ., In this study , an agent-based modeling approach has been taken to spatially simulate the uptake and consumption of O2 by E . coli and the consequent modulation of ArcA and FNR activities based on experimental data obtained from highly controlled chemostat cultures ., The molecules of O2 , transcription factors and terminal oxidases are treated as individual agents and their behaviors and interactions are imitated in a simulated 3-D E . coli cell ., The model implies that there are two barriers that dampen the response of FNR to O2 , i . e .
consumption of O2 at the membrane by the terminal oxidases and reaction of O2 with cytoplasmic FNR ., Analysis of FNR variants suggested that the monomer-dimer transition is the key step in FNR-mediated repression of gene expression .","summary":"The model bacterium Escherichia coli has a modular electron transport chain that allows it to successfully compete in environments with differing oxygen ( O2 ) availabilities ., It has two well-characterized terminal oxidases , Cyd and Cyo ., Cyd has a very high affinity for O2 , whereas Cyo has a lower affinity , but is energetically more efficient ., Expression of the genes encoding Cyd and Cyo is controlled by two O2-responsive regulators , ArcBA and FNR ., However , it is not clear how O2 molecules enter the E . coli cell and how the locations of the terminal oxidases and the regulators influence the system ., An agent-based model is presented that simulates the interactions of O2 with the regulators and the oxidases in an E . coli cell ., The model suggests that O2 consumption by the oxidases at the cytoplasmic membrane and by FNR in the cytoplasm protects FNR bound to DNA in the nucleoid from inactivation and that dimerization of FNR in response to O2 depletion is the key step in FNR-mediated repression ., Thus , the focus of the agent-based model on spatial events provides information and new insight , allowing the effects of dysregulation of system components to be explored by facile addition or removal of agents .","keywords":"systems biology, computer and information sciences, network analysis, regulatory networks, biology and life sciences, computational biology","toc":null} +{"Unnamed: 0":1853,"id":"journal.pcbi.1006171","year":2018,"title":"Thalamocortical and intracortical laminar connectivity determines sleep spindle properties","sections":"Sleep marks a profound change of brain state as manifested by the spontaneous emergence of characteristic oscillatory activities ., In humans , sleep spindles consist of 
waxing-and-waning bursts of field potentials oscillating at 11\u201315 Hz lasting for 0 . 5\u20133 s and recurring every 5\u201315 s ., Experimental and computational studies have identified that both the thalamus and the cortex are involved in the generation and propagation of spindles ., Spindles are known to occur in the isolated thalamus after decortication in vivo and in thalamic slice recordings in vitro 1 , 2 , demonstrating that the thalamus is sufficient for spindle generation ., In in-vivo conditions , the cortex has been shown to be actively involved in the initiation and termination of spindles 3 as well as the long-range synchronization of spindles 4 , 5 ., Multiple lines of evidence indicate that spindle oscillations are linked to memory consolidation during sleep ., Spindle density is known to increase following training in hippocampal-dependent 6 as well as procedural 7 memory tasks ., Spindle density also correlates with better memory retention following sleep in verbal tasks 8 , 9 ., More recently , it was shown that pharmacologically increasing spindle density leads to better post-sleep performance in hippocampal-dependent learning tasks 10 ., Furthermore , spindle activity metrics , including amplitude and duration , were predictive of learning performance 11\u201313 , suggesting that spindle event occurrence , amplitude , and duration influence memory consolidation ., In human recordings , spindle occurrence and synchronization vary based on the recording modality ., Spindles recorded with magnetoencephalography ( MEG ) are more frequent and less synchronized , as compared to those recorded with electroencephalography ( EEG ) 14 ., It has been proposed that the contrast between MEG and EEG spindles reflects the differential involvement of the core and matrix thalamocortical systems , respectively 15 ., Core projections are focal to layer IV , whereas matrix projections are widespread in upper layers 16 ., This hypothesis is supported by human
laminar microelectrode data which demonstrated two spindle generators , one associated with middle cortical layers and the other superficial 17 ., Taken together , these studies suggest that there could be two systems of spindle generation within the cortex and that these correspond to the core and matrix anatomical networks ., However , the network and cellular mechanisms whereby the core and matrix systems interact to generate both independent and co-occurring spindles across cortical layers are not understood ., In this study , we developed a computational model of thalamus and cortex that replicates known features of spindle occurrence in MEG and EEG recordings ., While our previous efforts have been focused on the neural mechanisms involved in the generation of isolated spindles 5 , in this study we identified the critical mechanisms underlying the spontaneous generation of spindles across different cortical layers and their interactions ., Histograms of EEG and MEG gradiometer inter-spindle intervals are shown in Fig 1C ., For neither channel type are ISIs distributed normally as determined by Lilliefors tests ( D2571 = 0 . 1062 , p = 1 . 0e-3 , D4802 = 0 . 1022 , p = 1 . 0e-3 ) , suggesting that traditional descriptive statistics are of limited utility ., However , the ISI at the peak of the respective distributions is longer for EEG than for MEG ., In addition , a two-sample Kolmogorov-Smirnov test confirms that EEG and MEG ISIs are not drawn from the same distribution ( D2571 , 4802 = 0 . 079 , p = 1 . 5e-9 ) ., While the data were not found to be drawn from any parametric distribution with 95% confidence , an exponential fit ( MEG ) and lognormal fit ( EEG ) are shown in red overlay for illustrative purposes ., These data are consistent with previous empirical recordings 18 and suggest that sleep spindles have different properties across superficial vs .
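The two-sample comparison used above can be illustrated with a self-contained Kolmogorov-Smirnov D statistic computed on surrogate ISI samples; the exponential and lognormal parameters below are invented for illustration and do not reproduce the reported D or p values:

```python
import math
import random

def ks_two_sample_stat(x, y):
    """Two-sample Kolmogorov-Smirnov D statistic: the maximum distance
    between the empirical CDFs of the two samples."""
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    i = j = 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(xs) - j / len(ys)))
    return d

random.seed(0)
# Surrogate inter-spindle intervals: an exponential-like sample ("MEG") and a
# lognormal-like sample ("EEG"); means and spreads are arbitrary choices.
meg_isi = [random.expovariate(1 / 5.0) for _ in range(4000)]
eeg_isi = [random.lognormvariate(math.log(7.0), 0.6) for _ in range(2500)]
d = ks_two_sample_stat(meg_isi, eeg_isi)
print(d)  # a large D indicates the two ISI samples come from different distributions
```

The merge-style loop walks both sorted samples once, so the statistic is computed in O(n log n) overall, dominated by the sorts.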
deep cortical layers ., To investigate the mechanisms behind distinct spindle properties across cortical locations as observed in EEG and MEG signals , we constructed a model of thalamus and cortex that incorporated the two characteristic thalamocortical systems: core and matrix ., These systems contained distinct thalamic populations that projected to the superficial ( matrix ) and middle ( core ) cortical layers ., Four cell types were used to model distinct cell populations: thalamocortical relay ( TC ) and reticular ( RE ) neurons in the thalamus , and excitatory pyramidal ( PY ) and inhibitory ( IN ) neurons in each of three layers of the cortical network ., A schematic representation of the synaptic connections and cortical geometry of the network model is shown in Fig 2 ., In the matrix system , both thalamocortical ( from matrix TCs to the apical dendrites of layer 5 pyramidal neurons ( PYs ) located in layer 1 ) and corticothalamic synapses ( from layer 5 PYs back to the thalamus ) formed diffuse connections ., The core system had a focal connection pattern in both thalamocortical ( from core TCs to PYs in layer III\/IV ) and corticothalamic ( from layer VI PYs to the thalamus ) projections ., Because spindles recorded in the EEG signal reflect the activity of superficial layers while MEG records spindles originating from deeper layers ( Fig 1 and 19 ) , we compared the activity of the model\u2019s matrix system , which has projections to the superficial layers , to empirical EEG recordings and compared the activity in model layer 3\/4 to empirical MEG recordings ., In agreement with our previous studies 3 , 5 , 20 , 21 , simulated stage 2 sleep consisted of multiple spindle events involving thalamic and cortical neuronal populations ( Fig 3 ) ., During one such typical spindle event ( highlighted by the box in Fig 3A and 3B ) , cortical and thalamic neurons in both the core and matrix system had elevated and synchronized firing ( Fig 3A bottom ) ,
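A toy one-dimensional sketch of the focal (core) versus diffuse (matrix) projection geometry, including the in-degree weight scaling the model uses; the network size and fanout values here are illustrative only, not the model's actual dimensions:

```python
N = 100                           # toy 1-D sheet of cortical neurons (illustrative)
CORE_FANOUT, MATRIX_FANOUT = 2, 20  # matrix projections ~10x broader, as in the baseline model

def thalamocortical_targets(tc_index, fanout, n=N):
    """Indices of cortical neurons reached by thalamic neuron tc_index;
    wraps around the edges of the toy sheet."""
    return [j % n for j in range(tc_index - fanout, tc_index + fanout + 1)]

def scaled_weight(base, n_inputs):
    """Synaptic strength scaled by the number of inputs to the target neuron,
    so broader (matrix) projections yield weaker individual synapses."""
    return base / n_inputs

core_targets = thalamocortical_targets(50, CORE_FANOUT)
matrix_targets = thalamocortical_targets(50, MATRIX_FANOUT)
print(len(core_targets), len(matrix_targets))  # 5 41
# a core synapse is individually stronger than a matrix synapse
print(scaled_weight(1.0, len(core_targets)) > scaled_weight(1.0, len(matrix_targets)))  # True
```

This reproduces, in miniature, the two properties emphasized in the text: the matrix reaches many more cortical targets per thalamic cell, and each of its individual thalamocortical synapses is correspondingly weaker.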
consistent with previous in-vivo experimental recordings 22 ., In the model , spindles within each system were initiated from spontaneous activity within cortical layers and then spread to thalamic neurons , similar to our previous study 5 ., The spontaneous activity due to miniature EPSPs in glutamatergic cortical synapses led to fluctuations in membrane voltage and sparse firing ., At random times , the miniature EPSPs summed such that a small number of locally connected PY neurons spiked within a short window ( <100 ms ) , which then induced spiking in thalamic cells through corticothalamic connections ., This initiated spindle oscillations in the thalamic population mediated by TC-RE interactions as described before 20 , 23 , 24 ., Thalamic spindles in turn propagated to the neocortex leading to joint thalamocortical spindle events whose features were shaped by the properties of thalamocortical and corticothalamic connections ., In this study , we examined how the process of spindle generation occurs in a thalamocortical network with mutually interacting core and matrix systems , wherein the thalamic network of each system is capable of generating spindles independently ., Based on the anatomical data 16 , the main difference between the modeled core and matrix systems was the radii or fanout of connections in thalamocortical and corticothalamic projections ( in the baseline model , the fanout was 10 times wider for the matrix compared to the core system ) ., Furthermore , the strength of each synaptic connection was scaled by the number of input connections to each neuron 25 , 26 , leading to weaker individual thalamocortical projections in the matrix as compared to the core ., These differences in the strength and fanout of thalamocortical connectivity resulted in distinctive core and matrix spindle properties ( see Fig 3A , right vs left ) ., First , both cortical and thalamic spindles were more spatially focal in the core system as only a small subset of
neurons was involved in a typical spindle event at any given time ., In contrast , within the matrix system spindles were global ( involving the entire cell population ) and highly synchronous across all cell types ., These results are consistent with our previous studies 5 and suggest that the connectivity properties of thalamocortical projections determine the degree of synchronization in the cortical network ., Second , spindle density was higher in the core system compared to the matrix system ., At every spatial location in the cortical network of the core system , the characteristic time between spindles was shorter compared to that between spindles in the matrix system ( Fig 3A left vs right ) ., In order to quantify the spatial and temporal properties of spindles , we computed an estimated LFP as an average of the dendritic synaptic currents for every group of 100 contiguous cortical neurons ., LFPs of the core system were estimated from the currents generated in the dendrites of layer 3\/4 neurons , while the LFP of the matrix system was computed from the dendritic currents of layer 5 neurons , located in the superficial cortical layers ( Fig 2 ) ., After applying a bandpass filter ( 6\u201315 Hz ) , the spatial properties of estimated core and matrix LFP ( Fig 3C ) closely matched the MEG and EEG recordings , respectively ( Fig 1 ) ., In subsequent analyses , we used this estimated LFP to further examine the properties of the spindle oscillations in the core and matrix systems ., We identified spindles in the estimated LFP using an automated spindle detection algorithm similar to that used in experimental studies ( details are provided in the Methods section ) ., The spindle density , defined as the number of spindles occurring per minute of simulation time , was greater in the core compared to the matrix ( Fig 4A ) as confirmed by an independent-sample t-test ( t ( 18 ) = 7 . 06 , p<0 . 001 across estimated LFP channels and t ( 2060 ) = 19 . 2 , p<0 .
001 across all spindles ) ., The results of this analysis agree with the experimental observation that MEG spindles occur more frequently than EEG spindles ., While the average spindle density was significantly different between the core and matrix , in both systems the distribution of inter-spindle intervals peaks below 4 seconds and has a long tail ( Fig 4B ) ., A two-sample KS test comparing the distributions of inter-spindle intervals confirmed that the intervals were derived from different distributions ( D1128 , 932 = 0 . 427 , p<0 . 001 ) ., The peak ISI of the core was shorter than that of the matrix system , suggesting that the core network experiences shorter and more frequent quiescence periods than the matrix population ., Furthermore , maximum-likelihood fits of the probability distributions ( red line in Fig 4B ) confirmed that the intervals of spindle occurrence cannot be described by a normal distribution ., The long tails of the distributions suggest that a Poisson-like process , as opposed to a periodic process , is responsible for spindle generation ., This observation is consistent with previous experimental results 18 , 27 and suggests that our computational model replicates essential statistical properties of spindles observed in in vivo experiments ., Several other features of simulated core and matrix spindles were similar to those found in experimental recordings ., The average spindle duration was significantly higher in the core compared to the matrix system ( Fig 4C ) as confirmed by an independent-sample t-test ( t ( 2060 ) = 16 . 3 , p<0 .
001 ) ., To quantify the difference in the spatial synchrony of spindles between the core and matrix systems , we computed the spatial correlation 28 between LFP groups at different distances ( measured by the location of a neuron group in the network ) ., The correlation strength decreased with distance for both systems ( Fig 4D ) ., However , the spindles in the core system were less spatially correlated overall when compared to spindles in the matrix system ., Simultaneous EEG and MEG measurements have found that about 50% of MEG spindles co-occur with EEG spindles , while about 85% of EEG spindles co-occur with MEG spindles 29 ., Further , a spindle detected in the EEG signal is found to co-occur with about 66% more MEG channels than a spindle detected in MEG ., Our model generates spindling patterns consistent with these features ., The co-occurrence probability revealed that during periods of spindles in the matrix system , there was about 80% probability that core was also generating spindles ( Fig 4E ) ., In contrast , there was only a 40% probability of observing a matrix spindle during a core system spindle ., An independent-sample t-test confirmed this difference between the systems across estimated LFP channels ( t ( 14 ) = 31 . 4 , p<0 . 001 ) ., Furthermore , we observed that the number of LFP channels that were simultaneously activated during a spindle event in the core system was higher when a spindle co-occurred in the matrix versus times when the spindles only occurred in the core ( Fig 4F , t ( 14 ) = 67 . 2 , p<0 . 
001 ) ., This suggests that the co-occurrences of spindles in both systems are rare events but lead to widespread activation in both the core and matrix when they take place ., Finally , we examined the delay between spindles in the core and matrix systems ( Fig 4G ) ., We observed that on average ( red line in Fig 4G ) , the spindle originated in the core system and then spread to the matrix system with a mean delay of about 300 ms ( delay was measured as the difference in onset times between co-occurring spindles within a window of 2 , 500 ms; negative delay values indicate spindles in which the core preceded the matrix ) ., The peak at -750 ms corresponds to spindles originating from the core system , while the peak at +750 ms suggests that at some network sites , spindles originated in the matrix system and then spread to the core system ., While there were almost no events in which the matrix preceded the core by more than 1 sec ( right of Fig 4G ) , many events occurred in which the core preceded the matrix by more than 1 sec ( left of Fig 4G ) ., In sum , these results suggest that spindles were frequently initiated locally in the core system , then propagated to and spread throughout the matrix system ., This can trigger spindles at the other locations of the core , so eventually , even regions in the core system that were not previously involved become recruited ., These findings explain the experimental result that spindles are observed in more MEG channels when they also co-occur in the EEG 29 ., We leveraged our model to examine factors that may influence spindle occurrence across cortical layers ., The main difference between the core and matrix systems in the model was the breadth or fanout of the thalamic projections to the cortical network ., Neuroanatomical studies suggest that the core system has focused projections while the matrix system projects widely 16 ., Here , we assessed the impacts of this characteristic by systematically varying the
connection footprint of the thalamic matrix to superficial cortical regions , while holding the fanout of the thalamic core to layer 3\/4 projections constant ., We also modulated the corticothalamic projections in proportion to the thalamocortical projections ., Using the estimated LFP from the cortical layers corresponding to the core and matrix systems , respectively , we quantified various spindle properties as the fanout was modulated ., Spindle density ( the number of spindles per minute ) in both layers was sensitive to the matrix system\u2019s fanout ., ANOVA confirmed significant effects of fanout and layer location , as well as an interaction between layer and fanout ( fanout: F ( 6 , 112 ) = 66 . 4; p<0 . 01 , Layer: F ( 1 , 112 ) = 65 . 18; p<0 . 01 and interaction F ( 6 , 112 ) = 22 . 8; p<0 . 01 ) ., When the matrix and core thalamus had similar fanouts ( ratio 1 and 2 . 5 in Fig 5B ) , we observed a slightly higher density of spindles in the matrix than in the core system ., This observation is consistent with the properties of these circuits ( see Fig 2 ) , wherein the matrix system contains direct reciprocal projections connecting cortical and thalamic subpopulations and the core system routes indirect projections from cortical ( layer III\/IV ) neurons through layer VI to the thalamic nucleus ., When the thalamocortical fanout of the matrix system was increased to above ~5 times the size of the core system , the density of spindles in the matrix system was reduced to around 4 spindles per minute ., Interestingly , the density of spindles in the core system was also reduced when the thalamocortical fanout of the matrix system was further increased to above ~10 times that of the core system ( ratio above 10 in Fig 5B ) ., This suggests that spindle density in both systems is determined not only by the radius of thalamocortical vs .
corticothalamic projections , but also by interactions between the systems among the cortical layers ., We further expound on the role of these cortical connections in the next section ., We also examined the effect of thalamocortical fanout on the distribution of inter-spindle intervals ( Fig 5C ) ., Although the mean value was largely independent of the projection radius , a long-tailed distribution was observed for all values of fanout in the core ., In contrast , in the matrix system the mean and peak of the inter-spindle interval shifted to the right ( longer intervals ) with increased fanout ., With large fanouts , the majority of matrix system spindles had very long periods of silence ( 10\u201315 s ) between them ., This suggests that thalamocortical fanout determines the peak of the inter-spindle interval distribution , but does not alter the stochastic nature of spindle occurrence ., The degree of thalamocortical fanout also influenced the co-occurrence of spindles in the core and matrix systems ( Fig 5D ) ., Increasing the fanout of the matrix system reduced spindle co-occurrence between the two systems ., This reduction resulted mainly from lower spindle density in both layers ., However , the co-occurrence of core spindles during matrix spindles was higher for all values of fanout when matrix thalamocortical projections were at least 5 times broader than core projections ., This suggests that the difference in spindle co-occurrence between EEG and MEG as observed in experiments 14 depends mainly on the difference in the radius of thalamocortical projections between the core and matrix systems , while the overall level of co-occurrence is determined by the interaction between cortical layers ., We examined how spatial correlations during periods of spindles vary depending on the fanout of thalamocortical projections ., The spatial correlation quantifies the degree of synchronization in the estimated LFP signals of network locations as a function of the distance
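The spatial-correlation measure can be illustrated on synthetic LFP-like traces in which a shared spindle-band component decays with distance; all signal parameters below (sampling rate, decay constant, noise level) are invented for illustration:

```python
import math
import random

random.seed(2)

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Synthetic LFP groups: a shared 10 Hz "spindle" component whose weight decays
# with distance from a reference group, plus independent noise per group.
t = [i / 1000.0 for i in range(2000)]  # 2 s at 1 kHz
shared = [math.sin(2 * math.pi * 10 * s) for s in t]

def lfp_group(distance, decay=0.2):
    w = math.exp(-decay * distance)  # synchrony falls off with distance
    return [w * v + random.gauss(0.0, 0.5) for v in shared]

ref = lfp_group(0)
corrs = [pearson(ref, lfp_group(d)) for d in (1, 5, 10)]
print(corrs)  # correlation decreases with distance
```

With this construction the correlation between a reference group and more distant groups falls off monotonically on average, which is the qualitative behavior described for both the core and matrix systems.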
between them ., As expected , increasing the distance reduced the spatial correlation ( Fig 4D ) ., We next measured the mean value of the spatial correlation for each fanout condition ., The mean correlation increased as a function of the fanout in the matrix system ( Fig 5A ) ., However , the spatial correlation within the core , and between the core and matrix systems , did not change with increases in the fanout , suggesting that the spatial synchronization of core spindles is largely influenced by thalamocortical fanout but not by interactions between the core and matrix systems as was observed for spindle density ., Does intra-cortical excitatory connectivity between layer 3\/4 of the core system and layer 5 of the matrix system affect spindle occurrence ?, To answer this question , we first varied the strength of excitatory connections ( AMPA and NMDA ) from the core to matrix pyramidal neurons ( Fig 6A and 6B ) ., Here the reference point ( or 100% ) corresponds to the strength used in previous simulations , i . e . half the strength of a within-layer connection ., The spindle density varied with the strength of the interlaminar connections ( Fig 6A ) ., For low connectivity strengths ( below 100% ) , the spindle density of the matrix system was reduced significantly , while at high strengths ( above 140% ) the matrix system spindle density exceeded that of the control ( 100% ) ., There were significant effects of connection strength and layer on the spindle density , as well as an interaction between the two factors ( connection strength: F ( 5 , 96 ) = 24 . 7; p<0 . 01 , layer: F ( 5 , 96 ) = 386 . 6; p<0 . 01 and interaction F ( 5 , 96 ) = 36 . 9; p<0 . 
01 ) that suggests a layer-specific effect of modulating excitatory interlaminar connection strength ., Similar to the spindle density , spindle co-occurrence between the core and matrix systems also increased as a function of interlaminar connection strength , reaching 80% for both the core and matrix at 150% connectivity ., In contrast , changing the strength of excitatory connections from layer 5 to layer 3\/4 had little effect on the spindle density ( Fig 6C ) ., Taken together , these results suggest that the strength of the cortical core-to-matrix excitatory connections is one of the critical factors in determining spindle density and co-occurrence among spindles across both cortical laminae and the core\/matrix systems ., Using computational modeling and data from EEG\/MEG recordings in humans , we found that the properties of sleep spindles vary across cortical layers and are influenced by thalamocortical , corticothalamic and cortico-laminar connections ., This study was motivated by empirical findings demonstrating that spindles measured in EEG have different synchronization properties from those measured in MEG 14 , 29 ., EEG spindles occur less frequently and more synchronously in comparison to MEG spindles ., Our new study confirms the speculation that anatomical differences between the matrix thalamocortical system , which has broader projections that target the cortex superficially , and the core system , which consists of focal projections which target the middle layers , can account for the differences between EEG and MEG signals ., Furthermore , we discovered that the strength of corticocortical feedforward excitatory connections from the core to matrix neurons determines the spindle density in the matrix system , which predicts a specific neural mechanism for the interactions observed between MEG and EEG spindles ., There were several novel findings in this study ., First , we developed a novel computational model of sleep spindling in which
spindles manifested as a rare but globally synchronous occurrence in the matrix pathway and a frequent but local occurrence in the core pathway ., In other words , many spontaneous spindles occurred locally in the core system but only occasionally did this lead to globally organized spindles appearing in the matrix system ., As a result , only a fraction of spindles co-occurred between the pathways ( about 80% in matrix and 40% in core pathway ) ., This is consistent with data reported for EEG vs MEG in vivo ( Fig 1 ) ., In contrast , in our previous models 3 , 5 , spindles were induced by external stimulation and always occurred simultaneously in the core and matrix systems , but with different degrees of internal synchrony ., In addition , these studies did not examine how the core and matrix pathways interact during spontaneously occurring spindles ., Second , in this study we found that the distribution of the inter-spindle intervals between spontaneously occurring spindles in both the core and matrix pathways had long tails similar to a log-normal distribution ., This result is consistent with analyses of MEG and EEG data reported in this study and in our prior study 18 ., In our previous models 3 , 5 , spindles were induced by external stimulation and the statistics of spontaneously occurring spindles could not be explored ., Third , we demonstrated that the strength of thalamocortical and corticothalamic connections determined the density and occurrence of spontaneously generated spindles ., The spindle density was higher in the core pathway as compared to the matrix pathway with high co-occurrence of core spindles with matrix spindles ., These findings were corroborated by experimental evidence from EEG\/MEG recordings ., Finally , we reported that laminar connections between the core and matrix could be a significant factor in determining spindle density , suggesting a possible mechanism of learning ., When the strength of these connections was increased 
in the model , there was a significant increase in spindle occurrence , similar to the experimentally observed increase in spindle density following recent learning 10 ., The origin of sleep spindle activity has been linked to thalamic oscillators based on a broad range of experimental studies 2 , 30 , 31 ., The excitatory and inhibitory connections between thalamic relay and reticular neurons are critical in generating spindles 20 , 23 , 32 , 33 ., However , in the intact brain , the properties of sleep spindles are also shaped by cortical networks ., Indeed , the onset of a spindle oscillation and its termination are both dependent on cortical input to the thalamus 3 , 34 , 35 ., In model studies , spindle oscillations in the thalamus are initiated when sufficiently strong activity in the cortex activates the thalamic network , and spindle termination is partially mediated by the desynchronization of corticothalamic input towards the end of spindles 3 , 32 ., However , in simultaneous cortical and thalamic studies in humans , thalamic spindles were found to be tightly coupled to a preceding downstate , which in turn was triggered by converging cortical downstates 36 ., Further modeling is required to reconcile these experimental results ., In addition , thalamocortical interactions are known to be integral to the synchronization of spindles 5 , 33 ., In our new study , the core thalamocortical system revealed relatively high spindle density produced by focal and strong thalamocortical and corticothalamic projections ., Such a pattern of connectivity between core thalamus and middle cortical layers allowed input from a small region of the cortex to initiate and maintain focal spindles in the core system ., In contrast , the matrix system had relatively weak and broad thalamocortical connections requiring synchronized activity in broader cortical regions in order to initiate spindles in the thalamus ., We previously reported 5 that ( 1 ) within a single spindle event 
the synchrony of the neuronal firing is higher in the matrix than in the core system; ( 2 ) spindles are initiated in the core and with some delay in the matrix system ., The overall density of core and matrix spindle events was , however , the same in these earlier models ., In the new study we extended these previous results by explaining differences in the global spatio-temporal structure of spindle activity between the core and matrix systems ., Our new model predicts that the focal nature of the core thalamocortical connectivity can explain the more frequent occurrence of spindles in the core system as observed in vivo ., The strength of core-to-matrix intracortical connections determined the probability of core spindles to \u201cpropagate\u201d to the matrix system ., In our new model core spindles remained localized and never involved the entire network , again in agreement with in vivo data ., We observed that the distribution of inter-spindle intervals reflects a non-periodic stochastic process such as a Poisson process , which is consistent with previous data 18 , 27 ., The state of the thalamocortical network , determined by the level of the intrinsic and synaptic conductances , contributed to the stochastic nature of spindle occurrence ., Building on our previous work 21 , we chose the intrinsic and synaptic properties in the model to match those in stage 2 sleep , a brain state when concentrations of acetylcholine and monoamines are reduced 37\u201339 ., As a consequence , the K-leak currents and excitatory intracortical connections were set higher than in an awake-like state due to the reduction of acetylcholine and norepinephrine 40 ., The high K-leak currents resulted in sparse spontaneous cortical firing during periods between spindles with occasional surges of local synchrony sustained by recurrent excitation within the cortex that could trigger spindle oscillations in the thalamus ., Note that this mechanism may be different from spindle 
initiation during slow oscillation , when spindle activity appears to be initiated during Down state in thalamus 35 ., Furthermore , the release of miniature EPSPs and IPSPs in the cortex was implemented as a Poisson process that contributed to the stochastic nature of the baseline activity ., All these factors led to a variable inter-spindle interval with long periods of silence when activity in the cortex was not sufficient to induce spindles ., While it is known that an excitable medium with noise has a Poisson event distribution in reduced systems 41 , here we show that a detailed biophysical model of spindle generation may lead to a Poisson process due to specific intrinsic and network properties ., Layer IV excitatory neurons have a smaller dendritic structure compared to Layer V excitatory neurons 42 ., Direct recordings and detailed dendritic reconstructions have shown large post-synaptic potentials in layer IV due to core thalamic input 42 , 43 ., We examined the role of thalamocortical and corticothalamic connections in a thalamocortical network with only one cortical layer ( S1 Fig ) ., We found that increasing the synaptic strength of thalamocortical and corticothalamic connections both increased the density and duration of spindles , but did not influence their synchronization ( S1A Fig ) ., In contrast , changing fanout led to an increase in spindle density , duration , and synchronization ., Furthermore , we examined the impact of thalamocortical and corticothalamic connections individually without applying a synaptic normalization rule ( see Methods ) ., We observed that the thalamocortical connections had a higher impact on spindle properties than corticothalamic connections ( S1B Fig ) ., In our full model with multiple layers , which included a weight normalization rule and wider fanout of the matrix pathway ( based on experimental findings 16 ) , the synaptic strength of each thalamocortical synapse in the core pathway was higher than 
that in the matrix pathway ., The exact value of the synaptic strength was chosen from the reduced model to match experimentally observed spindle durations , as observed in EEG\/MEG and laminar recordings 17 ., The simultaneous EEG and MEG recordings reported here and in our previous publications 14 , 29 revealed that ( a ) MEG spindles occur earlier compared to the EEG spindles , and ( b ) EEG spindles are seen in a higher number of the MEG sensors compared to the spindles occurring only in the MEG recordings ., This resembles our current findings , in which the number of regions that were spindling in the core system during a matrix spindle was higher than when there was no spindle in the matrix system ., Further , the distribution of spindle onset delays between the systems indicates that during matrix spindles some neurons of the core system fired early , and presumably contributed to the initiation of the matrix spindle , while others fired late and were recruited ., Taken together , all the evidence suggests a characteristic and complex spatiotemporal evolution of spindle activity during co-occurring spindles , where spindles in the core spread to the matrix and in turn activate wider regions in the core leading to synchronized activation across cortical layers that is reflected by strong activity in both EEG and MEG ., Thus , the model predicts that co-occurring spindles could lead to the recruitment of large cortical areas , which indeed has been reported in previous studies 28 , 44 ., At the same time , local spindles occurring in the model within deep cortical l","headings":"Introduction, Results, Discussion, Materials and methods","abstract":"Sleep spindles are brief oscillatory events during non-rapid eye movement ( NREM ) sleep ., Spindle density and synchronization properties are different in MEG versus EEG recordings in humans and also vary with learning performance , suggesting spindle involvement in memory consolidation ., Here , using 
computational models , we identified network mechanisms that may explain differences in spindle properties across cortical structures ., First , we report that differences in spindle occurrence between MEG and EEG data may arise from the contrasting properties of the core and matrix thalamocortical systems ., The matrix system , projecting superficially , has wider thalamocortical fanout compared to the core system , which projects to middle layers , and requires the recruitment of a larger population of neurons to initiate a spindle ., This property was sufficient to explain lower spindle density and higher spatial synchrony of spindles in the superficial cortical layers , as observed in the EEG signal ., In contrast , spindles in the core system occurred more frequently but less synchronously , as observed in the MEG recordings ., Furthermore , consistent with human recordings , in the model , spindles occurred independently in the core system but the matrix system spindles commonly co-occurred with core spindles ., We also found that the intracortical excitatory connections from layer III\/IV to layer V promote spindle propagation from the core to the matrix system , leading to widespread spindle activity ., Our study predicts that plasticity of intra- and inter-cortical connectivity can potentially be a mechanism for increased spindle density as has been observed during learning .","summary":"The density of sleep spindles has been shown to correlate with memory consolidation ., Sleep spindles occur more often in human MEG than EEG recordings ., We developed a thalamocortical network model that is capable of spontaneous generation of spindles across cortical layers and that captures the essential statistical features of spindles observed empirically ., Our study predicts that differences in thalamocortical connectivity , known from anatomical studies , are sufficient to explain the differences in the spindle properties between EEG and MEG which are observed in 
human recordings ., Furthermore , our model predicts that intracortical connectivity between cortical layers , a property influenced by sleep preceding learning , increases spindle density ., Results from our study highlight the role of intracortical and thalamocortical projections on the occurrence and properties of spindles .","keywords":"learning, medicine and health sciences, sleep, brain electrophysiology, brain, electrophysiology, social sciences, neuroscience, learning and memory, physiological processes, clinical medicine, cognitive psychology, brain mapping, network analysis, bioassays and physiological analysis, neuronal dendrites, neuroimaging, electroencephalography, research and analysis methods, computer and information sciences, imaging techniques, clinical neurophysiology, animal cells, electrophysiological techniques, thalamus, cellular neuroscience, psychology, cell biology, anatomy, physiology, neurons, biology and life sciences, cellular types, magnetoencephalography, cognitive science, neurophysiology","toc":null} +{"Unnamed: 0":430,"id":"journal.pcbi.1000425","year":2009,"title":"Dynamic Modeling of Vaccinating Behavior as a Function of Individual Beliefs","sections":"In the UK , MMR vaccine uptake started to decline after a controversial study linking MMR vaccine to autism 6 ., In a decade , vaccine coverage went well below the target herd immunity level of 95% ., Despite the confidence of researchers and most health professionals on the vaccine safety , the confidence of the public was deeply affected ., In an attempt to find ways to restore this confidence , several studies were carried out to identify factors associated with parents unwillingness to vaccinate their children ., They found that \u2018Not receiving unbiased and adequate information from health professionals about vaccine safety\u2019 and \u2018medias adverse publicity\u2019 were the most common reasons influencing uptake 7 ., Other important factors were: \u2018lack of belief 
in information from the government sources\u2019; \u2018fear of general practitioners promoting the vaccine for personal reasons\u2019; and \u2018media scare\u2019 ., Note that during this period the risk of acquiring measles was very low due to previously high vaccination coverage ., Sylvatic yellow fever ( SYF ) is a zoonotic disease , endemic in the north and central regions of Brazil ., Approximately 10% of infections with this flavivirus are severe and result in hemorrhagic fever , with case fatality of 50% 8 ., Since the re-introduction of A . aegypti in Brazil ( the urban vector of dengue and yellow fever ) , the potential reemergence of urban yellow fever is of concern 9 ., In Brazil , it is estimated that approximately 95% of the population living in the yellow fever endemic regions have been vaccinated ., In this area , small outbreaks occur periodically , especially during the rainy season , and larger ones are observed every 7 to 10 years 10 , in response to increased viral activity within the environmental reservoir ., In 2007 , increased detection of dead monkeys in the endemic zone led the government to implement vaccine campaigns targeting travellers to these areas and the small fraction of the resident population who were still not protected by the vaccine ., The goal was to vaccinate 10\u201315% of the local population ., Intense notification in the press regarding the death of monkeys near urban areas , and intense coverage of all subsequent suspected and confirmed human cases and death events led to an almost country-wide disease scare ( Figure 1 ) , incompatible with the real risks 5 , which caused serious economic and health management problems , including waste of doses with already immunized people ( 60% of the population was vaccinated when only 10\u201315% would be sufficient ) , adverse events from over-vaccination ( individuals taking multiple doses to \u2018guarantee\u2019 protection ) , national vaccine shortage and international 
vaccine shortage , since Brazil stopped exporting YF vaccine to supply the domestic vaccination rush ( www . who . int\/csr\/don\/2008_02_07\/en\/ ) ., The importance of public perceptions and collective behavior for the outcome of immunization campaigns is starting to be acknowledged by theoreticians 9 , 11 , 12 ., These factors have been examined in a game theoretical framework , where the influence of certain types of vaccinating behaviour on the stability and equilibria of epidemic models is analyzed ., In the present work , we propose a model for individual immunization behavior as an inference problem: Instead of working with fixed behaviors , we develop a dynamic model of belief update , which in turn determines individual behavior ., An individuals willingness to vaccinate is derived from his perception of disease risk and vaccine safety , which is updated in a Bayesian framework , according to the epidemiological facts each individual is exposed to in their daily life ., We also explore the global effects of individual decisions on vaccination adherence at the population level ., In summary , we propose a framework to integrate dynamic modeling of learning ( belief updating ) with decision and population dynamics ., We ran the model as described above for 100 days with parameters given by Table 1 , under various scenarios to reveal the interplay of belief and action under the proposed model ., Figures 2 and 3 show a summary output of the model dynamics under contrasting conditions ., In Figure 2 , we have VAE ( Vaccine adverse events ) preceding the occurrence of severe disease events ., As expected , VAE become the strongest influence on , keeping low with consequences to the attained vaccination coverage at the end of the simulation ., We characterize this behavior as a \u2018vaccine scare\u2019 behavior ., In a different scenario , Figure 3 , we observe the effect of severe disease events occurring in high frequency at the beginning of the epidemics ., In 
this case , disease scare pushes willingness to vaccinate ( ) to high levels ., This is very clear in Figure 3 where there is a cluster of serious disease cases around the 30th day of simulation ., Right after the occurrence of this cluster , we see rise sharply above , meaning that willingness to vaccinate ( ) in this week was mainly driven by disease scare instead of considerations about vaccine safety ( ) ., A similar effect can be observed in Figure 2 , starting from day 45 or so ., Only here the impact of a cluster of serious disease cases is diminished by the effects of VAEs , and the fact that there arent many people left to make the decision of whether or not to vaccinate ., The impact of individual beliefs on vaccine coverage is highly dependent on the visibility of the rare VAE ., Figure 4 shows the impact of the media amplification factor on and vaccination coverage after \u224814 weeks , for an infectious disease with and ., If no media amplification occurs , willingness to vaccinate and vaccine coverage are high , as severe disease events are common and severe adverse events are relatively rare ., As vaccine adverse events are amplified by the media , individuals willingness to vaccinate at the end of the 14 weeks tends to decrease ., Such belief change , however , has a low impact on the vaccine coverage ., The explanation for this is that vaccine coverage is a cumulative measure and , when VAE appear , a relatively large fraction of the population had already been vaccinated ., These results suggest that VAE should not strongly impact the outcome of an ongoing mass vaccination campaign , although it could affect the success of future campaigns ., Fixing amplification at and , we investigated how ( at the end of the simulation ) and vaccine coverage would be affected by increasing the rate of vaccine adverse events , ( Figure 5 ) ., As increases above , willingness to vaccinate drops quickly , while vaccine coverage diminishes but slightly ., In the present 
world of mass media channels and rapid and inexpensive communications , the spread of information , independent of its quality , is very effective , leading to considerable uncertainty and heterogeneity in public opinions ., The yellow fever scare in Brazil demonstrated clearly the impact of public opinion on the outcome of a vaccination campaign , and the difficulty in dealing with scare events ., For example , no official press release was taken at face value , as it was always colored by political issues 5 ., On multiple occasions , people reported to the press that they would do the exact opposite of what was being recommended by public health authorities due to their mistrust of such authorities ., This example shows us the complexity of modeling and predicting the success of disease containment strategies ., The goal of this work was to integrate into a unified dynamical modeling framework the opinion and decision components that underlie the public response to mass vaccination campaigns , especially when vaccine or disease scares have a chance to occur ., The proposed analytical framework , although not intentionally parameterized to match any specific real scenario , qualitatively captured the temporal dynamics of vaccine uptake in Brasilia ( Figure 1 ) , a clear case of disease scare ( compare with simulation results , presented in Figure 2 ) ., After conducting large scale studies on the acceptance of the Influenza vaccine , Chapman et al . 
13 conclude that perceived side-effects and effectiveness of vaccination are important factors in peoples decision to vaccinate ., Our model suggests that , if the perception of disease risk is high , it leads to a higher initial willingness to vaccinate , while adverse events of vaccination , even when widely publicized by the media , tend to have less impact on vaccination coverage ., VAE are more effective when happening at the beginning of vaccination campaigns , when they can sway the opinions of a larger audience ., Although disease scare can counteract , to a certain extent , the undesired effects of VAE , public health officials must also be aware of the risks involved in overusing disease risk information in vaccination campaign advertisements , since this can lead to a rush towards immunization as seen in the 2008 Yellow Fever scare in Brazil ., Vaccinating behavior dynamics has been modelled in different ways in the recent literature , from behaviors that aim to maximize self-interest 12 to imitation behaviors 14 ., In this paper we modeled these perceptions dynamically , and showed their relevance to decision-making dynamics and the consequences to the underlying epidemiological system and efficacy of vaccination campaigns ., We highlight two aspects of our modeling approach that we think provide important contributions to the field ., First , the process through which people update the beliefs that direct their decisions was modeled using a Bayesian framework ., We believe this approach to be the most natural one , as the Bayesian definition of probability is based on the concept of belief and Bayesian inference methodology was developed as a representation of human learning behavior 15 ., The learning process is achieved through an iterative incorporation of newly available information , which naturally fits into the standard Bayesian scheme ., Among the advantages of this approach is its ability to handle the entire probability distributions of the 
parameters of interest instead of operating on their expected values , which would be the case in a classical frequentist framework ., This is especially important where highly asymmetrical distributions are expected ., The resulting set of probability distributions provides more complete model-based hypotheses to be tested against data ., The inferential framework has an added benefit of simplicity and computational efficiency due to the use of conjugate priors , which gives us a closed-form expression for the Bayesian posterior without the need of complex posterior sampling algorithms such as MCMC ., The second contribution is the articulation between the belief and decision models through logarithmic pooling ., Logarithmic pooling has been applied in many fields 16 , 17 to derive consensus from multiple expert opinions described as probability distributions ., Genest et al . 15 argue that logarithmic pooling is the best way to combine probability distributions due to its property of \u201cexternal Bayesianity\u201d ., This means that finding the consensus among distributions commutes with revising distributions using the Bayes formula , with the consequence that the results of this procedure can be interpreted as a single Bayesian probability update ., Here , we apply logarithmic pooling to integrate the multiple sources of information ( equation ( 1 ) ) which go into the decision of whether or not to vaccinate ., In this context , the property of external Bayesianity is important since it allows the operations of pooling and Bayesian update ( of , equation ( 2 ) ) to be combined in any order , depending only on the availability of data ., This framework can be easily used as a base to compose more complex models ., Extended models might include multiple beliefs as a joint probability distribution , more layers of decision or multiple , independently evolving belief systems ., The contact structure of the model was intentionally kept as simple as possible , 
since the goal of the model was to focus on the belief dynamics ., Therefore , a reasonably simple epidemiological model , with a simple spatial structure ( local and global spaces ) was constructed to drive the belief dynamics without adding potentially confounding extra dynamics ., In this work we have explored various probability levels of VAEs and SDs in an attempt to cover the most common and likely more interesting portions of parameter space ., However , to model specific scenarios , data regarding the actual probabilities of VAEs and SDs are a prerequisite ., Also important are data regarding the perception of vaccine safety and efficacy 18 , obtainable through opinion surveys which could also include questions about factors driving changes in vaccination behavior ., We therefore suggest that questions regarding these variables should be included in future surveys concerning vaccine-preventable diseases ., This would improve our ability to predict the outcome of vaccination campaigns ., The belief model describes the temporal evolution of each individuals willingness to vaccinate , , in response to his evaluation of vaccine safety and disease risk ., To account for the uncertainties regarding vaccinating behavior , is modeled as a random variable , whose distribution is updated weekly as the individual observes new events ., The update process is based on logarithmically pooling with other random variables as described below ., Logarithmic pooling is a standard way of combining probability distributions representing opinions , to form a consensus 15 ., The belief update model takes the form: ( 1 ) where must equal one as act as weights of the pooling operation ., We attributed equal weights to and ( ) , with remaining taking values according to the following conditions:where is the number of serious disease cases witnessed by the individual , and and are random variables describing individuals belief regarding vaccine safety and disease risk , 
respectively ., The values for and are set to 1\/2 since either or are to be pooled against the combination of and : ., This choice of weights corresponds to the most unassuming scenario regarding the relative importance of each information source; different weights may be chosen for different scenarios ., Every individual starts off with a very low expected value for the Beta-distributed ., The last term in ( 1 ) , , is a reduction force which causes to move towards the minimum value of ., This term is important since without it , the psychological effects of witnessing serious disease events would continue to influence the individuals decisions for an undetermined period of time ., Thus , allows us to include the memory of such events in the model ., By setting appropriately , we can model events that leave no memory as well as ones that are retained indefinitely ., We model disease spread in a hypothetical city represented by a multilevel metapopulation individual-based model where individuals belong to groups that in turn belong to groups of groups , and so on ( Figure 9 ) , forming a hierarchy of scales 20 ., In this hypothetical city , individuals live in households with exactly 4 members each; neighborhoods are composed of 100 households and sets of 10 neighborhoods form the citys zones ., During the simulation , individuals commute between home and a randomly chosen neighborhood anywhere in the population graph ., Each individual has a probability 0 . 
25 of leaving home daily ., This same hierarchical structure is used to define local and global events ., Locally visible events can only be witnessed by people living in the same neighborhood while globally visible events are visible to the entire population regardless of place of residence ., The epidemiological model describes a population being invaded by a new pathogen ., This pathogen causes an acute infection , lasting 11 days ( incubation period of 6 days and an infectious period of 5 days ) ., Once in the infectious period , individuals have a fixed probability , of becoming seriously ill ., After recovery , individuals become fully immune ., The proportion of the population in each immunological state at time is labeled as and , which stands for susceptibles , exposed , infectious and recovered states ., At the same time the disease is introduced in the population , a vaccination campaign is started , making available doses per week to the entire population , meaning that individuals may have to compete for a dose if many decide to vaccinate at the same time ., Once an individual is vaccinated , if he\/she has not been exposed yet , he\/she moves directly to the recovered class , with full immunity ( thus , a perfect vaccine is assumed ) ., If the individual is in the incubation period of the disease , disease progression is unaffected by vaccination ., Vaccination carries with it a fixed chance of causing adverse effects ., Transmission dynamics is modelled as follows: at each discrete time step , , each individual contacts others in two groups: in his residence and in the public space ., The probability of getting infected at home is given by where is the probability of transmission per household contact and is the number of infected members in the house ., In the public space , that is , in the neighborhood chosen as destination for the daily commutations , each infected person contacts persons at random , and if the contact is with a susceptible , 
infection is transmitted with probability .","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Individual perception of vaccine safety is an important factor in determining a persons adherence to a vaccination program and its consequences for disease control ., This perception , or belief , about the safety of a given vaccine is not a static parameter but a variable subject to environmental influence ., To complicate matters , perception of risk ( or safety ) does not correspond to actual risk ., In this paper we propose a way to include the dynamics of such beliefs into a realistic epidemiological model , yielding a more complete depiction of the mechanisms underlying the unraveling of vaccination campaigns ., The methodology proposed is based on Bayesian inference and can be extended to model more complex belief systems associated with decision models ., We found the method is able to produce behaviors which approximate what has been observed in real vaccine and disease scare situations ., The framework presented comprises a set of useful tools for an adequate quantitative representation of a common yet complex public-health issue ., These tools include representation of beliefs as Bayesian probabilities , usage of logarithmic pooling to combine probability distributions representing opinions , and usage of natural conjugate priors to efficiently compute the Bayesian posterior ., This approach allowed a comprehensive treatment of the uncertainty regarding vaccination behavior in a realistic epidemiological model .","summary":"A frequently made assumption in population models is that individuals make decisions in a standard way , which tends to be fixed and set according to the modelers view on what is the most likely way individuals should behave ., In this paper we acknowledge the importance of modeling behavioral changes ( in the form of beliefs\/opinions ) as a dynamic variable in the model ., We also propose a way of 
mathematically modeling dynamic belief updates which is based on the well-established concept of a belief as a probability distribution , with its temporal evolution a direct application of Bayes\u2019 theorem ., We also propose the use of logarithmic pooling as an optimal way of combining the different opinions that must be considered when making a decision ., To argue for the relevance of this issue , we present a model of vaccinating behaviour with dynamic belief updates , modeled after real scenarios of vaccine and disease scares recorded in the recent literature .","keywords":"mathematics\/statistics, computational biology\/ecosystem modeling, infectious diseases\/epidemiology and control of infectious diseases","toc":null} +{"Unnamed: 0":425,"id":"journal.pcbi.1006960","year":2019,"title":"Modeling the temporal dynamics of the gut microbial community in adults and infants","sections":"There is increasing recognition that the human gut microbiome is a contributor to many aspects of human physiology and health including obesity , non-alcoholic fatty liver disease , inflammatory diseases , cancer , metabolic diseases , aging , and neurodegenerative disorders 1\u201314 ., This suggests that the human gut microbiome may play important roles in the diagnosis , treatment , and ultimately prevention of human disease ., These applications require an understanding of the temporal variability of the microbiota over the lifespan of an individual , particularly since we now recognize that our microbiota is highly dynamic , and that the mechanisms underlying these changes are linked to ecological resilience and host health 15\u201317 ., Due to the lack of data and insufficient methodology , we currently have major gaps in our understanding of fundamental mechanisms related to the temporal behavior of the microbiome ., Critically , we currently do not have a clear characterization of how and why our gut microbiome varies in time , and whether these dynamics are consistent 
across humans ., It is also unclear whether we can define \u2018stable\u2019 or \u2018healthy\u2019 dynamics as opposed to \u2018abnormal\u2019 or \u2018unhealthy\u2019 dynamics , which could potentially reflect an underlying health condition or an environmental factor affecting the individual , such as antibiotic exposure or diet ., Moreover , there is no consensus as to whether the gut microbial community structure varies continuously or jumps between discrete community states , and whether or not these states are shared across individuals 18 , 19 ., Notably , recent work 20 suggests that the human gut microbiome composition is dominated by environmental factors rather than by host genetics , emphasizing the dynamic nature of this ecosystem ., The need to understand the temporal dynamics of the microbiome and its interaction with host attributes has led to a rise in longitudinal studies that record the temporal variation of microbial communities in a wide range of environments , including the human gut microbiome ., These time series studies are enabling increasingly comprehensive analyses of how the microbiome changes over time , which are in turn beginning to provide insights into fundamental questions about microbiome dynamics 16 , 17 , 21 ., One of the most fundamental questions that remains unanswered is to what degree the microbial community in the gut is deterministically dependent on its initial composition ( e . g . 
, microbial composition at birth ) ., More generally , it is unknown to what degree the microbial composition of the gut at a given time determines the microbial composition at a later time ., Additionally , there is only preliminary evidence of the long-term effects of early life events on the gut microbial community composition , and it is currently unclear whether these long-term effects traverse through a predefined set of potential trajectories 21 , 22 ., To address these questions , it is important to quantify the dependency of the microbial community at a given time on past community composition 23 , 24 ., This task has previously been studied in theoretical settings ., Specifically , the generalized Lotka-Volterra family of models infers changes in community composition through defined species-species or species-resource interaction terms , and is popular for describing internal ecological dynamics ., Recently , a few methods that rely on deterministic regularized model fitting using generalized Lotka-Volterra equations have been proposed ( e . g . , 25\u201327 ) ., Nonetheless , the importance of purely autoregressive factors ( i . e . , a stochastic process in which future values are a function of a weighted sum of past values ) in driving gut microbial dynamics is , as yet , unclear ., Other approaches that utilize the full potential of longitudinal data can often reveal insights about the autoregressive nature of the microbiome ., These include , for example , the sparse vector autoregression ( sVAR ) model ( Gibbons et al . 24 ) , which assumes linear dynamics and is built around an autoregressive type of model , ARIMA Poisson ( Ridenhour et al . 28 ) , which assumes log-linear dynamics and suggests modeling the read counts along time using Poisson regression , and TGP-CODA ( Aijo et al . 2018 29 ) , which uses a Bayesian probabilistic model that combines a multinomial distribution with Gaussian processes ., Particularly , Gibbons et al . 
24 use the sparse vector autoregression ( sVAR ) model to show evidence that the human gut microbial community has two dynamic regimes: autoregressive and non-autoregressive ., The autoregressive regime includes taxa that are affected by the community composition at previous time points , while the non-autoregressive regime includes taxa whose appearance at a given time is random or does not depend on previous time points ., In this paper , we show that previous studies substantially underestimate the autoregressive component of the gut microbiome ., In order to quantify the dependency of taxa on the past composition of the microbial community , we introduce the Microbial community Temporal Variability Linear Mixed Model ( MTV-LMM ) , a ready-to-use scalable framework that can simultaneously identify and predict the dynamics of hundreds of time-dependent taxa across multiple hosts ., MTV-LMM is based on a linear mixed model , a widely used tool in statistical genetics and other areas of genomics 30 , 31 ., Using MTV-LMM we introduce a novel concept we term \u2018time-explainability\u2019 , which corresponds to the fraction of temporal variance explained by the microbiome composition at previous time points ., Using time-explainability , researchers can rigorously select the microorganisms whose abundance can be explained by the community composition at previous time points ., MTV-LMM has a few notable advantages ., First , unlike the sVAR model and the Bayesian approach proposed by Aijo et al . 29 , MTV-LMM models all the individual hosts simultaneously , thus leveraging the information across an entire population while adjusting for the host\u2019s effect ( e . g . , the host\u2019s genetics or environment ) ., This gives MTV-LMM increased power to detect temporal dependencies , as well as the ability to quantify the consistency of dynamics across individuals ., The Poisson regression method suggested by Ridenhour et al . 
28 also utilizes the information from all individuals , but does not account for the individual effects , which may result in an inflated autoregressive component ., Second , MTV-LMM is computationally efficient , allowing it to model the dynamics of a complex ecosystem like the human gut microbiome by simultaneously evaluating the time series of hundreds of taxa , across multiple hosts , in a timely manner ., Other methods ( e . g . , TGP-CODA 29 , MDSINE 26 ) can model only a small number of taxa ., Third , MTV-LMM can serve as a feature selection method , selecting only the taxa affected by the past composition of the microbiome ., The ability to identify these time-dependent taxa is crucial when fitting a time series model to study the microbial community temporal dynamics ., Finally , we demonstrate that MTV-LMM can serve as a standalone prediction model that outperforms commonly used models by an order of magnitude in predicting taxa abundance ., We applied MTV-LMM to synthetic data , as suggested by Aijo et al . 2018 29 , as well as to three real longitudinal studies of the gut microbiome ( David et al . 17 , Caporaso et al . 16 , and DIABIMMUNE 21 ) ., These datasets contain longitudinal abundance data obtained by 16S rRNA gene sequencing ., Nonetheless , MTV-LMM is agnostic to the sequencing data type ( i . e . 
, 16S rRNA or shotgun sequencing ) ., Using MTV-LMM , we find that , in contrast to previous reports , a considerable portion of microbial taxa , in both infants and adults , display temporal structure that is predictable using the previous composition of the microbial community ., Moreover , we show that , on average , the time-explainability is an order of magnitude larger than previously estimated for these datasets ., We begin with an informal description of the main idea and utility of MTV-LMM ., A more comprehensive description can be found in the Methods ., MTV-LMM is motivated by our assumption that the temporal changes in the abundance of taxa are a time-homogeneous high-order Markov process ., MTV-LMM models the transitions of this Markov process by fitting a sequential linear mixed model ( LMM ) to predict the relative abundance of taxa at a given time point , given the microbial community composition at previous time points ., Intuitively , the linear mixed model correlates the similarity between the microbial community composition across different time points with the similarity of the taxa abundance at the next time points ., MTV-LMM makes use of two types of input data: ( 1 ) the continuous relative abundance of focal taxa j at previous time points and ( 2 ) the quantile-binned relative abundance of the rest of the microbial community at previous time points ., The output of MTV-LMM is a prediction of the continuous relative abundance , for each taxon , at future time points ., In order to apply linear mixed models , MTV-LMM generates a temporal kinship matrix , which represents the similarity between every pair of samples across time , where a sample is a normalization of taxa abundances at a given time point for a given individual ( see Methods ) ., When predicting the abundance of taxa j at time t , the model uses both the global state of the entire microbial community in the last q time points and the abundance of taxa j in the previous p time points 
., The parameters p and q are determined by the user , or can be determined using a cross-validation approach; a more formal description of their role is provided in the Methods ., MTV-LMM has the advantage of increased power due to a low number of parameters coupled with an inherent regularization mechanism , similar in essence to the widely used ridge regularization , which provides a natural interpretation of the model ., We evaluated MTV-LMM by testing its accuracy in predicting the abundance of taxa at a future time point using real time series data ., Such evaluation will mitigate overfitting , since the future data points are held out from the algorithm ., To measure accuracy on real data , we used the squared Pearson correlation coefficient between estimated and observed relative abundance along time , per taxon ., In addition we validated MTV-LMM using synthetic data , illustrating realistic dynamics and abundance distribution , as suggested by Aijo et al . 2018 29 ., Following 29 , we evaluate the performance of the model using the \u2018estimation-error\u2019 , defined to be the Euclidean distance between estimated and observed relative abundance , per time point ( see Supplementary Information S1 Note ) ., We used real time series data from three different datasets , each composed of longitudinal abundance data ., These three datasets are David et al . 17 ( 2 adult donors\u2014DA , DB\u2014average 250 time points per individual ) , Caporaso et al . 
16 ( 2 adult donors\u2014M3 , F4\u2014average 231 time points per individual ) , and the DIABIMMUNE dataset 21 ( 39 infant donors\u2014average 28 time points per individual ) ., In these datasets , the temporal parameters p and q were estimated using a validation set , and ranged from 0 to 3 ., See Methods for further details ., We compared the results of MTV-LMM to common approaches that are widely used for temporal microbiome modeling , namely the AR ( 1 ) model ( see Methods ) , the sparse vector autoregression model sVAR 24 , the ARIMA Poisson regression 28 and TGP-CODA 29 ., Overall , MTV-LMM\u2019s prediction accuracy is higher than AR\u2019s ( Supplementary Information S1 Table ) and significantly outperforms both the sVAR method and the Poisson regression across all datasets , using real time-series data ( Fig 1 ) ., In addition , since TGP-CODA can not be fully applied to these real datasets ( due to scalability limitations ) , we used synthetic data , considering a scenario of 200 taxa and 70 time points with realistic dynamics and abundance distribution , as suggested by the authors of this method ., Similarly to the real data , MTV-LMM significantly outperforms all the compared methods ( Supplementary Information S1 Fig ) ., We applied MTV-LMM to the DIABIMMUNE infant dataset and estimated the species-species association matrix across all individuals , using 1440 taxa that passed a preliminary screening according to temporal presence-absence patterns ( see Methods ) ., We found that most of these effects are close to zero , implying a sparse association pattern ., Next , we applied a principal component analysis ( PCA ) to the estimated species-species associations and found a strong phylogenetic structure ( PerMANOVA P-value = 0 . 001 ) suggesting that closely related species have similar association patterns within the microbial community ( Fig 2 ) ., These findings are supported by Thompson et al . 
32 , who suggested that ecological interactions are phylogenetically conserved , where closely related species interact with similar partners ., Gomez et al . 33 tested these assumptions on a wide variety of hosts and found that generalized interactions can be evolutionary conserved ., We note that the association matrix estimated by MTV-LMM should be interpreted with caution since the number of possible associations is quadratic in the number of species , and it is , therefore , unfeasible to infer with high accuracy all the associations ., However , we can still aggregate information across species or higher taxonomic levels to uncover global patterns of the microbial composition dynamics ( e . g . , principal component analysis ) ., In order to address the fundamental question regarding the gut microbiota temporal variation , we quantify its autoregressive component ., Namely , we quantify to what degree the abundance of different taxa can be inferred based on the microbial community composition at previous time points ., In statistical genetics , the fraction of phenotypic variance explained by genetic factors is called heritability and is typically evaluated under an LMM framework 30 ., Intuitively , linear mixed models estimate heritability by measuring the correlation between the genetic similarity and the phenotypic similarity of pairs of individuals ., We used MTV-LMM to define an analogous concept that we term time-explainability , which corresponds to the fraction of temporal variance explained by the microbiome composition at previous time points ., In order to highlight the effect of the microbial community , we next estimated the time-explainability of taxa in each dataset , using the parameters q = 1 , p = 0 ., The resulting model corresponds to the formula: taxat = microbiome community ( t\u22121 ) + individual effect ( t\u22121 ) + unknown effects ., Of the taxa we examined , we identified a large portion of them to have a statistically significant 
time-explainability component across datasets ., Specifically , we found that over 85% of the taxa included in the temporal kinship matrix are significantly explained by the time-explainability component , with estimated average time-explainability levels of 23% in the DIABIMMUNE infant dataset ( sd = 15% ) , 21% in the Caporaso et al . ( 2011 ) dataset ( sd = 15% ) and 14% in the David et al . dataset ( sd = 10% ) ( Fig 3 , Supplementary Information S2 Fig ) ., Notably , we found that higher time-explainability is associated with higher prediction accuracy ( Supplementary Information S3 Fig ) ., As a secondary analysis , we aggregated the time-explainability by taxonomic order , and found that in some orders ( non-autoregressive orders ) all taxa are non-autoregressive , while in others ( mixed orders ) we observed the presence of both autoregressive and non-autoregressive taxa ( Fig 4 , Supplementary Information S4 Fig ) , where autoregressive taxa are those with a statistically significant time-explainability component ., Particularly , in the DIABIMMUNE infant dataset , there are 7244 taxa , divided into 55 different orders ., However , the taxa recognized by MTV-LMM as autoregressive ( 1387 out of 7244 ) are represented in only 19 orders out of the 55 ., The remaining 36 orders do not include any autoregressive taxa ., Unlike the autoregressive organisms , these non-autoregressive organisms carry a strong phylogenetic structure ( t-test p-value < 10\u221216 ) , which may indicate niche\/habitat filtering ., This observation is consistent with the findings of Gibbons et al . 24 , who found a strong phylogenetic structure in the non-autoregressive organisms in the adult microbiome ., Notably , across all datasets , there is no significant correlation between the order dominance ( number of taxa in the order ) and the magnitude of its time-explainability component ( median Pearson r = 0 . 
12 ) ., For example , in the DIABIMMUNE dataset , the proportion of autoregressive taxa within the 19 mixed orders varies between 2% and 75% , with an average of approximately 20% ., In the most dominant order , Clostridiales ( representing 68% of the taxa ) , approximately 20% of the taxa are autoregressive and the average time-explainability is 23% ., In the second most dominant order , Bacteroidales , approximately 35% of the taxa are autoregressive and the average time-explainability is 31% ., In the Bifidobacteriales order , approximately 75% of the taxa are autoregressive , and the average time-explainability is 19% ( Fig 4 ) ., We hypothesize that the large fraction of autoregressive taxa in the Bifidobacteriales order , in the infant dataset specifically , can be partially attributed to the finding made by 34 , according to which some sub-species in this order appear to be specialized in the fermentation of human milk oligosaccharides and thus can be detected in infants but not in adults ., This emphasizes the ability of MTV-LMM to identify taxa with prominent temporal dynamics that are both habitat- and host-specific ., As an example of MTV-LMM\u2019s ability to differentiate autoregressive from non-autoregressive taxa within the same order , we examined Burkholderiales , a relatively rare order ( less than 2% of the taxa in the data ) with 76 taxa overall , of which only 19 were recognized as autoregressive by MTV-LMM ., Indeed , by examining the temporal behavior of the non-autoregressive taxa in this order , we witnessed abrupt changes in abundance over time , where the maximal number of consecutive time points with abundance greater than 0 is very small ., On the other hand , in the autoregressive taxa , we witnessed a consistent temporal behavior , where the maximal number of consecutive time points with abundance greater than 0 is well over 10 ( Supplementary Information S5 Fig ) ., The colonization of the human gut begins at birth and 
is characterized by a succession of microbial consortia 35\u201338 , where the diversity and richness of the microbiome reach adult levels in early childhood ., A longitudinal study has recently been used to show that the infant gut microbiome begins transitioning towards an adult-like community after weaning 39 ., This observation is validated using our infant longitudinal dataset ( DIABIMMUNE ) by applying PCA to the temporal kinship matrix ( Fig 5 ) ., Our analysis reveals that the first principal component ( accounting for 26% of the overall variability ) is associated with time ., Specifically , there is a clear separation between the time samples from the first nine months of an infant\u2019s life and the rest of the time samples ( months 10 \u2212 36 ) , which may be correlated with weaning ., As expected , we find a strong autoregressive component in the infant microbiome , which is highly associated with temporal variation across individuals ., By applying PCA to the temporal kinship matrix , we demonstrate that there is high similarity in the microbial community composition of infants , at least in the first 9 months ., This similarity increases the power of our algorithm and thus helps MTV-LMM to detect autoregressive taxa ., In contrast to the infant microbiome , the adult microbiome is considered relatively stable 16 , 40 , but with considerable variation in the constituents of the microbial community between individuals ., Specifically , it was previously suggested that each individual adult has a unique gut microbial signature 41\u201343 , which is affected , among other factors , by environmental factors 20 and host lifestyle ( e . g . , antibiotic consumption , high-fat diets 17 ) ., In addition , 17 showed that over the course of one year , differences between individuals were much larger than variation within individuals ., This observation was validated in our adult datasets ( David et al . and Caporaso et al . 
) by applying PCA to the temporal kinship matrices ., In both David et al . and Caporaso et al . , the first principal component , which accounts for 61% and 43% of the overall variation respectively , is associated with the individual\u2019s identity ( Fig 6 ) ., Using MTV-LMM we observed that despite the large similarity along time within adult individuals , there is also a non-negligible autoregressive component in the adult microbiome ., The fraction of variance explained by time across individuals can range from 6% up to 79% for different taxa ., These results shed more light on the temporal behavior of taxa in the adult microbiome , as opposed to that of infants , which are known to be highly affected by time 39 ., MTV-LMM uses a linear mixed model ( see 44 for a detailed review ) , a natural extension of standard linear regression , for the prediction of time series data ., We describe the technical details of the linear mixed model below ., We assume that the relative abundance levels of focal taxa j at time point t depend on a linear combination of the relative abundance levels of the microbial community at previous time points ., We further assume that temporal changes in relative abundance levels , in taxa j , are a time-homogeneous high-order Markov process ., We model the transitions of this Markov process using a linear mixed model , where we fit the p previous time points of taxa j as fixed effects and the q previous time points of the rest of the microbial community as random effects ., p and q are the temporal parameters of the model ., For simplicity of exposition , we present the generative linear mixed model that motivates the approach taken in MTV-LMM in two steps ., In the first step we model the microbial dynamics in one individual host ., In the second step we extend our model to N individuals , while accounting for the hosts\u2019 effect ., We first describe the model assuming there is only one individual ., Consider a microbial community 
of m taxa measured at T equally spaced time points ., We get as input an m \u00d7 T matrix M , where M_jt represents the relative-abundance level of taxa j at time point t ., Let y^j = ( M_j,p+1 , \u2026 , M_j,T )^t be a ( T \u2212 p ) \u00d7 1 vector of taxa j relative abundance across the T \u2212 p time points starting at time point p + 1 and ending at time point T . Let X^j be a ( T \u2212 p ) \u00d7 ( p + 1 ) matrix of p + 1 covariates , comprised of an intercept vector as well as the first p time lags of taxa j ( i . e . , the relative abundance of taxa j in the p time points prior to the one predicted ) ., Formally , for k = 1 we have X^j_tk = 1 , and for 1 < k \u2264 p + 1 we have X^j_tk = M_j,t\u2212k+1 for t \u2265 k ., For simplicity of exposition and to minimize notation complexity , we assume for now that p = 1 . Let W be a ( T \u2212 q ) \u00d7 q \u22c5 m normalized relative abundance matrix , representing the first q time lags of the microbial community ., For simplicity of exposition we describe the model in the case q = 1 , where W_tj = M_jt ( in the more general case , we have W_tj = M_\u2308j\/q\u2309 , t\u2212 ( j mod q ) , where p , q \u2264 T \u2212 1 ) ., With these notations , we assume the following linear model:, y^j = X^j \u03b2^j + W u^j + \u03f5^j , ( 1 ), where u^j and \u03f5^j are independent random variables distributed as u^j \u223c N ( 0_m , \u03c3^2_uj I_m ) and \u03f5^j \u223c N ( 0_T\u22121 , \u03c3^2_\u03f5j I_T\u22121 ) ., The parameters of the model are \u03b2^j ( fixed effects ) , \u03c3^2_uj , and \u03c3^2_\u03f5j ., We note that environmental factors known to be correlated with taxa abundance levels ( e . g . , diet , antibiotic usage 17 , 20 ) can be added to the model as fixed linear effects ( i . e . 
, added to the matrix X^j ) ., Given the high variability in the relative abundance levels , along with our desire to efficiently capture the effects of multiple taxa in the microbial community on each focal taxa j , we represent the microbial community input data ( matrix M ) using its quantiles ., Intuitively , we would like to capture whether a taxon is present or absent , or potentially introduce a few levels ( i . e . , high , medium , and low abundance ) ., To this end , we use the quantiles of each taxon to transform the matrix M into a matrix M\u02dc , where M\u02dc_jt \u2208 { 0 , 1 , 2 } depending on whether the abundance level is low ( below the 25% quantile ) , medium , or high ( above the 75% quantile ) ., We also tried other normalization strategies , including quantile normalization , which is typically used in gene expression eQTL analysis 45 , 46 , and the results were qualitatively similar ( see Supplementary Information S6 Fig ) ., We subsequently replace the matrix W with a matrix W\u02dc , which is constructed analogously to W , but using M\u02dc instead of M . 
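The quantile binning described here , together with the temporal kinship matrix K_1 = ( 1\/m ) W\u02dc W\u02dc^T defined later in the Methods , can be sketched in a few lines of Python ( a minimal illustration , not the authors\u2019 implementation; the function names and the strict inequalities at the quantile boundaries are our own assumptions ):

```python
import numpy as np

def quantile_bin(M):
    # Bin each taxon's relative-abundance series into {0, 1, 2}:
    # below its own 25% quantile -> 0, above its 75% quantile -> 2,
    # otherwise 1.  M is a (taxa x time) matrix; the strict
    # inequalities at the boundaries are an assumption.
    M = np.asarray(M, dtype=float)
    lo = np.percentile(M, 25, axis=1, keepdims=True)
    hi = np.percentile(M, 75, axis=1, keepdims=True)
    return (M > lo).astype(int) + (M > hi).astype(int)

def temporal_kinship(W_tilde):
    # K1 = (1/m) * W~ W~^T: similarity between every pair of time
    # samples, with m the number of (lagged) taxa columns.
    W_tilde = np.asarray(W_tilde, dtype=float)
    m = W_tilde.shape[1]
    return W_tilde @ W_tilde.T / m

# toy example: two taxa, five time points
M = np.array([[0.0, 0.1, 0.5, 0.9, 1.0],
              [0.3, 0.3, 0.3, 0.3, 0.3]])
M_binned = quantile_bin(M)         # first row -> [0, 0, 1, 1, 2]
K1 = temporal_kinship(M_binned.T)  # rows = binned community per time point
```

In the full model , W\u02dc stacks the q lagged binned community states per predicted time point; the toy above collapses this to the q = 1 case for brevity .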
Notably , both the fixed effects ( the relative abundance of taxa j at previous time points ) and the output of MTV-LMM are continuous relative abundances ., The random effects are the quantile-binned relative abundances of the rest of the microbial community at previous time points ( matrix W\u02dc ) ., Thus , our model can now be described as, y^j = X^j \u03b2^j + W\u02dc u^j + \u03f5^j , ( 2 ), So far , we described the model assuming we have time series data from one individual ., We next extend the model to the case where time series data is available from multiple individuals ., In this case , we assume that the relative abundance levels of m taxa , denoted as the microbial community , have been measured at T time points across N individuals ., We assume the input consists of N matrices , M^1 , \u2026 , M^N , where matrix M^i corresponds to individual i and is of size m \u00d7 T . Therefore , the outcome vector y^j is now an n \u00d7 1 vector , composed of N blocks , where n = ( T \u2212 1 ) N , and block i corresponds to the time points of individual i ., Formally , y^j_k = M^{\u2308k\/(T\u22121)\u2309}_{j , k mod (T\u22121)} ., Similarly , we define X^j and W\u02dc as block matrices with N different blocks , where block i corresponds to individual i ., When applied to multiple individuals , Model ( 2 ) may overfit to the individual effects ( e . g . 
, due to the host genetics and\/or environment ) ., In other words , since our goal is to model the changes in time , we need to condition these changes on the individual effects , which are unwanted confounders for our purposes ., We therefore construct a matrix H by randomly permuting the rows of each block matrix i in W\u02dc , where the permutation is conducted only within the same individual ., Formally , we apply a permutation \u03c0_i \u2208 S_T\u22121 to the rows of each block matrix i , M^i , corresponding to individual i , where S_T\u22121 is the set of all permutations of ( T \u2212 1 ) elements ., In each \u03c0_i , we simultaneously permute the entire microbial community ., Hence , matrix H corresponds to the data of each one of the individuals , but with no information about time ( since the data was shuffled across the different time points ) ., With this addition , our final model is given by, y^j = X^j \u03b2^j + W\u02dc u^j + H r + \u03f5^j , ( 3 ), where u^j \u223c N ( 0_m , \u03c3^2_uj I_m ) , \u03f5^j \u223c N ( 0_n , \u03c3^2_\u03f5j I_n ) , and r \u223c N ( 0_m , \u03c3^2_r I_m ) ., It is easy to verify that an equivalent mathematical representation of model ( 3 ) is, y^j \u223c N ( X^j \u03b2^j , \u03c3^2_ARj K_1 + \u03c3^2_ind K_2 + \u03c3^2_\u03f5j I ) , ( 4 ), where \u03c3^2_ARj = m \u03c3^2_uj , K_1 = ( 1\/m ) W\u02dc W\u02dc^T , \u03c3^2_ind = m \u03c3^2_r , and K_2 = ( 1\/m ) H H^T ., We will refer to K_1 as the temporal kinship matrix , which represents the similarity between every pair of samples across time ( i . e . 
, represents the cross-correlation structure of the data ) ., We note that for simplicity of exposition we have assumed so far that each individual has the same number of time points T; in practice , the number of time points may vary between the different individuals ., It is easy to extend the above model to the case where individual i has T_i time points , but the notation becomes cumbersome; the implementation of MTV-LMM does , however , take into account a variable number of time points across the different individuals ., Once the distribution of y^j is specified , one can proceed to estimate the fixed effects \u03b2^j and the variance of the random effects using maximum likelihood approaches ., One common approach for estimating variance components is known as restricted maximum likelihood ( REML ) ., We followed the procedure described in the GCTA software package 47 , under \u2018GREML analysis\u2019 , originally developed for genotype data , and re-purposed it for longitudinal microbiome data ., GCTA implements the restricted maximum likelihood method via the average information ( AI ) algorithm ., Specifically , we performed a restricted maximum likelihood analysis using the function \u201c--reml\u201d followed by the option \u201c--mgrm\u201d ( which reflects multiple variance components ) to estimate the variance explained by the microbial community at previous time points ., To predict the random effects by the BLUP ( best linear unbiased prediction ) method we use \u201c--reml-pred-rand\u201d ., This option predicts the total temporal effect ( called the \u201cbreeding value\u201d in animal genetics ) of each time point , attributed to the aggregated effect of the taxa used to estimate the temporal kinship matrix ., In both functions , to represent y^j ( the abundance of taxa j at the next time point ) , we use the option \u201c--pheno\u201d ., For a detailed description see Supplementary Information S3 Note ., We define the term 
time-explainability , denoted as \u03c7 , to be the temporal variance explained by the microbial community in the previous time points ., Formally , for taxa j we define, \u03c7_j = \u03c3^2_ARj \/ ( \u03c3^2_ARj + \u03c3^2_ind + \u03c3^2_\u03f5j ) ., The time-explainability was estimated with GCTA , using the temporal kinship matrix ., In order to measure the accuracy of the time-explainability estimation , the average confidence interval width was estimated by computing the confidence interval widths for all autoregressive taxa and averaging the results ., Additionally , we adjust the time-explainability P-values for multiple comparisons using the Benjamini-Hochberg method 48 ., We now turn to the task of predicting y^j_t using the taxa abundance at time t \u2212 1 ( or , more generally , in the last few time points ) ., Using our model notation , we are given x^j and w\u02dc , the covariates associated with a newly observed time point t for taxa j , and we would like to predict y^j_t with the greatest possible accuracy ., For a simple linear regression model , the answer is simply taking the covariate vector x and multiplying it by the estimated coefficients \u03b2\u0302 : y\u0302^j_t = x^T \u03b2\u0302 ., This practice yields unbiased estimates ., However , when attempting prediction in the linear mixed model case , things are not so simple ., One could adopt the same approach , but since the effects of the random components are not directly estimated , the vector of covariates w\u02dc will not contribute directly to the predicted value of y^j_t , and will only affect the variance of the prediction , resulting in an unbiased but inefficient estimate ., Instead , one can use the correlation between the realized values of W\u02dc u to attempt a better guess at the realization of w\u02dc u for the new sample ., This is achieved by computing the distribution of the outcome of the new sample conditional on the full dataset , using the following property of the multivariate 
","headings":"Introduction, Results, Materials and methods, Discussion","abstract":"Given the highly dynamic and complex nature of the human gut microbial community , the ability to identify and predict time-dependent compositional patterns of microbes is crucial to our understanding of the structure and functions of this ecosystem ., One factor that could affect such time-dependent patterns is microbial interactions , wherein community composition at a given time point affects the microbial composition at a later time point ., However , the field has not yet settled on the degree of this effect ., Specifically , it has been recently suggested that only a minority of taxa depend on the microbial composition in earlier times ., To address the issue of identifying and predicting temporal microbial patterns we developed a new model , MTV-LMM ( Microbial Temporal Variability Linear Mixed Model ) , a linear mixed model for the prediction of microbial community temporal dynamics ., MTV-LMM can identify time-dependent microbes ( i . e . 
, microbes whose abundance can be predicted based on the previous microbial composition ) in longitudinal studies , which can then be used to analyze the trajectory of the microbiome over time ., We evaluated the performance of MTV-LMM on real and synthetic time series datasets , and found that MTV-LMM outperforms commonly used methods for microbiome time series modeling ., Particularly , we demonstrate that the effect of the microbial composition in previous time points on the abundance of taxa at later time points is underestimated by a factor of at least 10 when applying previous approaches ., Using MTV-LMM , we demonstrate that a considerable portion of the human gut microbiome , both in infants and adults , has a significant time-dependent component that can be predicted based on microbiome composition in earlier time points ., This suggests that microbiome composition at a given time point is a major factor in defining future microbiome composition and that this phenomenon is considerably more common than previously reported for the human gut microbiome .","summary":"The ability to characterize and predict temporal trajectories of the microbial community in the human gut is crucial to our understanding of the structure and functions of this ecosystem ., In this study we develop MTV-LMM , a method for modeling time-series microbial community data ., Using MTV-LMM we find that in contrast to previous reports , a considerable portion of microbial taxa in both infants and adults display temporal structure that is predictable using the previous composition of the microbial community ., In reaching this conclusion we have adopted a number of concepts common in statistical genetics for use with longitudinal microbiome studies ., We introduce concepts such as time-explainability and the temporal kinship matrix , which we believe will be of use to other researchers studying microbial dynamics , through the framework of linear mixed models ., In particular we find that 
the association matrix estimated by MTV-LMM reveals known phylogenetic relationships and that the temporal kinship matrix uncovers known temporal structure in infant microbiome and inter-individual differences in adult microbiome ., Finally , we demonstrate that MTV-LMM significantly outperforms commonly used methods for temporal modeling of the microbiome , both in terms of its prediction accuracy as well as in its ability to identify time-dependent taxa .","keywords":"taxonomy, children, ecology and environmental sciences, microbiome, community structure, statistics, microbiology, multivariate analysis, age groups, phylogenetics, data management, non-coding rna, mathematics, infants, cellular structures and organelles, microbial genomics, families, research and analysis methods, computer and information sciences, medical microbiology, mathematical and statistical techniques, principal component analysis, evolutionary systematics, ribosomes, people and places, community ecology, biochemistry, rna, ribosomal rna, cell biology, ecology, nucleic acids, genetics, biology and life sciences, population groupings, physical sciences, genomics, evolutionary biology, statistical methods","toc":null} +{"Unnamed: 0":1864,"id":"journal.pcbi.1003985","year":2014,"title":"Segregating Complex Sound Sources through Temporal Coherence","sections":"Humans and animals can attend to a sound source and segregate it rapidly from a background of many other sources , with no learning or prior exposure to the specific sounds ., For humans , this is the essence of the well-known cocktail party problem in which a person can effortlessly conduct a conversation with a new acquaintance in a crowded and noisy environment 1 , 2 ., For frogs , songbirds , and penguins , this ability is vital for locating a mate or an offspring in the midst of a loud chorus 3 , 4 ., This capacity is matched by comparable object segregation feats in vision and other senses 5 , 6 , and hence understanding it will 
shed light on the neural mechanisms that are fundamental and ubiquitous across all sensory systems ., Computational models of auditory scene analysis have been proposed in the past to disentangle source mixtures and hence capture the functionality of this perceptual process ., The models differ substantially in flavor and complexity depending on their overall objectives ., For instance , some rely on prior information to segregate a specific target source or voice , and are usually able to reconstruct it with excellent quality 7 ., Another class of algorithms relies on the availability of multiple microphones and the statistical independence among the sources to separate them , using for example ICA approaches or beam-forming principles 8 ., Others are constrained by a single microphone and have instead opted to compute the spectrogram of the mixture , and then to decompose it into separate sources relying on heuristics , training , mild constraints on matrix factorizations 9\u201311 , spectrotemporal masks 12 , and gestalt rules 1 , 13 , 14 ., A different class of approaches emphasizes the biological mechanisms underlying this process , and assesses both their plausibility and ability to replicate faithfully the psychoacoustics of stream segregation ( with all their strengths and weaknesses ) ., Examples of the latter approaches include models of the auditory periphery that explain how simple tone sequences may stream 15\u201317 , how pitch modulations can be extracted and used to segregate sources of different pitch 18\u201320 , and models that handle more elaborate sound sequences and bistable perceptual phenomena 10 , 21\u201323 ., Finally , of particular relevance here are algorithms that rely on the notion that features extracted from a given sound source can be bound together by correlations of intrinsic coupled oscillators in neural networks that form their connectivity online 23 , 24 ., It is fair to say , however , that the diversity of approaches and the 
continued strong interest in this problem suggest that no algorithm has yet achieved sufficient success to render the \u201ccocktail party problem\u201d solved from a theoretical , physiological , or applications point of view ., While our approach echoes some of the implicit or explicit ideas in the above-mentioned algorithms , it differs fundamentally in its overall framework and implementation ., It is based on the notion that perceived sources ( sound streams or objects ) emit features that are modulated in strength in a largely temporally coherent manner and that they evoke highly correlated response patterns in the brain ., By clustering ( or grouping ) these responses one can reconstruct their underlying source , and also segregate it from other simultaneously interfering signals that are uncorrelated with it ., This simple principle of temporal coherence has already been shown to account experimentally for the perception of sources ( or streams ) in complex backgrounds 25\u201328 ., However , this is the first detailed computational implementation of this idea that demonstrates how it works , and why it is so effective as a strategy to segregate spectrotemporally complex stimuli such as speech and music ., Furthermore , it should be emphasized that despite apparent similarities , the idea of temporal coherence differs fundamentally from previous efforts that invoked correlations and synchronization in the following ways 29\u201333: ( 1 ) coincidence here refers to that among modulated feature channels due to slow stimulus power ( envelope ) fluctuations , and not to any intrinsic brain oscillations; ( 2 ) coincidences are strictly done at cortical time-scales of a few hertz , and not at the fast pitch or acoustic frequency rates often considered; ( 3 ) coincidences are measured among modulated cortical features and perceptual attributes that usually occupy well-separated channels , unlike the crowded frequency channels of the auditory spectrogram; ( 4 ) 
coincidence must be measured over multiple time-scales and not just over a single time-window that is bound to be too long or too short for a subset of modulations; and finally ( 5 ) the details we describe later for how the coincidence matrices are exploited to segregate the sources are new and are critical for the success of this effort ., For all these reasons , the simple principle of temporal coherence is not easily implementable ., Our goal here is to show how to do so using plausible cortical mechanisms able to segregate realistic mixtures of complex signals ., As we shall demonstrate , the proposed framework mimics human and animal strategies to segregate sources with no prior information or knowledge of their properties ., The model can also gracefully utilize available cognitive influences such as attention to , or memory of specific attributes of a source ( e . g . , its pitch or timbre ) to segregate it from its background ., We begin with a sketch of the model stages , with emphasis on the unique aspects critical for its function ., We then explore how separation of feature channel responses and their temporal continuity contribute to source segregation , and the potential helpful role of perceptual attributes like pitch and location in this process ., Finally , we extend the results to the segregation of complex natural signals such as speech mixtures , and speech in noise or music ., The critical information for identifying the perceived sources is contained in the instantaneous coincidence among the feature channel pairs as depicted in the C-matrices ( Fig . 1B ) ., At each modulation rate , the coincidence matrix at time is computed by taking the outer product of all cortical frequency-scale outputs ( ) ., Such a computation effectively estimates simultaneously the average coincidence over the time window implicit in each rate , i . e . 
, at different temporal resolutions , thus retaining both short- and long-term coincidence measures crucial for segregation ., Intuitively , the idea is that responses from pairs of channels that are strongly positively correlated should belong to the same stream , while channels that are uncorrelated or anti-correlated should belong to different streams ., This decomposition need not be all-or-none , but rather responses of a given channel can be parceled to different streams in proportion to the degree of the average coincidence it exhibits with the two streams ., This intuitive reasoning is captured by a factorization of the coincidence matrix into two uncorrelated streams by determining the direction of maximal incoherence between the incoming stimulus patterns ., One such factorization algorithm is a nonlinear principal component analysis ( nPCA ) of the C-matrices 35 , where the principal eigenvectors correspond to masks that select the channels that are positively correlated within a stream , and parcel out the others to a different stream ., This procedure is implemented by an auto-encoder network with two rectifying linear hidden units corresponding to foreground and background streams as shown in Fig . 
1B ( right panel ) ., The weights computed in the output branches of each unit are associated with each of the two sources in the input mixture , and the number of hidden units can be automatically increased if more than two segregated streams are anticipated ., The nPCA is preferred over a linear PCA because the former assigns the channels of the two ( often anti-correlated ) sources to different eigenvectors , instead of combining them on opposite directions of a single eigenvector 36 ., Another key innovation in the model implementation is that the nPCA decomposition is performed not directly on the input data from the cortical model ( which are modulated at rates ) , but rather on the columns of the C-matrices whose entries are either stationary or vary slowly regardless of the rates of the coincident channels ., These common and slow dynamics enable stacking all C-matrices into one large matrix decomposition ( Fig . 1B ) ., Specifically , the columns of the stacked matrices are applied ( as a batch ) to the auto-encoder network at each instant with the aim of computing weights that can reconstruct them while minimizing the mean-square reconstruction error ., Linking these matrices has two critical advantages: It ensures that the pair of eigenvectors from each matrix decomposition is consistently labeled across all matrices ( e . g . 
, source 1 is associated with eigenvector 1 in all matrices ) ; It also couples the eigenvectors and balances their contributions to the minimization of the MSE in the auto-encoder ., The weight vectors thus computed are then applied as masks on the cortical outputs ., This procedure is repeated at each time step as the coincidence matrices evolve with the changing inputs ., The separation of feature responses on different channels and their temporal continuity are two important properties of the model that allow temporal coherence to segregate sources ., Several additional perceptual attributes can play a significant role including pitch , spatial location , and timbre ., Here we shall focus on pitch as an example of such attributes ., Speech mixtures share many of the same characteristics already seen in the examples of Fig . 2 and Fig . 3 ., For instance , they contain harmonic complexes with different pitches ( e . g . , males versus females ) that often have closely spaced or temporally overlapped components ., Speech also possesses other features such as broad bursts of noise immediately followed or preceded by voiced segments ( as in various consonant-vowel combinations ) , or even accompanied by voicing ( voiced consonants and fricatives ) ., In all these cases , the syllabic onsets of one speaker synchronize a host of channels driven by the harmonics of the voicing , and that are desynchronized ( or uncorrelated ) with the channels driven by the other speaker ., Fig . 4A depicts the clean spectra of two speech utterances ( middle and right panels ) and their mixture ( left panel ) illustrating the harmonic spectra and the temporal fluctuations in the speech signal at 3\u20137 Hz that make speech resemble the earlier harmonic sequences ., The pitch tracks associated with each of these panels are shown below them ., Fig . 
4B illustrates the segregation of the two speech streams from the mixture using all available coincidence among the spectral ( frequency-scale ) and pitch channels in the C-matrices ., The reconstructed spectrograms are not identical to the originals ( Fig . 4A ) , an inevitable consequence of the energetic masking among the crisscrossing components of the two speakers ., Nevertheless , with two speakers there are sufficient gaps between the syllables of each speaker to provide clean , unmasked views of the other speaker's signal 40 ., If more speakers are added to the mix , such gaps become sparser and the amount of energetic masking increases , and that is why it is harder to segregate one speaker in a crowd if they are not distinguished by unique features or a louder signal ., An interesting aspect of speech is that the relative amplitudes of its harmonics vary widely over time reflecting the changing formants of different phonemes ., Consequently , the saliency of the harmonic components changes continually , with weaker ones dropping out of the mixture as they become completely masked by the stronger components ., Despite these changes , speech syllables of one speaker maintain a stable representation of a sufficient number of features from one time instant to the next , and thus can maintain the continuity of their stream ., This is especially true of the pitch ( which changes only slowly and relatively little during normal speech ) ., The same is true of the spectral region of maximum energy which reflects the average formant locations of a given speaker , partially reflecting the timbre and length of their vocal tract ., Humans utilize either of these cues alone or in conjunction with additional cues to segregate mixtures ., For instance , to segregate speech with overlapping pitch ranges ( a mixture of male speakers ) , one may rely on the different spectral envelopes ( timbres ) , or on other potentially different features such as location or loudness ., 
Humans can also exploit more complex factors such as higher-level linguistic knowledge and memory as we discuss later ., In the example of Fig . 4C , the two speakers of Fig . 4A are segregated based on the coincidence of only the spectral components conveyed by the frequency-scale channels ., The extracted speech streams of the two speakers resemble the original unmixed signals , and their reconstructions exhibit significantly less mutual interference than the mixture as quantified later . Finally , as we discuss in more detail below , it is possible to segregate the speech mixture based on the pattern of correlations computed with one \u201canchor\u201d feature such as the pitch channels of the female , i . e . , using only the columns of the C-matrix near the female pitch channels as illustrated in Fig . 4D ., Exactly the same logic can be applied to any auxiliary function that is co-modulated in the same manner as the rest of the speech signal ., For instance , one may \u201clook\u201d at the lip movements of a speaker which open and close in a manner that closely reflects the instantaneous power in the signal ( or its envelope ) as demonstrated in 41 ., These two functions ( inter-lip distance and the acoustic envelope ) can then be exploited to segregate the target speech much as with the pitch channels earlier ., Thus , by simply computing the correlation between the lip function ( Fig . 5B ) or the acoustic envelope ( Fig . 5C ) with all the remaining channels , an effective mask can be readily computed to extract the target female speech ( and the background male speech too ) ., This example thus illustrates how in general any other co-modulated features of the speech signal ( e . g . 
, location , loudness , timbre , and visual signals such as lip movements can contribute to segregation of complex mixtures ) ., The performance of the model is quantified with a database of 100 mixtures formed from pairs of male-female speech randomly sampled from the TIMIT database ( Fig . 6 ) where the spectra of the clean speech are compared to those of the corresponding segregated versions ., The signal-to-noise ratio is computed as ( 1 ) ( 2 ) where are the cortical representations of the segregated sentences and are the cortical representations of the original sentences and is the cortical representation of the mixture ., Average SNR improvement was 6 dB for mixture waveforms mixed at 0 dB ., Another way to demonstrate the effectiveness of the segregation is to compare the match between the segregated samples and their corresponding originals ., This is evidenced by the minimal overlap in Fig . 6B ( middle panel ) across the distributions of the coincidences computed between each segregated sentence and its original version versus the interfering speech ., To compare directly these coincidences for each pair of mixed sentences , the difference between coincidences in each mixture are scatter-plotted in the bottom panel ., Effective pairwise segregation ( e . g . , not extracting only one of the mixed sentences ) places the scatter points along the diagonal ., Examples of segregated and reconstructed audio files can be found in S1 Dataset ., So far , attention and memory have played no direct role in the segregation , but adding them is relatively straightforward ., From a computational point of view , attention can be interpreted as a focus directed to one or a few features or feature subspaces of the cortical model which enhances their amplitudes relative to other unattended features ., For instance , in segregating speech mixtures , one might choose to attend specifically to the high female pitch in a group of male speakers ( Fig . 
4D ) , or to attend to the location cues or the lip movements ( Fig . 5C ) and rely on them to segregate the speakers ., In these cases , only the appropriate subset of columns of the C-matrices are needed to compute the nPCA decomposition ( Fig . 1B ) ., This is in fact also the interpretation of the simulations discussed in Fig . 3 for harmonic complexes ., In all these cases , the segregation exploited only the C-matrix columns marking coincidences of the attended anchor channels ( pitch , lip , loudness ) with the remaining channels ., Memory can also be strongly implicated in stream segregation in that it constitutes priors about the sources which can be effectively utilized to process the C-matrices and perform the segregation ., For example , in extracting the melody of the violins in a large orchestra , it is necessary to know first what the timbre of a violin is before one can turn the attentional focus to its unique spectral shape features and pitch range ., One conceptually simple way ( among many ) of exploiting such information is to use as \u2018template\u2019 the average auto-encoder weights ( masks ) computed from iterating on clean patterns of a particular voice or instrument , and use the resulting weights to perform an initial segregation of the desired source by applying the mixture to the stored mask directly ., A biologically plausible model of auditory cortical processing can be used to implement the perceptual organization of auditory scenes into distinct auditory objects ( streams ) ., Two key ingredients are essential: ( 1 ) a multidimensional cortical representation of sound that explicitly encodes various acoustic features along which streaming can be induced; ( 2 ) clustering of the temporally coherent features into different streams ., Temporal coherence is quantified by the coincidence between all pairs of cortical channels , slowly integrated at cortical time-scales as described in Fig . 
1 ., An auto-encoder network mimicking Hebbian synaptic rules implements the clustering through nonlinear PCA to segregate the sound mixture into a foreground and a background ., The temporal coherence model segregates novel sounds based exclusively on the ongoing temporal coherence of their perceptual attributes ., Previous efforts at exploiting explicitly or implicitly the correlations among stimulus features differed fundamentally in the details of their implementation ., For example , some algorithms attempted to decompose directly the channels of the spectrogram representations 42 rather than the more distributed multi-scale cortical representations ., They either used the fast phase-locked responses available in the early auditory system 43 , or relied exclusively on the pitch-rate responses induced by interactions among the unresolved harmonics of a voiced sound 44 ., Both these temporal cues , however , are much faster than cortical dynamics ( >100 Hz ) and are highly volatile to the phase-shifts induced in different spectral regions by mildly reverberant environments ., The cortical model instead naturally exploits multi-scale dynamics and spectral analyses to define the structure of all these computations as well as their parameters ., For instance , the product of the wavelet coefficients ( entries of the C-matrices ) naturally compute the running-coincidence between the channel pairs , integrated over a time-interval determined by the time-constants of the cortical rate-filters ( Fig . 
1 and Methods ) ., This ensures that all coincidences are integrated over time intervals that are commensurate with the dynamics of the underlying signals and that a balanced range of these windows is included to process slowly varying ( 2 Hz ) up to rapidly changing ( 16 Hz ) features ., The biological plausibility of this model rests on physiological and anatomical support for the two postulates of the model: a cortical multidimensional representation of sound and coherence-dependent computations ., The cortical representation is the end-result of a sequence of transformations in the early and central auditory system with experimental support discussed in detail in 34 ., The version used here incorporates only a frequency ( tonotopic ) axis , spectrotemporal analysis ( scales and rates ) , and pitch analysis 37 ., However , other features that are pre-cortically extracted can be readily added as inputs to the model such as spatial location ( from interaural differences and elevation cues ) and pitch of unresolved harmonics 45 ., The second postulate concerns the crucial role of temporal coherence in streaming ., It is a relatively recent hypothesis and hence direct tests remain scant ., Nevertheless , targeted psychoacoustic studies have already provided perceptual support of the idea that coherence of stimulus-features is necessary for perception of streams 27 , 28 , 46 , 47 ., Parallel physiological experiments have also demonstrated that coherence is a critical ingredient in streaming and have provided indirect evidence of its mechanisms through rapidly adapting cooperative and competitive interactions between coherent and incoherent responses 26 , 48 ., Nevertheless , much more remains uncertain ., For instance , where are these computations performed ?, How exactly are the ( auto-encoder ) clustering analyses implemented ?, And what exactly is the role of attentive listening ( versus pre-attentive processing ) in facilitating the various computations ?, All 
these uncertainties , however , invoke coincidence-based computations and adaptive mechanisms that have been widely studied or postulated such as coincidence detection and Hebbian associations 49 , 50 ., Dimensionality-reduction of the coincidence matrix ( through nonlinear PCA ) allows us effectively to cluster all correlated channels apart from others , thus grouping and designating them as belonging to distinct sources ., This view bears a close relationship to the predictive clustering-based algorithm by 51 in which input feature vectors are gradually clustered ( or routed ) into distinct streams ., In both the coherence and clustering algorithms , cortical dynamics play a crucial role in integrating incoming data into the appropriate streams , and therefore are expected to exhibit for the most part similar results ., In some sense , the distinction between the two approaches is one of implementation rather than fundamental concepts ., Clustering patterns and reducing their features are often ( but not always ) two sides of the same coin , and can be shown under certain conditions to be largely equivalent and yield similar clusters 52 ., Nevertheless , from a biological perspective , it is important to adopt the correlation view as it suggests concrete mechanisms to explore ., Our emphasis thus far has been on demonstrating the ability of the model to perform unsupervised ( automatic ) source segregation , much like a listener that has no specific objectives ., In reality , of course , humans and animals utilize intentions and attention to selectively segregate one source as the foreground against the remaining background ., This operational mode would similarly apply in applications in which the user of a technology identifies a target voice to enhance and isolate from among several based on the pitch , timbre , location , or other attributes ., The temporal coherence algorithm can be readily and gracefully adapted to incorporate such information and task 
objectives , as when specific subsets of the C-matrix columns are used to segregate a targeted stream ( e . g . , Fig . 3 and Fig . 4 ) ., In fact , our experience with the model suggests that segregation is usually of better quality and faster to compute with attentional priors ., In summary , we have described a model for segregating complex sound mixtures based on the temporal coherence principle ., The model computes the coincidence of multi-scale cortical features and clusters the coherent responses as emanating from one source ., It requires no prior information , statistics , or knowledge of source properties , but can gracefully incorporate them along with cognitive influences such as attention to , or memory of specific attributes of a target source to segregate it from its background ., The model provides a testable framework of the physiological bases and psychophysical manifestations of this remarkable ability ., Finally , the relevance of these ideas transcends the auditory modality to elucidate the robust visual perception of cluttered scenes 53 , 54 ., Sound is first transformed into its auditory spectrogram , followed by a cortical spectrotemporal analysis of the modulations of the spectrogram ( Fig . 
1A ) 34 ., Pitch is an additional perceptual attribute that is derived from the resolved ( low-order ) harmonics and used in the model 37 ., It is represented as a \u2018pitch-gram\u2019 of additional channels that are simply augmented to the cortical spectral channels prior to subsequent rate analysis ( see below ) ., Other perceptual attributes such as location and unresolved harmonic pitch can also be computed and represented by an array of channels analogously to the pitch estimates ., The auditory spectrogram , denoted by , is generated by a model of early auditory processing 55 , which begins with an affine wavelet transform of the acoustic signal , followed by nonlinear rectification and compression , and lateral inhibition to sharpen features ., This results in F = 128 frequency channels that are equally spaced on a logarithmic frequency axis over 5 . 2 octaves ., Cortical spectro-temporal analysis of the spectrogram is effectively performed in two steps 34: a spectral wavelet decomposition followed by a temporal wavelet decomposition , as depicted in Fig . 1A ., The first analysis provides multi-scale ( multi-bandwidth ) views of each spectral slice , resulting in a 2D frequency-scale representation ., It is implemented by convolving the spectral slice with complex-valued spectral receptive fields similar to Gabor functions , parametrized by spectral tuning , i . e . , ., The outcome of this step is an array of FxS frequency-scale channels indexed by frequency and local spectral bandwidth at each time instant t ., We typically used 2 to 5 scales in our simulations ( e . g . 
, cyc\/oct ) , producing copies of the spectrogram channels with different degrees of spectral smoothing ., In addition , the pitch of each spectrogram frame is also computed ( if desired ) using a harmonic template-matching algorithm 37 ., Pitch values and saliency were then expressed as pitch-gram ( P ) channels that are appended to the frequency-scale channels ( Fig . 1B ) ., The cortical rate-analysis is then applied to the modulus of each of the channel outputs in the freq-scale-pitch array by passing them through an array of modulation-selective filters ( ) , each indexed by its center rate , which ranges over Hz in octave steps ( Fig . 1B ) ., This temporal wavelet analysis of the response of each channel is described in detail in 34 ., Therefore , the final representation of the cortical outputs ( features ) is along four axes denoted by ., It consists of coincidence matrices per time frame , each of size x ( Fig . 1B ) ., The exact choice of all above parameters is not critical for the model in that the performance changes very gradually when the parameters or number of feature channels are altered ., All parameter values in the model were chosen based on previous simulations with the various components of the model ., For example , the choice of rates ( 2\u201332 Hz ) and scales ( 1\u20138 cyc\/oct ) reflected their utility in the representation of speech and other complex sounds in numerous previous applications of the cortical model 34 ., Thus , the parameters chosen were known to reflect speech and music , but of course could have been chosen differently if the stimuli were drastically different ., The least committal choice is to include the largest range of scales and rates that is computationally feasible ., In our implementations , the algorithm became noticeably slow when , , , and ., The decomposition of the C-matrices is carried out as described earlier in Fig . 
1B ., The iterative procedure to learn the auto-encoder weights employs the Limited-memory Broyden-Fletcher-Goldfarb-Shanno ( L-BFGS ) method as implemented in 56 ., The output weight vectors ( Fig . 1B ) thus computed are subsequently applied as masks on the input channels ., This procedure is repeated every time step using the weights learned in the previous time step as initial conditions to ensure that the assignment of the learned eigenvectors remains consistent over time ., Note that the C matrices do not change rapidly , but rather slowly , as fast as the time-constants of their corresponding rate analyses allow ( ) ., For example , for the Hz filters , the cortical outputs change slowly , reflecting a time-constant of approximately 250 ms . More often , however , the C-matrix entries change much more slowly , reflecting the sustained coincidence patterns between different channels ., For example , in the simple case of two alternating tones ( Fig . 2A ) , the C-matrix entries reach a steady state after a fraction of a second , and then remain constant , reflecting the unchanging coincidence pattern between the two tones ., Similarly , if the pitch of a speaker remains relatively constant , then the correlation between the harmonic channels remains approximately constant since the partials are modulated similarly in time ., This aspect of the model explains the source of the continuity in the streams ., The final step in the model is to invert the masked cortical outputs back to the sound 34 .","headings":"Introduction, Results, Discussion, Methods","abstract":"A new approach for the segregation of monaural sound mixtures is presented based on the principle of temporal coherence and using auditory cortical representations ., Temporal coherence is the notion that perceived sources emit coherently modulated features that evoke highly-coincident neural response patterns ., By clustering the feature channels with coincident responses and reconstructing their input , one 
may segregate the underlying source from the simultaneously interfering signals that are uncorrelated with it ., The proposed algorithm requires no prior information or training on the sources ., It can , however , gracefully incorporate cognitive functions and influences such as memories of a target source or attention to a specific set of its attributes so as to segregate it from its background ., Aside from its unusual structure and computational innovations , the proposed model provides testable hypotheses of the physiological mechanisms of this ubiquitous and remarkable perceptual ability , and of its psychophysical manifestations in navigating complex sensory environments .","summary":"Humans and many animals can effortlessly navigate complex sensory environments , segregating and attending to one desired target source while suppressing distracting and interfering others ., In this paper , we present an algorithmic model that can accomplish this task with no prior information or training on complex signals such as speech mixtures , and speech in noise and music ., The model accounts for this ability relying solely on the temporal coherence principle , the notion that perceived sources emit coherently modulated features that evoke coincident cortical response patterns ., It further demonstrates how basic cortical mechanisms common to all sensory systems can implement the necessary representations , as well as the adaptive computations necessary to maintain continuity by tracking slowly changing characteristics of different sources in a scene .","keywords":"auditory cortex, machine learning algorithms, neural networks, engineering and technology, noise control, audio signal processing, signal processing, brain, neuroscience, hearing, noise reduction, artificial neural networks, artificial intelligence, computational neuroscience, acoustical engineering, computer and information sciences, auditory system, speech signal processing, anatomy, biology and life 
sciences, sensory systems, sensory perception, computational biology, cognitive science, machine learning","toc":null} +{"Unnamed: 0":1422,"id":"journal.pcbi.1004014","year":2014,"title":"Bilinearity in Spatiotemporal Integration of Synaptic Inputs","sections":"For information processing , a neuron receives and integrates thousands of synaptic inputs from its dendrites and then induces the change of its membrane potential at the soma ., This process is usually known as dendritic integration 1\u20133 ., The dendritic integration of synaptic inputs is crucial for neuronal computation 2\u20134 ., For example , the integration of excitatory and inhibitory inputs has been found to enhance motion detection 5 , regularize spiking patterns 6 , and achieve optimal information coding 7 in many sensory systems ., They have also been suggested to be able to fine tune information processing within the brain , such as the modulation of frequency 8 and the improvement of the robustness 9 of gamma oscillations ., In order to understand how information is processed in neuronal networks in the brain , it is important to understand the computational rules that govern the dendritic integration of synaptic inputs ., Dendritic integration has been brought into focus with active experimental investigations ( see reviews 1 , 10 and references therein ) ., There have also been many theoretical developments based on physiologically realistic neuron models 11 , 12 ., Among those works , only a few investigate quantitative dendritic integration rules for a pair of excitatory and inhibitory inputs 3 , 13 and there has yet to be an extensive investigation of the integration of a pair of excitatory inputs or a pair of inhibitory inputs ., In this work , we propose a precise quantitative rule to characterize the dendritic integration for all types of synaptic inputs and validate this rule via realistic neuron modeling and electrophysiological experiments ., We first develop a theoretical approach 
to quantitatively characterize the spatiotemporal dendritic integration ., Initially , we introduce an idealized two-compartment passive cable model to understand the mathematical structure of the dendritic integration rule ., We then verify the rule by taking into account the complicated dendritic geometry and active ion channels ., For time-dependent synaptic conductance inputs , we develop an asymptotic approach to analytically solve the cable model ., In this approach , the membrane potential is represented by an asymptotic expansion with respect to the input strengths ., Consequently , a hierarchy of cable-type equations with different orders can be derived from the cable model ., These equations can be analytically solved order by order using the Green's function method ., The asymptotic solution to the second order approximation is shown to be in excellent agreement with the numerical solutions of the original cable model with physiologically realistic parameters ., Based on our asymptotic approach , we obtain a new theoretical result , namely , a nonlinear spatiotemporal dendritic integration rule for a pair of synaptic inputs: the summed somatic potential ( SSP ) can be well approximated by the summation of the two postsynaptic potentials and elicited separately , plus an additional third nonlinear term proportional to their product , i . e . 
, ( 1 ) The proportionality coefficient encodes the spatiotemporal information of the input signals , including the input locations and the input arrival times ., In addition , we demonstrate that the coefficient is nearly independent of the input strengths ., Because the correction term to the linear summation of and takes a bilinear form , we will refer to the rule ( 1 ) as the bilinear spatiotemporal dendritic integration rule ., In the remainder of the article , unless otherwise specified , all membrane potentials will refer to those measured at the soma ., We note that our bilinear integration rule is consistent with recent experimental observations 3 ., In the experiments 3 , the rule was examined at the time when the excitatory postsynaptic potential ( EPSP ) measured at the soma reaches its peak for a pair of excitatory and inhibitory inputs elicited concurrently ., We demonstrate that our bilinear integration rule is more general than that in Ref ., 3:, ( i ) our rule holds for a pair of excitatory and inhibitory inputs that can arrive at different times;, ( ii ) our rule is also valid at any time and is not limited to the peak time of the EPSP;, ( iii ) our rule is general for all types of paired synaptic input integration , including excitatory-inhibitory , excitatory-excitatory and inhibitory-inhibitory inputs ., Our bilinear integration rule is derived from the two-compartment passive cable model ., We then validate the rule in a biologically realistic pyramidal neuron model with active ion channels embedded ., The simulation results from the realistic model are consistent with the rule derived from the passive cable model ., We further validate the rule in electrophysiological experiments in rat hippocampal CA1 pyramidal neurons ., All of our results suggest that the form of the bilinear integration rule is preserved in the presence of active dendrites ., As mentioned previously , there are thousands of synaptic inputs received by a neuron 
in the brain ., We therefore further apply our analysis to describe the dendritic integration of multiple synaptic inputs ., We demonstrate that the spatiotemporal dendritic integration of all synaptic inputs can be decomposed into the sum of all possible pairwise dendritic integrations , and each pair obeys the bilinear integration rule ( 1 ) , i . e . , ( 2 ) where denotes the SSP , denotes the individual EPSP , denotes the individual inhibitory postsynaptic potential ( IPSP ) , , , and are the corresponding proportionality coefficients with superscripts denoting the index of the synaptic inputs ., We then confirm the bilinear integration rule ( 2 ) numerically using realistic neuron modeling ., The decomposition of multiple-input integration in rule ( 2 ) leads to a graph representation of the dendritic integration ., Each node in the graph corresponds to a synaptic input location , and each edge connecting two nodes represents the bilinear term for a pair of synaptic inputs given at the corresponding locations ., This graph evolves with time , and is all-to-all connected when stimuli are given at all synaptic sites simultaneously ., However , based on simulation results and experimental observations , we can estimate that there are only a small number of activated synaptic integrations , or edges in the graph , within a short time interval ., Therefore , the graph representing the dendritic integration can indeed be functionally sparse ., Finally , we comment that , in general , it is theoretically challenging to analytically describe the dynamical response of a neuron with dendritic structures under time-dependent synaptic conductance inputs ., One simple approach to circumvent this difficulty is to analyze the steady state of neuronal input-output relationships by assuming that both the synaptic conductance and the membrane potential are constant 3 , 12 ., Such analyses can be applied to study dendritic integration , but they usually oversimplify the 
description of the spatial integration , and fail to describe the temporal integration ., Another approach to circumvent the difficulty is to study the cable model 14 , 15 analytically or numerically ., For the subthreshold regime , in which voltage-gated channels are weakly activated , the dendrites can be considered as a passive cable ., Along the cable , the membrane potential is linearly dependent on injected current input ., This linearity enables one to use the Green's function method to analytically obtain the membrane potential with externally injected current ., In contrast , the membrane potential depends nonlinearly on the synaptic conductance input 12 ., This nonlinearity greatly complicates mathematical analyses ., Therefore , in order to solve the cable model analytically , one usually makes the approximation of constant synaptic conductance 16 , 17 ., The approximation can help investigate some aspects of dendritic integration; however , the approximation in such a case is not sufficiently realistic because the synaptic conductances in vivo are generally time-dependent ., On the other hand , one can study the dendritic integration in the cable model numerically ., The compartmental modeling approach 14 enables one to solve the cable model with time-dependent synaptic inputs ., This approach has been used to investigate many aspects of dendritic integration ., For instance , it was discovered computationally that dendritic integration of excitatory inputs obeys a certain qualitative rule , i . e . 
, EPSPs are first integrated nonlinearly at individual branches before summed linearly at the soma 18 , 19 , which was verified later in experiments 20 , 21 ., Clearly , the computational approach can help gain insights into various phenomena of spatiotemporal dynamics observed at the dendrites , however , a deep , comprehensive understanding often requires analytical approaches ., Note that this point has also been emphasized in Ref ., 22 ., Here , our analytical asymptotic method can solve the cable model with time-dependent synaptic inputs analytically and reveal a precise quantitative spatiotemporal dendritic integration rule , as will be further illustrated below ., We begin to study the spatiotemporal dendritic integration of a pair of excitatory and inhibitory inputs ., An analytical derivation of the bilinear integration rule is described in the section of Derivation of the Rule ., The details of the cable model used in the derivation can be found in the section of Materials and Methods ., The validation of the bilinear integration rule using the realistic neuron modeling and electrophysiological experiments is described in the section of Validation of the Rule ., The spatial dependence of the coefficient in the rule is described in the section of Spatial Dependence of ., So far we have addressed the dendritic integration for a pair of excitatory and inhibitory inputs ., A natural question arises: how does a neuron integrate a pair of time-dependent synaptic conductance inputs with identical type ?, The dendritic integration of excitatory inputs has been extensively investigated in experiments ( reviewed in Ref . 
1 ) , yet a precise quantitative characterization is still lacking ., According to our idealized cable model , given a pair of excitatory inputs with input strengths and at locations and and at times and , the dynamics of the membrane potential on the dendrite is governed by the following equation: ( 22 ) with the initial and boundary conditions the same as given in Equations ( 4 ) \u2013 ( 6 ) ., Similarly , we can represent its solution as an asymptotic series and solve it order by order to obtain the following bilinear integration rule: ( 23 ) where and are EPSPs induced by two individual excitatory inputs , and is the SSP when the two excitatory inputs are present ., Similar to the case of a pair of excitatory and inhibitory inputs , the shunting coefficient only depends on the excitatory input locations and the input time difference ., It does not depend on the EPSP amplitudes ., Here will still be referred to as a shunting coefficient because the origin of the nonlinear integration for the paired excitatory inputs is exactly the same as that for the paired excitatory and inhibitory inputs from the passive cable model ., The bilinear integration rule ( 23 ) is found to be consistent with the numerical results obtained using the same realistic pyramidal neuron model as the one used in the section of Bilinear Rule for E\u2013I Integration ., For a pair of excitatory inputs with their locations fixed on the dendritic trunk , the rule holds when the amplitude of each EPSP is less than ., For the case of concurrent inputs , at the time when one of the EPSPs reaches its peak value , is found to be linearly dependent on , as shown in Fig . 6A ., This linear relationship indicates is independent of the amplitudes of the two EPSPs ., In addition , as shown in Fig . 
6B , the bilinear integration rule is numerically verified in the time interval , for , within which the amplitudes of the EPSPs are relatively large ., For the case of nonconcurrent inputs , the bilinear integration rule is also numerically verified in the same way , as shown in Fig . 6C\u2013D ., In addition , we find that when the input strengths become sufficiently strong so as to make the depolarized membrane potential too large , i . e . , there is a deviation from the bilinear integration rule ( 23 ) ., This deviation can be ascribed to the voltage-gated ionic channel activities in our realistic pyramidal neuron model ., After blocking the active channels , the rule becomes valid with a different value of for large EPSP amplitudes , as shown in Fig . 7 ., However , we note that , regardless of input strengths , the amplitude of SC is always two orders of magnitude smaller than the amplitude of SSP ., Therefore , the integration of two excitatory inputs can be naturally approximated by the linear summation of two individual EPSPs , i . e . ., We then perform electrophysiological experiments with a pair of excitatory synaptic inputs to confirm the linear summation ., As expected , this linear summation is also observed in our experiments for both concurrent and nonconcurrent input cases , as shown in Fig . 
6E and 6F , respectively ., Note that , the linear summation is also consistent with experimental observations as reported in Ref ., 24 ., Similarly , for a pair of inhibitory inputs , we can arrive at the following bilinear integration rule from the cable model: ( 24 ) where and are IPSPs induced by two individual inhibitory inputs , and is the SSP when the two inhibitory inputs are present ., Here , is the shunting coefficient that is independent of the IPSPs amplitudes but is dependent on the input time difference and input locations ., The above bilinear integration rule ( 24 ) is consistent with our numerical results using the realistic pyramidal neuron model , as shown in Fig . 8A\u2013D ., Our electrophysiological experimental observations further confirm this rule , as shown in Fig . 8E\u2013H ., In the previous sections , we have discussed the integration of a pair of synaptic inputs ., In vivo , a neuron receives thousands of excitatory and inhibitory inputs from dendrites 2 ., Therefore , we now address the question of whether the integration rule derived for a pair of synaptic inputs can be generalized to the case of multiple inputs ., Our theoretical analysis shows that , for multiple inputs , the SSP can be approximated by the linear sum of all individual EPSPs and IPSPs , plus the bilinear interactions between all the paired inputs with shunting coefficients , , and respectively ( the superscript labels the synaptic inputs ) , i . e . , ( 25 ) We next validate the rule ( 25 ) using the realistic pyramidal neuron model ., It has been reported that , for a CA1 neuron , inhibitory inputs are locally concentrated on the proximal dendrites while excitatory inputs are broadly distributed on the entire dendrites 25 ., Based on this observation , we randomly choose 15 excitatory input locations and 5 inhibitory input locations on the model neurons dendrites ( Fig . 
9A ) ., In the simulation , all inputs are elicited starting randomly from to ., In order to compare Equation ( 25 ) with the SSP simulated in the realistic neuron model , we first measure , , and pair by pair for all possible pairs ., We then record all membrane potential traces and induced by the corresponding individual synaptic inputs ., Our results show that the SSP measured from our simulation is indeed given by the bilinear integration rule ( 25 ) , as shown in Fig . 9B and 9C ., In contrast , the SSP in our numerical simulation deviates significantly from the linear summation of all individual EPSPs and IPSPs ., According to our bilinear integration rule ( 25 ) , the dendritic integration of multiple synaptic inputs can be decomposed into the summation of all possible pairwise dendritic integrations ., Therefore , we can map dendritic computation in a dendritic tree onto a graph ., Each dendritic site corresponds to a node in the graph and the corresponding shunting component is mapped to the weight of the edge connecting the two nodes ., We refer to such a graph as a dendritic graph ., The dendritic graph is an all-to-all connected graph if all stimuli are given concurrently ( Fig . 10A ) ., However , the dendritic integration for all possible pairs of synaptic inputs is usually not activated concurrently in realistic situations ., For instance , if the arrival time difference between two inputs is sufficiently large , there is no interaction between them ., The activation level of the nonlinear dendritic integration for a pair of synaptic inputs can be quantified by the SC amplitude\u2014the weight of the edge in the graph ., The simulation result shows that the number of activated edges at any time is relatively small on the dendritic graph ( Fig . 10B\u2013D ) , compared with the total number of edges on the all-to-all connected graph ( Fig . 
10A ) ., Therefore , for the case of a hippocampal pyramidal neuron , the dendritic graph could be functionally sparse in time ., The functional sparsity of a dendritic graph may also exist in neocortical pyramidal neurons ., In vivo , a cortical pyramidal neuron receives about synaptic inputs 26 ., Most of them are from other cortical neurons 27 , 28 , which typically fire about 10 spikes per second in awake animals 29 , 30 ., Thus , the neuron can be expected to receive synaptic inputs per second ., The average number of synaptic inputs within ( membrane potential time constants in vivo ) is ., The number of activated dendritic integration pairs within the interval is , which is relatively small compared with the total possible synaptic integration pairs ., Therefore , the activated integrations or edges in the dendritic graph within a short time window can indeed be functionally sparse ( ) ., In general , the neuronal firing rates vary across different cell types , cortical regions , brain states and so on ., Therefore , based on the above estimate , in an average sense , the graph of dendritic integration is functionally sparse ., Our bilinear dendritic integration rule ( 21 ) is consistent with the rule previously reported 3 , but is more general in the following aspects:, ( i ) Our dendritic integration rule holds at any time and is not limited to the time when the EPSP reaches its peak value ., ( ii ) The rule holds even when the two inputs are nonconcurrent ., This situation often occurs because the excitatory and inhibitory inputs may not always arrive at precisely the same time ., ( iii ) The form of the rule can be extended to describe the integration between a pair of excitatory inputs , a pair of inhibitory inputs , and even multiple inputs of mixed types ., The spatiotemporal information of synaptic input interactions is coded in the shunting coefficient , which is a function of the input locations and input arrival time difference ., Our bilinear 
integration rule holds in the subthreshold regime for a large range of membrane potentials ., When we derive the bilinear rule from the passive cable model , we assume that the input strengths or the amplitudes of membrane potentials are required to be small ., This assumption forms the basis of the asymptotic analysis , because the second order asymptotic solutions of EPSP , IPSP and SSP converge to their exact solutions as the asymptotic parameters and ( denoting the excitatory and inhibitory input strengths ) approach zero ., In general , in the passive cable model , the bilinear rule will be more accurate for small amplitudes of EPSPs and IPSPs than for large amplitudes ., Importantly , the assumption holds naturally in the physiological regime: when the EPSP amplitude is less than 6mV and the IPSP amplitude is less than -3mV , and are small ., However , even for an EPSP amplitude close to the threshold , i . e . , 10mV , which is unusually large physiologically , we can show that the second order asymptotic solution can still well approximate the EPSP with a relative error less than 5% ., Thus the bilinear rule is still valid for large depolarizations near the threshold ., The validity of the bilinear rule for large membrane potentials is also confirmed in both simulations and experiments ., In particular , in the analysis of our experimental data , to validate the bilinear rule , we have already included all the data when the EPSP amplitude is below and close to the threshold because we have only excluded those data corresponding to the case when a neuron fires ., Our bilinear dendritic integration rule ( 21 ) is derived from the passive cable model ., However , the simulation results and the experimental observations demonstrate that the form of dendritic integration is preserved for active dendrites ., Additional simulation results show that for the same input locations , the shunting coefficients are generally larger on the active dendrites than those on the passive 
dendrites with all active channels blocked ., We also note that the value of in simulation is different from the value measured in experiments ., This difference may arise from the fact that some parameters of the passive membrane properties , such as the membrane leak conductance , may not be exactly the same as those in the biological neuron , and we have only used a limited set of ion channels in simulation compared with those in the biological neuron ., In addition , the input locations in the simulation and the experiments are different , which may also contribute to this deviation ., However , the bilinear form is a universal feature in both simulation and experiment ., By fixing the excitatory input location while varying the inhibitory input location , our model shows that there exists a region in the distal dendritic trunk within which the shunting inhibition can be more powerful , i . e . , a larger , than in proximal dendrites ., This result is consistent with what is reported in Ref ., 31 ., Compared with Ref ., 31 , our work provides a different perspective of dendritic computation ., In their work , multiple inhibitory inputs can induce a global shunting effect on the dendrites ., However , if we focus on the shunting effect only at the soma instead of the dendrites , our theory shows that all the interactions among multiple inputs can then be decomposed into pairwise interactions , as described by the bilinear integration rule ( 25 ) ., In addition , in this work , we focus on the somatic membrane potential that is directly related to the generation of an action potential ., However , it is also important to investigate the local integration of membrane potentials measured at a dendritic site instead of that measured at the soma ., Asymptotic analysis of the cable model can show that our bilinear integration rule is still valid for the description of the integration on the dendrites ., On the dendrites , the broadly distributed dendritic spines with 
high neck resistances 32 , 33 will filter a postsynaptic potential to a few millivolts on a branch 34 , 35 ., Within this regime our bilinear integration rule is valid ., Note that our rule may fail to capture the supralinear integration of synaptic inputs measured on the dendrites during the generation of a dendritic spike 36 ., However , if the integration is measured at the soma , our rule remains valid even when there is a dendritic spike induced by a strong excitatory input and an inhibitory synaptic input on different branches 3 ., The bilinear integration rule ( 25 ) can help improve the computational efficiency in a simulation of a neuronal network with dendritic structures ., Based on our results , once the shunting coefficients for all pairs of input locations are measured , we can predict the neuronal response at the soma by the bilinear integration rule ( 25 ) ., By taking advantage of this , one can establish library-based algorithms to simulate the membrane potential dynamics of a biologically realistic neuron ., An example of a library-based algorithm can be found in Ref ., 37 ., To be specific , based on the full simulation of a realistic neuron model , we can measure the time-dependent shunting coefficient as a function of the arrival time difference and input locations for all possible pairs of synaptic inputs and record them in a library in advance ., For a particular simulation task , given the specific synaptic inputs on the dendrites , we can then search the library for the corresponding shunting coefficients to compute the neuronal response according to the bilinear integration rule ( 25 ) directly ., In such a computational framework , one can avoid directly solving partial differential equations that govern the spatiotemporal dynamics of dendrites and greatly reduce the computational cost for large-scale simulations of networks of neurons incorporating dendritic integration ., The animal-use protocol was approved by the Animal Management Committee 
of the State Key Laboratory of Cognitive Neuroscience & Learning , Beijing Normal University ( Reference NO . : IACUC-NKLCNL2013-10 ) ., We consider an idealized passive neuron whose isotropic spherical soma is attached to an unbranched cylindrical dendrite with finite length and diameter ., Each small segment in the neuron can be viewed as an RC circuit with a constant capacitance and leak conductance density 11 , 38 ., The current conservation within a segment on the dendrite leads to ( 26 ) where is the membrane potential with respect to the resting potential on the dendrite , is the membrane capacitance per unit area , and is the leak conductance per unit area ., Here , is the synaptic current given by: ( 27 ) where and are excitatory and inhibitory synaptic conductance per unit area and and are their reversal potentials , respectively ., When excitatory inputs are elicited at dendritic sites and inhibitory inputs are elicited at dendritic sites , we have ( 28 ) where ., For a synaptic input of type , is the input strength of the input at the location , is the arrival time of the input at the location , is the input location ., The unitary conductance is often modeled as ( 29 ) with the peak value normalized to unity by the normalization factor , and with and as rise and decay time constants , respectively 38 ., Here is a Heaviside function ., The axial current can be derived based on Ohm's law , ( 30 ) where is the axial resistivity ., Taking the limit , Equation ( 26 ) becomes our unbranched dendritic cable model , ( 31 ) In particular , for a pair of excitatory and inhibitory inputs with strength and received at and , and at time and , respectively , we have ( 32 ) Similarly , for a pair of excitatory or inhibitory inputs with strengths and received at and , and at time and ( ) , respectively , we have ( 33 ) For the boundary condition of the cable model Equation ( 31 ) , we assume one end of the dendrite is sealed: ( 34 ) For the other end connecting to 
the soma , which can also be modeled as an RC circuit , by the law of current conservation , we have ( 35 ) where is the somatic membrane area , and is the somatic membrane potential ., The dendritic current flowing to the soma , , takes the form of Equation ( 30 ) at ., Because the membrane potential is continuous at the connection point ( 36 ) we arrive at the other boundary condition at : ( 37 ) For a resting neuron , the initial condition is simply set as ( 38 ) In the absence of synaptic inputs , Equation ( 31 ) is a linear system ., Using an impulse input , its Green's function can be obtained from ( 39 ) with the following boundary conditions and initial condition , For simplicity , letting , , , the solution of Equation ( 39 ) can be obtained from the following system , ( 40 ) with rescaled boundary and initial conditions , where ., Taking the Laplace transform of Equation ( 40 ) , we obtain ( 41 ) Combining the two boundary conditions ( is thus eliminated ) , we have ( 42 ) where ( 43 ) whose denominator is denoted as for later discussions ., For the inverse Laplace transform , we need to deal with singular points that are given by the roots of ., It can be easily verified that these singularities are simple poles and is analytic at infinity ., Then can be written as ( 44 ) where is a constant coefficient in the complex domain , and are the singular points ., Then taking the inverse Laplace transform of Equation ( 44 ) , we obtain ( 45 ) Now we only need to solve and in Equation ( 45 ) to obtain the Green's function of Equation ( 40 ) ., We solve the singular points first ., Defining yields ( 46 ) whose roots can be determined numerically ., There are solutions for with for and ., Next , to determine the factors , we use the residue theorem for integrals ., For a contour that winds in the counter-clockwise direction around the pole and that does not include any other singular points , the integral of on this contour is given by ( 47 ) Using Equations ( 42\u201344 
) and ( 47 ) , we obtain ( 48 ) where ( 49 ) for ., The solution of the original Green's function for Equation ( 39 ) can now be expressed as ( 50 ) We first consider the case when a pair of excitatory and inhibitory inputs is received by a neuron ., Similar results can be obtained for a pair of excitatory inputs and a pair of inhibitory inputs ., For the physiological regime ( the amplitude of an EPSP being less than and the amplitude of an IPSP being less than ) , the corresponding required input strengths and are relatively small ., Therefore , given an excitatory input at location and time , and an inhibitory input at location and time , we represent as an asymptotic series in the powers of and , ( 51 ) Substituting Equation ( 51 ) into the cable equation ( 31 ) , order by order , we obtain a set of differential equations ., For the zeroth order , we have ( 52 ) Using the boundary and initial conditions Equations ( 34 ) , ( 37 ) , and ( 38 ) , the solution is simply ( 53 ) For the first order of excitation , we have ( 54 ) With the help of the Green's function , the solution can be expressed as ( 55 ) where \u2018\u2019 denotes convolution in time ., For the second order of excitation , we have ( 56 ) Because is given by Equation ( 55 ) , the solution of Equation ( 56 ) is ( 57 ) Similarly , we obtain the first- and second-order inhibitory solutions , ( 58 ) ( 59 ) For the order of , we have ( 60 ) whose solution is obtained as follows , ( 61 ) For the numerical simulation of the two-compartment passive cable model Equation ( 3 ) , the Crank-Nicolson method 39 was used with time step and space step ., Parameters in our simulation are within the physiological regime 3 , 12 with , , , , , , , ., , , , and ., The time constants here were chosen to be consistent with the conductance inputs in the experiment 3 ., The realistic pyramidal model is the same as that in Ref ., 3 ., The morphology of the reconstructed pyramidal neuron includes 200 compartments and was obtained
from the Duke-Southampton Archive of neuronal morphology 40 ., The passive cable properties and the density and distribution of active conductances in the model neuron were based on published experimental data obtained from hippocampal and cortical pyramidal neurons 18 , 19 , 34 , 41\u201350 ., We used the NEURON software Version 7 . 3 51 to simulate the model with time step ., The experimental measurements of summation of EPSPs or IPSPs in single hippocampal CA1 pyramidal cells in the acute brain slice followed a method described in Ref ., 3 , with some modifications ., A brief description of the modified experimental procedure is as follows ., Acute hippocampal slices ( thick ) were prepared from Sprague Dawley rats ( postnatal day 14\u201316 ) , using a vibratome ( VT1200 , Leica ) ., The slices were incubated at 34\u00b0C for 30 min before transferring to a recording chamber perfused with the aCSF solution ( 2ml\/min; 30\u201332\u00b0C ) ., The aCSF contained ( in mM ) 125 NaCl , 3 KCl , 2 CaCl2 , 2 MgSO4 , 1 . 25 NaH2PO4 , 1 . 3 sodium ascorbate , 0 .
6 sodium pyruvate , 26 NaHCO3","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Neurons process information via integration of synaptic inputs from dendrites ., Many experimental results demonstrate dendritic integration could be highly nonlinear , yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically ., Based on asymptotic analysis of a two-compartment passive cable model , given a pair of time-dependent synaptic conductance inputs , we derive a bilinear spatiotemporal dendritic integration rule ., The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately , plus a third additional bilinear term proportional to their product with a proportionality coefficient ., The rule is valid for a pair of synaptic inputs of all types , including excitation-inhibition , excitation-excitation , and inhibition-inhibition ., In addition , the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations ., The coefficient is demonstrated to be nearly independent of the input strengths but is dependent on input times and input locations ., This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons ., The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs ., The integration of multiple inputs can be decomposed into the sum of all possible pairwise integration , where each paired integration obeys the bilinear rule ., This decomposition leads to a graph representation of dendritic integration , which can be viewed as functionally sparse .","summary":"A neuron , as a fundamental unit of brain computation , exhibits extraordinary computational power in 
processing input signals from neighboring neurons ., It usually integrates thousands of synaptic inputs from its dendrites to achieve information processing ., This process is known as dendritic integration ., To elucidate information coding , it is important to investigate quantitative spatiotemporal dendritic integration rules ., However , there have yet to be extensive experimental investigations to quantitatively describe dendritic integration ., Meanwhile , most theoretical neuron models considering time-dependent synaptic inputs are difficult to solve analytically and thus cannot be used to quantify dendritic integration ., In this work , we develop a mathematical method to analytically solve a two-compartment neuron model with time-dependent synaptic inputs ., Using these solutions , we derive a quantitative rule to capture the dendritic integration of all types , including excitation-inhibition , excitation-excitation , inhibition-inhibition , and multiple excitatory and inhibitory inputs ., We then validate our dendritic integration rule through both realistic neuron modeling and electrophysiological experiments ., We conclude that the general spatiotemporal dendritic integration structure can be well characterized by our dendritic integration rule ., We finally demonstrate that the rule leads to a graph representation of dendritic integration that exhibits functionally sparse properties .","keywords":"computational neuroscience, neuroscience, biology and life sciences, computational biology","toc":null} +{"Unnamed: 0":811,"id":"journal.pcbi.1005088","year":2016,"title":"Exome Sequencing and Prediction of Long-Term Kidney Allograft Function","sections":"Survival of patients afflicted with End Stage Renal Disease ( ESRD ) is superior following kidney transplantation compared to dialysis therapy ., The short-term outcomes of kidney grafts have steadily improved since the early transplants with refinements in immunosuppressive regimens , use of DNA-based
human leukocyte antigen ( HLA ) typing , and better infection prophylaxis 1\u20133 ., Despite these advances , data collected across the USA and Europe show that 40\u201350% of kidney allografts fail within ten years of transplantation 4 ., This observation strongly suggests that as yet uncharacterized factors , including genomic loci , may adversely impact long-term post-transplantation outcomes ., The HLA is a cluster of genes on the short arm of chromosome 6 and constitutes the major histocompatibility complex ( MHC ) responsible for self\/non-self discrimination in humans ., Multiple clinical studies have demonstrated the importance of HLA-matching to improve kidney graft outcome ., Therefore , in many countries , including the USA , donor kidney allocation algorithms include consideration of HLA matching of the kidney recipient and donor ., With widespread incorporation of HLA matching in kidney organ allocation decisions , it has become clearer that HLA mismatching represents an important risk factor for kidney allograft failure but fails to fully account for the invariable decline in graft function and failure in a large number of recipients over time ., Indeed , only a 15% survival difference exists at 10 years post transplantation between the fully matched kidneys and the kidneys mismatched for both alleles at the HLA-A , B and DR loci 5 ., Large cohorts of kidney graft recipients have also been studied to separate the immunological effect mediated by HLA from the non-HLA effects 6 ., Overall , prior observations suggest that mismatches at non-HLA loci in the genome could influence long-term graft outcomes ., Also , antibodies directed at HLA as well as non-HLA ( e . g .
, MHC class I polypeptide-related sequence MICA ) have been associated with allograft rejection and reduced graft survival rates ., Indeed , it has been reported that the presence of anti-MICA antibodies in the pre-transplant sera is associated with graft failure despite HLA matching of the kidney recipient with the organ donor ., Here , we used exome sequencing to determine the sequences of the HLA as well as non-HLA peptides encoded by the donor organ and displayed on its cell surface , as well as bioinformatics analyses to determine donor sequences not present in the recipient ., The allogenomics approach integrates the unique features of transplantation , such as the existence of two genomes in a single individual , and the recipient\u2019s immune system mounting an immune response directed at either HLA or non-HLA antigens displayed by the donor kidney ., In this report , we show that this new concept helps predict long-term kidney transplant function from the genomic information available prior to transplantation ., We found that a statistical model that incorporates time as covariate , HLA , donor age and the AMS ( allogenomics mismatch score , introduced in this study ) , predicts graft function through time better than a model that includes the other factors and covariates , but not the AMS ., The allogenomics concept is the hypothesis that interrogation of the coding regions of the entire genome for both the organ recipient and organ donor DNA can identify the number of incompatible amino-acids ( recognized as non-self by the recipient ) that inversely correlates with long-term function of the kidney allograft ., Fig 1A is a schematic illustration of the allogenomics concept ., Because human autosomes have two copies of each gene , we consider two possible alleles in each genome of a transplant pair ., To this end , we estimate allogenomics score contributions between zero and two , depending on the number of different amino acids that the donor genome 
encodes for at a given protein position ., Fig 1B shows the possible allogenomics score contributions when the amino acids in question are either an alanine , or a phenylalanine or an aspartate amino acid ., The allogenomics mismatch score ( AMS ) is a sum of amino acid mismatch contributions ., Each contribution represents an allele coding for a protein epitope that the donor organ may express and that the recipient immune system could recognize as non-self ( see Equation 1 and 2 in Fig 1C and Materials and Methods and full description in S1 File ) ., We have developed and implemented a computational approach to estimate the AMS from genotypes derived for pairs of recipient and donor genomes ., ( See Materials and Methods for a detailed description of this approach and its software implementation , the allogenomics scoring tool , available at http:\/\/allogenomics . campagnelab . org . ), Our approach was designed to consider the entire set of protein positions measured by a genotyping assay , or restrict the analysis to a subset of positions P in the genome ., In this study , we focused on the subset of genomic sites P that encode for amino acids in trans-membrane proteins ., It is possible that some secreted or intra-cellular proteins can contribute to the allogenomics response , but the set of trans-membrane proteins was considered in this study in order to enrich contributions for epitopes likely to be displayed at the surface of donor kidney cells ., While proteins expressed in kidney could appear to be a better choice , the technical challenge of defining a list of proteins expressed by kidney alone , and perhaps only transiently in some kidney cell type exposed to the surface of the kidney , argues against relying on a kidney expression filter ., Similarly , we did not consider other sets of proteins , and make no claim that the set of transmembrane proteins is an optimal choice ., Because the AMS sums contributions from thousands of genomic sites across 
the genome , it is an example of a burden test , albeit summed across an entire exome ., The procedure is akin to averaging and the resulting score is much less sensitive to errors introduced by the genotyping assays or analysis approach than in previous association studies , which considered genotypes individually ., The AMS approach yields a single score per transplant ., This eliminates the need to correct for tens of thousands of statistical tests , which are common in classical association studies ., The allogenomics approach therefore also decreases the number of samples needed to reach statistical power ., In order to test the allogenomics hypothesis , we isolated DNA from kidney graft recipients and their living donors ., We assembled three cohorts: a Discovery Cohort ( 10 transplant pairs ) where the allogenomics observation was first made ( these patients were a subset of patients enrolled in a multicenter Clinical Trial in Organ Transplantation-04 study of urinary cell mRNA profiling , from whom tissue\/cells were collected for future mechanistic studies 7 ) , and two validation cohorts: one from recipients transplanted at the New York Presbyterian Weill Cornell Medical Center ( Cornell Validation Cohort , 24 pairs ) , and a second validation cohort from recipients transplanted in Paris hospitals ( French Validation Cohort , 19 pairs ) ., Table 1 provides demographic and clinical information about the patients included in our study ., Exome data were obtained for each cohort ., For the Discovery cohort , we used the Illumina TruSeq exome enrichment kit v3 , covering 62Mb of the human genome ., For the two validation cohorts , DNA sequencing was performed using the Agilent Haloplex assay covering 37Mb of the coding sequence of the human genome ., Primary sequence data analyses were conducted with GobyWeb 8 ( data and analysis management ) , Last 9 ( alignment to the genome ) and Goby 10 ( genotype calls ) ., Table A in S1 File provides
statistics of coverage for the exome assays ., Kidney graft function is a continuous phenotype and is clinically evaluated by measuring serum creatinine levels or using estimated glomerular filtration rate ( eGFR ) 11 ., In this study , kidney graft function was evaluated at several time points for each recipient , with the precise time points varying by cohort ., In the discovery cohort , kidney allograft function was measured at 12 , 24 , 36 and 48 months following transplantation using serum creatinine levels and eGFR , calculated using the 2011 MDRD 11 formula ., We examined whether the allogenomics mismatch score is associated with post-transplantation allograft function ., In Fig 2 , we illustrate the association observed between AMS and creatinine levels or eGFR in the Discovery Cohort ., We found positive linear associations between the allogenomics mismatch score and serum creatinine level at 36 months post transplantation ( r2 adj . = 0 . 78 , P = 0 . 002 , n = 10 ) but not at 12 or 24 months following kidney transplantation ( Fig 2A , 2B and 2C ) ., We also found a negative linear relationship between the score and eGFR at 36 months post transplantation ( r2 adj . = 0 . 57 , P = 0 . 
02 ) but not at 12 or 24 months following kidney transplantation ( Fig 2D , 2E and 2F ) ., These findings suggest that in the Discovery cohort the AMS is predictive of long-term graft function ., It is also possible that the AMS would predict short-term graft function , but that more data are needed to detect smaller changes in eGFR at early time points , whereas cumulative effects on graft function become detectable at later time points ., Similar observations were made in the two validation cohorts ( see Figures A and B in S1 File ) and discussed in detail in an earlier preprint 12 ., In the models presented so far , we have considered the prediction of graft function separately at different time points ., An alternative analysis would consider time since transplantation , as well as other established predictors of graft function , as covariates in the model ., This is particularly useful when studying cohorts where graft function was assessed at several distinct time points ( e . g . , in the French cohort , clinical data describe graft function from 1 to 96 months post transplantation , but few time points have observations for all recipients ) ., To implement this alternative analysis , we fit a mixed linear model of the form: eGFR ~ donor age at time of transplant + AMS + T + ( 1|P ) ( Equation 3 ) , where T is the time post-transplantation , measured in months , and ( 1|P ) is a random effect that models a separate intercept for each donor\/recipient pair ., To determine the effect of AMS on eGFR , we compared the fit of models that did or did not include the AMS ., We found that the effect of AMS is significant ( P = 0 . 0042 , \u03c72 = 8 . 1919 , d . f . = 1 ) ., A similar result was obtained if HLA was also used as a covariate in the model ( i . e . , eGFR ~ donor age at time of transplant + AMS + T + HLA + ( 1|P ) ( Equation 4 ) ; comparing the model with AMS to the model without , P = 0 . 038 , \u03c72 = 4 . 284 , d . f .
= 1 ) ., In contrast , models that included AMS , but did or did not include the number of ABDR HLA mismatches fit the data equally well ( testing the effect of HLA , P = 0 . 60 , \u03c72 = 0 . 2737 , d . f . = 1 ) , confirming that the effect of AMS was independent of the number of HLA mismatches ., The models of equations 3 and 4 include a random effect for the transplant pair ( 1|P ) term ., This term models the differences among pairs , such as level of graft function in the days post-transplantation , as well as correlations between repeated measurements for the same recipient ., See Fig C in S1 File for a more direct comparison between AMS and HLA ABDR mismatches ., This comparison indicates that there is a moderate correlation between AMS and the number of HLA ABDR mismatches ., Taken together , these results indicate that the predictive ability of the AMS effect is mostly independent of the number of ABDR mismatches at the HLA loci ., In order to determine if the AMS effect is robust , we fit the model from equation 3 in each cohort independently ., The estimates for the AMS effect are shown in Table 2 ., Despite a limited amount of data to fit the model in each cohort , the estimates are very similar , strongly suggesting that the AMS effect is robust and can be observed even in small cohorts ( 10 , 19 and 24 transplant pairs ) ., In Fig D in S1 File we plot the minor allele frequencies ( MAF ) of the variations that contribute to the AMS in the Discovery and Validation cohorts ., We find that many polymorphisms that contribute to the AMS have low MAF , indicating that they are rare in human populations ., This point needs to be considered for replication studies ., For instance , GWAS genotyping platforms may require adequate imputation to infer polymorphisms with low MAF ., Table 3 presents confidence intervals for the parameters of the full model ( equation 4 , including HLA term ) , fit across 53 transplant pairs , as well as the effective range of 
each of the model predictors ., The table shows the expected impact of each predictor on eGFR when this predictor is varied over its range , assuming all other predictors are kept constant ., For instance , assume that donor age at time of transplant varies from 20 years old to 80 years old ( range: 60 ) ., Across this range , eGFR will decrease by an estimated 28 units as the donor gets older ., The AMS effect has an effective range of 1 , 700 and the corresponding eGFR decrease is 19 units ., This comparison indicates that the strength of the AMS effect is similar to that of donor age and more than five times larger than the effect of HLA-ABDR mismatches ., While HLA-matching is a necessary requirement for successful hematopoietic cell transplants , full HLA compatibility is not an absolute prerequisite for all types of transplantations , as indicated by the thousands of solid organ transplants performed yearly despite lack of full matching between the donor and recipient at the HLA-A , B and DR loci ., In view of better patient survival following transplantation compared to dialysis , kidney transplants have become the standard of care for patients with end stage kidney disease and transplants are routinely performed with varying degrees of HLA-class I and II mismatches ., Although graft outcomes improve with better HLA-matching 13 , excellent long-term graft outcomes with stable graft function have been observed in patients with full HLA-ABDR mismatches ., The success of these transplants clearly suggests that factors other than HLA compatibility may influence the long-term clinical outcome of kidney allografts ., Furthermore , grafts do fail even with the best HLA match 13 , suggesting that antigens other than HLA are targets of the alloimmune response ., Indeed , several non-HLA antibodies have been identified for renal and cardiac allograft recipients and found detrimental to long-term outcome 14 , 15 ., These antibodies were found to target antigens expressed
on endothelial and epithelial cells but also on a variety of parenchymal and immune cells and can be measured prior to transplantation ., These prior studies support the notion that non-HLA antibodies can influence long-term outcome in transplantation ., Recipients of a kidney transplant have two genomes in their body: their germline DNA , and the DNA of the donor ., It is clear that a Mendelian genetic transmission mechanism is not at play in transplantation , yet , this assumption has been made in most of the transplantation genomic studies published to date 16 , 17 ., While several case-control studies have been conducted with large organ transplant cohorts , the identification of genotype\/phenotype associations has been limited to the discoveries of polymorphisms with small effect , that have been reviewed in 18 , and have often not been replicated 19\u201321 ., Rather than focusing on specific genomic sites , the allogenomics concept sums contributions of many mismatches that can impact protein sequence and structure and could engender an immune response in the graft recipient ., These allogenomics mismatches , captured in our study , represent the sequences of non-HLA trans-membrane proteins , some of which may help initiate cellular and humoral immunity directed at the allograft ., This study used eGFR as a surrogate marker for long-term graft survival ., The advantage of focusing on eGFR is that it is measured as part of clinical care on a yearly basis for each recipient , and eGFR has been associated with long-term outcome in multiple studies ., Since acute rejection has also been associated with a decrease in long-term graft survival , it may also serve as a surrogate marker for long-term kidney allograft survival ., Acute rejection however is a rare event with current immunosuppressive regimens and given the relatively small size of our study cohort , we would not have had sufficient cases to examine the association between acute rejection and the 
allogenomics score ., Another consideration for not using acute rejection is that acute rejection only represents a fraction of the mechanisms that lead to graft loss 22 ., The allogenomics concept that we present in this manuscript postulates a mechanism for the development of the immune response in the transplant recipient: immunological and biophysical principles strongly suggest that alleles present in the donor genome , but not in the recipient genome , will have the potential to produce epitopes that the recipient immune system will recognize as non-self ., This reasoning explains why the allogenomics score is not equivalent to the genetic measures of allele sharing distance that have been used to perform genetic clustering of individuals 23 ., This manuscript also suggests that allogenomic mismatches in proteins expressed at the surface of donor cells could explain why some recipients\u2019 immune systems mount an attack against the donor organ , while other patients tolerate the transplant for many years , when given similar immunosuppressive regimens ., If the results of this study are confirmed in additional independent transplant cohorts ( renal transplants , solid organ or hematopoietic cell transplants ) , they may prompt the design of prospective clinical trials to evaluate whether allocating organs to recipients with a combination of low allogenomics mismatch scores and different HLA mismatch scores improves long-term graft outcome ., A positive answer to this question could profoundly impact the current clinical and regulatory framework for assigning organs to ESRD patients ., In this study , we introduced the allogenomics concept to quantitatively estimate the histoincompatibility between living donor and recipient outside of the HLA loci ., We tested the simplest model derived from this concept to calculate an allogenomics mismatch score ( AMS ) reflecting the possible donor specific epitopes displayed on the cell surface ., We demonstrated that the AMS
, which can be estimated before transplantation , helps predict post-transplantation kidney graft function more accurately than HLA-mismatches alone ., Interestingly , the strength of the correlation increases with the time post transplantation , an intriguing finding observed in both the discovery cohort and the validation cohorts ., We chose the simplest model to test the allogenomics concept and did not restrict the score to contributions from the peptides that can fit in the HLA groove despite their computational predictability 24 ., It is possible that such restriction would increase the score\u2019s ability to predict renal function post transplantation ., However , such a filter assumes that HLA and associated peptides are the only stimuli for the anti-allograft response and does not take into consideration allorecognition involving innate effectors ( NK cells or NKT cells for example , the Killer-cell Immunoglobulin-like Receptor KIR genes , iTCR , the invariant T Cell Receptor , and TLR , Toll Like Receptor , among others ) 25 ., The allogenomics concept incorporating amino acid mismatches capable of triggering adaptive as well as innate immunity could be considered an important strength of the approach ., Recent evidence indicates that mutations in splice sites , although rare , are responsible for a large proportion of disease risk 26 ., The allogenomics approach presented in this manuscript does not incorporate knowledge of how polymorphisms in splice sites affect protein sequences ., We anticipate that future developments would consider longer splice forms in the donor as allogenomics ., Such an approach could score additional donor protein residues as allogenomics mismatches when the sequence is not present in the predicted proteome of the recipient ., We chose to focus this study on living , ABO compatible ( either related or non-related ) donors because kidney transplantation can be planned in advance and because differences in cold ischemia times 
and other covariates common in deceased donor transplants are negligible when focusing on living donors , especially in small cohorts ., The selection criteria for deceased donors include consideration of HLA matching , calculated panel reactive antibody and the age of the recipient ., Compared to live donors , we expect that the range of the AMS in deceased donors will be comparable to that in our discovery cohort , composed primarily of unrelated donors ., Since many additional factors can independently influence graft function after transplantation from a deceased donor ( e . g . cold ischemia time ) , potentially much larger cohorts may be required in such settings to achieve sufficient power to adequately control for the covariates relevant to deceased donors and to detect the allogenomics effect ., While we have not attempted to optimize the set of sites considered to estimate the allogenomics mismatch score , it is possible that a reduced and more focused subset of amino acid mismatches could increase the predictive ability of the score ., For instance , the AMS could be applied to look for genes with a high allogenomic mismatch burden ., Such studies would require larger cohorts and may enable the discovery of loci enriched in allogenomics mismatches responsible for a part of the recipient alloresponse against yet unsuspected donor antigens ., Their discovery might foster the development of new immunosuppressive agents targeting the expression of these immuno-dominant epitopes ., However , our study also raises a novel mechanistic hypothesis: the total burden of allogenomics mismatches might be more predictive of graft function than mismatches at specific loci , as was previously widely expected 17 ., The study was reviewed and approved by the Weill Cornell Medical College Institutional Review Board ( protocol #1407015307 \u201cPredicting Long-Term Function of Kidney Allograft by Allogenomics Score\u201d , approved 09\/09\/2014 ) ., The second study
involving the French cohort was approved by the Comit\u00e9 de Protection des Personnes ( CPP ) , Ile de France 5 , ( 02\/09\/2014 ) ., Codes were used to ensure donor and recipient anonymity ., All subjects gave written informed consent ., Living donor ABO compatible kidney transplantations were performed according to common immunological rules for kidney transplantation with a mandatory negative IgG T-cell complement-dependent cytotoxicity cross-match ., Briefly , genotypes of donors and recipients were assayed by exome sequencing ( Illumina TruSeq enrichment kit for the Discovery Cohort and Agilent Haloplex kit for the Cornell Validation Cohort and the French Validation Cohort ) ., Reads were aligned to the human genome with the Last 9 aligner integrated as a plugin in GobyWeb 8 ., Genotype calls were made with Goby 10 and GobyWeb 8 ., Prediction of polymorphism impact on the protein sequence were performed with the Variant Effect Predictor 27 ., Genes that contain at least one transmembrane segment were identified using Ensembl Biomart 28 ., We selected 10 kidney transplant recipients from those who had consented to participate in the Clinical Trials in Organ Transplantation-04 ( CTOT-04 ) , a multicenter observational study of noninvasive diagnosis of renal allograft rejection by urinary cell mRNA profiling ., We included only the recipients who had a living donor kidney transplant and along with their donors , had provided informed consent for the use of their stored biological specimens for future research ., Pairs were limited to those where enough DNA could be extracted to perform the exome assay for both donor and recipient ., Subjects were not selected on the basis of eGFR , whose values were collected after obtaining sequence data ., The demographic and clinical information of the Discovery cohort is shown in Table 1 ., DNA was extracted from stored peripheral blood using the EZ1 DNA blood kit ( Qiagen ) based on the manufacturer\u2019s recommendation 
., DNA was enriched for exome regions with the TruSeq exome enrichment kit v3 ., Sequencing libraries were constructed using the Illumina TruSeq DNA sample preparation kit ., Briefly , 1 . 8 \u03bcg of genomic DNA was sheared to an average fragment size of 200 bp using the Covaris E220 ( Covaris , Woburn , MA , USA ) ., Fragments were purified using AMPure XP beads ( Beckman Coulter , Brea , CA , USA ) to remove small products ( <100 bp ) , yielding 1 \u03bcg of material that was end-polished , A-tailed and adapter ligated according to the manufacturer\u2019s protocol ., Libraries were subjected to minimal PCR cycling and quantified using the Agilent High Sensitivity DNA assay ( Agilent , Santa Clara , CA , USA ) ., Libraries were combined into pools of six for solution phase hybridization using the Illumina ( Illumina , San Diego , CA , USA ) TruSeq Exome Enrichment Kit ., Captured libraries were assessed for both quality and yield using the Agilent High Sensitivity DNA assay Library Quantification Kit ., Sequencing was performed with six samples per lane using the Illumina HiSeq 2000 sequencer and version 2 of the sequencing-by-synthesis reagents to generate 100 bp single-end reads ( 1\u00d7100SE ) ., We studied 24 kidney transplant recipients who had a living donor transplant at the NewYork-Presbyterian Weill Cornell Medical Center ., This was an independent cohort and none of the recipients had participated in the CTOT-04 trial ., Recipients were selected randomly based on the availability of archived paired recipient-donor DNA specimens obtained at the time of transplantation at our Immunogenetics and Transplantation Laboratory ., DNA extraction from peripheral blood was done using the EZ1 DNA blood kit ( Qiagen ) based on the manufacturer\u2019s recommendation ., We studied 19 kidney transplant recipients who had a living donor transplant at Tenon Hospital ., This represented a third independent cohort ., Recipients were selected randomly based on the
availability of archived paired recipient-donor DNA specimens obtained either at the Laboratoire d\u2019histocompatibilit\u00e9 , H\u00f4pital Saint Louis APHP , Paris or during patient\u2019s follow-up between October 2014 and January 2015 ., DNA extraction from peripheral blood was done using the NucleoSpin Blood L kit ( Macherey-Nagel ) based on the manufacturer\u2019s recommendation ., The Cornell and French Validation cohorts were both assayed with the Agilent Haloplex exome sequencing assay ., The Haloplex assay enriches 37 Mb of coding sequence in the human genome and was selected for the validation cohort because it provides a strong and consistent exome enrichment efficiency for regions of the genome most likely to contribute to the allogenomics contributions in protein sequences ., In contrast , the TruSeq assay ( used for the Discovery Cohort ) enriches 63 Mb of sequence and includes untranslated regions ( 5\u2019 and 3\u2019 UTRs ) , which do not contribute to allogenomics scores and therefore do not need to be sequenced to estimate the score ., Libraries were prepared as per the Agilent recommended protocol ., Sequencing was performed on an Illumina HiSeq 2500 sequencer with the 100 bp paired-end protocol recommended by Agilent for the Haloplex assay ., Libraries were multiplexed 6 per lane to yield approximately 30 million paired-end reads per sample ., We determined the minor allele frequency of sites used in the calculation of the allogenomics mismatch score using data from the Exome Aggregation Consortium ( ExAC ) ., This resource made it possible to estimate MAF for most of the variations that are observed in the subjects included in our discovery and validation cohort ., Data were downloaded and analyzed with R and MetaR scripts ( see analysis scripts provided at https:\/\/bitbucket .
org\/campagnelaboratory\/allogenomicsanalyses ) ., We used the NHLBI Exome Sequencing Project ( ESP ) release ESP6500SI-V2 30 ., The ESP measured genotypes in a population of 6 , 503 individuals across the EA and AA populations using an exome-sequencing assay 30 ., Of 12 , 657 sites measured in the validation cohort with an allogenomics contribution strictly larger than zero ( 48 exomes , sites with contributions across 24 clinical pairs of transplants ) , 9 , 765 ( 78% ) have also been reported in ESP ( 6 , 503 exomes ) ., Illumina sequence base calling was performed at the Weill Cornell Genomics Core Facility ., Sequence data in FASTQ format were converted to the compact-reads format using the Goby framework 14 ., Compact-reads were uploaded to the GobyWeb 8 system and aligned to the 1000 Genomes reference build for the human genome ( corresponding to hg19 , released in February 2009 ) using the Last 9 , 31 aligner ( parallelized in a GobyWeb 8 plugin ) ., Single nucleotide polymorphisms ( SNPs ) and small indel genotypes were called using GobyWeb with the Goby 32 discover-sequence-variants mode ( parameters: minimum variation support = 3 , minimum number of distinct read indices = 3 ) and annotated using the Variant Effect Predictor 27 ( VEP version 75\u201375 . 7 ) from Ensembl ., The data were downloaded as a Variant Call Format 33 ( VCF ) file from GobyWeb 8 and further processed with the allogenomics scoring tool ( see http:\/\/allogenomics . campagnelab . org ) ., The allogenomics mismatch score \u0394 ( r , d ) is estimated for a recipient r and donor d as the sum of score mismatch contributions ( see Fig 1C and supplementary methods in S1 File ) ., Analyses were conducted with either JMP Pro version 11 ( SAS Inc . ) or metaR ( http:\/\/metaR . campagnelab .
org ) ., Fig 2 as well as Figures in S1 File were constructed with metaR analysis scripts and edited with Illustrator CS6 to increase some font sizes or adjust the text of some axis labels ., The model that includes the time post-transplantation as a covariate was constructed in metaR and JMP ., The R implementation of the train linear model statement uses the lm R function ., This model was executed using the R language 3 . 1 . 3 ( 2015-03-09 ) packaged in the docker image fac2003\/rocker-metar:1 . 4 . 0 ( https:\/\/hub . docker . com\/r\/fac2003\/rocker-metar\/ ) ., Models with random effects were estimated with metaR 1 . 5 . 1 and R ( train mixed model and compare mixed models statements , which use the lme4 R package 34 ) ., Comparison of fit for models with random effects was obtained by training each model alternative with REML = FALSE and performing an ANOVA test , as described in 35 ., We distribute the code necessary to reproduce most of the analysis presented in this manuscript at https:\/\/bitbucket .
org\/campagnelaboratory\/allogenomicsanalyses .","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Current strategies to improve graft outcome following kidney transplantation consider information at the human leukocyte antigen ( HLA ) loci ., Cell surface antigens , in addition to HLA , may serve as the stimuli as well as the targets for the anti-allograft immune response and influence long-term graft outcomes ., We therefore performed exome sequencing of DNA from kidney graft recipients and their living donors and estimated all possible cell surface antigens mismatches for a given donor\/recipient pair by computing the number of amino acid mismatches in trans-membrane proteins ., We designated this tally as the allogenomics mismatch score ( AMS ) ., We examined the association between the AMS and post-transplant estimated glomerular filtration rate ( eGFR ) using mixed models , considering transplants from three independent cohorts ( a total of 53 donor-recipient pairs , 106 exomes , and 239 eGFR measurements ) ., We found that the AMS has a significant effect on eGFR ( mixed model , effect size across the entire range of the score: -19 . 4 -37 . 7 , -1 . 1 , P = 0 . 0042 , \u03c72 = 8 . 1919 , d . f . = 1 ) that is independent of the HLA-A , B , DR matching , donor age , and time post-transplantation ., The AMS effect is consistent across the three independent cohorts studied and similar to the strong effect size of donor age ., Taken together , these results show that the AMS , a novel tool to quantify amino acid mismatches in trans-membrane proteins in individual donor\/recipient pair , is a strong , robust predictor of long-term graft function in kidney transplant recipients .","summary":"The article describes a new concept to help match donor organs to recipients for kidney transplantation ., The concept relies on the ability to measure the individual DNA of potential donors and recipients ., When the data about genomes ( i . 
e . , DNA ) of possible donors and recipients are available , the article describes how data can be computationally compared to identify differences in these genomes and quantify the possible future impact of these differences on the functioning of the graft ., The concept presented in the article determines a score for each pair of possible donor and recipient ., This score is called the allogenomics mismatch score ., The study tested the ability of this score to predict graft function ( the ability of the graft to filter blood ) in the recipient several years after transplantation surgery ., The study found that , in three small sets of patients tested , the score is a strong predictor of graft function ., Prior studies often assumed that only a small number of locations in the genome were most likely to have an impact on graft function , while this study found initial evidence that differences across DNA that code for a large number of proteins can have a combined impact on graft function .","keywords":"urinary system procedures, medicine and health sciences, organ transplantation, immunology, biomarkers, human genomics, surgical and invasive medical procedures, clinical medicine, renal transplantation, genome analysis, kidneys, transplantation, immune system proteins, proteins, creatinine, biochemistry, anatomy, clinical immunology, transplantation immunology, genetics, biology and life sciences, renal system, genomics, computational biology, genomic medicine","toc":null} +{"Unnamed: 0":2265,"id":"journal.pcbi.1002267","year":2011,"title":"Dynamical and Structural Analysis of a T Cell Survival Network Identifies Novel Candidate Therapeutic Targets for Large Granular Lymphocyte Leukemia","sections":"Living cells perceive and respond to environmental perturbations in order to maintain their functional capabilities , such as growth , survival , and apoptosis ., This process is carried out through a cascade of interactions forming complex signaling networks ., 
Dysregulation ( abnormal expression or activity ) of some components in these signaling networks affects the efficacy of signal transduction and may eventually trigger a transition from the normal physiological state to a dysfunctional system 1 manifested as diseases such as diabetes 2 , 3 , developmental disorders 4 , autoimmunity 5 and cancer 4 , 6 ., For example , the blood cancer T-cell large granular lymphocyte ( T-LGL ) leukemia exhibits an abnormal proliferation of mature cytotoxic T lymphocytes ( CTLs ) ., Normal CTLs are generated to eliminate cells infected by a virus , but unlike normal CTLs , which undergo activation-induced cell death after they successfully fight the virus , leukemic T-LGL cells remain long-term competent 7 ., The cause of this abnormal behavior has been identified as dysregulation of a few components of the signal transduction network responsible for activation-induced cell death in T cells 8 ., Network representation , wherein the system\u2019s components are denoted as nodes and their interactions as edges , provides a powerful tool for analyzing many complex systems 9 , 10 , 11 ., In particular , network modeling has recently found ever-increasing applications in understanding the dynamic behavior of intracellular biological systems in response to environmental stimuli and internal perturbations 12 , 13 , 14 ., The paucity of knowledge on the biochemical kinetic parameters required for continuous models has called for alternative dynamic approaches ., Among the most successful approaches are discrete dynamic models in which each component is assumed to have a finite number of qualitative states , and the regulatory interactions are described by logical functions 15 ., The simplest discrete dynamic models are the so-called Boolean models that assume only two states ( ON or OFF ) for each component ., These models were originally introduced by S . Kauffman and R .
Thomas to provide a coarse-grained description of gene regulatory networks 16 , 17 ., A Boolean network model of T cell survival signaling in the context of T-LGL leukemia was previously constructed by Zhang et al 18 through performing an extensive literature search ., This network consists of 60 components , including proteins , mRNAs , and small molecules ( see Figure 1 ) ., The main input to the network is \u201cStimuli\u201d , which represents virus or antigen stimulation , and the main output node is \u201cApoptosis\u201d , which denotes programmed cell death ., Based on a random order asynchronous Boolean dynamic model of the assembled network , Zhang et al identified a minimal number of dysregulations that can cause the T-LGL survival state , namely overabundance or overactivity of the proteins platelet-derived growth factor ( PDGF ) and interleukin 15 ( IL15 ) ., Zhang et al carried out a preliminary analysis of the network\u2019s dynamics by performing numerical simulations starting from one specific initial condition ( corresponding to resting T cells receiving antigen stimulation and over-abundance of the two proteins PDGF and IL15 ) ., Once the known dysregulations in T-LGL leukemia were reproduced , each of these dysregulations was interrupted individually , by setting the node\u2019s status to the opposite state , to predict key mediators of the disease ., Yet , a complete dynamic analysis of the system , including identification of the attractors ( e . g .
steady states ) of the system and their corresponding basin of attraction ( precursor states ) , as well as a thorough perturbation analysis of the system considering all possible initial states , is lacking ., Performing this analysis can provide deeper insights into unknown aspects of T-LGL leukemia ., A stuck-at-ON\/OFF fault is a very common dysregulation of biomolecules in various cancers 19 ., For example , stuck-at-ON ( constitutive activation ) of the RAS protein in the mitogen-activated protein kinase pathways leads to aberrant cell proliferation and cancer 19 , 20 ., Thus identifying components whose stuck-at values result in the clearance , or alternatively , the persistence of a disease is extremely beneficial for the design of intervention strategies ., As there is no known curative therapy for T-LGL leukemia , identification of potential therapeutic targets is of utmost importance 21 ., In this paper , we carry out a detailed analysis of the T-LGL signaling network by considering all possible initial states to probe the long-term behavior of the underlying disease ., We employ an asynchronous Boolean dynamic framework and a network reduction method , which we previously proposed 22 , to identify the attractors of the system and analyze their basins of attraction ., This analysis allows us to confirm or predict the T-LGL states of 54 components of the network ., The predicted state of one of the components ( SMAD ) is validated by new wet-bench experiments ., We then perform node perturbation analysis using the dynamic approach and a structural method proposed in 23 to study to what extent each component contributes to T-LGL leukemia ., Both methods give consistent results and together identify 19 key components whose disruption can reverse the abnormal state of the signaling network , thereby uncovering potential therapeutic targets for this disease , some of which are also corroborated by experimental evidence ., Boolean models belong to
the class of discrete dynamic models in which each node of the network is characterized by an ON ( 1 ) or OFF ( 0 ) state and usually the time variable t is also considered to be discrete , i . e . it takes nonnegative integer values 24 , 25 ., The future state of each node vi is determined by the current states of the nodes regulating it according to a Boolean transfer function vi ( t + 1 ) = fi ( vi1 ( t ) , \u2026 , viki ( t ) ) , where ki is the number of regulators of vi ., Each Boolean function ( rule ) represents the regulatory relationships between the components and is usually expressed via the logical operators AND , OR and NOT ., The state of the system at each time step is denoted by a vector whose ith component represents the state of node vi at that time step ., The discrete state space of a system can be represented by a state transition graph whose nodes are states of the system and edges are allowed transitions among the states ., By updating the nodes\u2019 states at each time step , the state of the system evolves over time and following a trajectory of states it eventually settles down into an attractor ., An attractor can be in the form of either a fixed point , in which the state of the system does not change , or a complex attractor , where the system oscillates ( regularly or irregularly ) among a set of states ., The set of states leading to a specific attractor is called the basin of attraction of that attractor ., In order to evaluate the state of each node at a given time instant , synchronous as well as asynchronous updating strategies have been proposed 24 , 25 ., In the synchronous method all nodes of the network are updated simultaneously at multiples of a common time step ., The underlying assumption of this update method is that the timescales of all the processes occurring in a system are similar ., This is quite a strong and potentially unrealistic assumption , which in particular may not be suited for intracellular biological processes due to the variety of timescales associated with
transcription , translation and post-translational mechanisms 26 ., To overcome this limitation , various asynchronous methods have been proposed wherein the nodes are updated based on individual timescales 25 , 27 , 28 , 29 , 30 , including deterministic methods with fixed node timescales and stochastic methods such as random order asynchronous method 27 wherein the nodes are updated in random permutations ., In a previous work 22 , we carried out a comparative study of three different asynchronous methods applied to the same biological system ., That study suggested that the general asynchronous ( GA ) method , wherein a randomly selected node is updated at each time step , is the most efficient and informative asynchronous updating strategy ., This is because deterministic asynchronous 22 or autonomous 30 Boolean models require kinetic or timing knowledge , which is usually missing , and random order asynchronous models 27 are not computationally efficient compared to the GA models ., In addition , the superiority of the GA approach has been corroborated by other researchers 29 and the method has been used in other studies as well 31 , 32 ., We thus chose to employ the GA method in this work , and we implemented it using the open-source software library BooleanNet 33 ., It is important to note that the stochasticity inherent to this method may cause each state to have multiple successors , and thus the basins of attraction of different attractors may overlap ., For systems with multiple fixed-point attractors , the absorption probabilities to each fixed point can be computed through the analysis of the Markov chain and transition matrix associated with the state transition graph of the system 34 ., Given a fixed point , node perturbations can be performed by reversing the state of the nodes i . e . 
by knocking out the nodes that stabilize in an ON state in the fixed point or over-expressing the ones that stabilize in an OFF state ., A Boolean network with n nodes has a total of 2^n states ., This exponential dependence makes it computationally intractable to map the state transition graphs of even relatively small networks ., This calls for developing efficient network reduction approaches ., Recent efforts towards addressing this challenge consist of iteratively removing single nodes that do not regulate their own function and simplifying the redundant transfer functions using Boolean algebra 35 , 36 ., Naldi et al 35 proved that this approach preserves the fixed points of the system and that for each ( irregular ) complex attractor in the original asynchronous model there is at least one complex attractor in the reduced model ( i . e . network reduction may create spurious oscillations ) ., Boolean networks often contain nodes whose states stabilize in an attracting state after a transient period , regardless of updating strategy or initial conditions ., The attracting states of these nodes can be readily identified by inspection of their Boolean functions ., In a previous work 22 we proposed a method of network simplification by, ( i ) pinpointing and eliminating these stabilized nodes and, ( ii ) iteratively removing a simple mediator node ( e . g .
a node that has one incoming edge and one outgoing edge ) and connecting its input ( s ) to its target ( s ) ., Our simplification method shares similarities with the method proposed in 35 , 36 , with the difference that we only remove stabilized nodes ( which have the same state on every attractor ) and simple mediator nodes rather than eliminating each node without a self-loop ., Thus their proof regarding the preservation of the steady states by the reduction method holds true in our case ., We employed this simplification method for the analysis of a signal transduction network in plants and verified by using numerical simulations that it preserves the attractors of that system ., In this work , we employ this reduction method to simplify the T-LGL leukemia signal transduction network synthesized by Zhang et al 18 , thereby facilitating its dynamical analysis ., We also note that the first step of our simplification method is similar to the logical steady state analysis implemented in the software tool CellNetAnalyzer 37 , 38 ., We thus refer to this step as logical steady state analysis throughout the paper ., It should be noted that the fixed points of a Boolean network are the same for both synchronous and asynchronous methods ., In order to obtain the fixed points of a system one can solve the set of Boolean equations independent of time ., To this end , we first fix the state of the source nodes ., We then determine the nodes whose rules depend on the source nodes and that will either stabilize in an attracting state after a time delay or whose rules can be simplified significantly by plugging in the states of the source nodes ., Iteratively inserting the states of stabilized nodes in the rules ( i . e .
employing logical steady state analysis ) will result in either the fixed point ( s ) of the system , or the partial fixed point ( s ) and a remaining set of equations to be solved ., In the latter case , if the remaining set of equations is too large to obtain its fixed point ( s ) analytically , we take advantage of the second step of our reduction method 22 to simplify the resulting network and to determine a simpler set of Boolean rules ., By solving this simpler set of equations ( or performing numerical simulations , if necessary ) and plugging the solutions into the original rules , we can then find the states of the removed nodes and determine the attractors of the whole system accordingly ., For the analysis of basins of attraction of the attractors , we perform numerical simulations using the GA update method ., The topology ( structure ) and the function of biological networks are closely related ., Therefore , structural analysis of biological networks provides an alternative way to understand their function 39 , 40 ., We have recently proposed an integrative method to identify the essential components of any given signal transduction network 23 ., The starting point of the method is to represent the combinatorial relationship of multiple regulatory interactions converging on a node v by a Boolean rule v* = Bv ( u1 , u2 , \u2026 , uk ) , where the ui are the regulators of node v .
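The Boolean formalism used throughout ( ON\/OFF states , logical update rules , fixed points , and general asynchronous updating ) can be illustrated on a toy network ., The following is a minimal sketch under those definitions , not the authors\u2019 BooleanNet-based implementation of the 60-node T-LGL model:

```python
# Minimal sketch of the Boolean formalism described above: nodes are ON (1)
# or OFF (0), rules are logical functions of the regulators, fixed points
# are states invariant under every rule, and the general asynchronous (GA)
# scheme updates one randomly chosen node per step. Toy 3-node network,
# not the 60-node T-LGL model (which was simulated with BooleanNet).
import itertools
import random

# State is a tuple (A, B, C): A is a source node, A activates B,
# and B and C mutually inhibit each other.
rules = {
    0: lambda s: s[0],               # A* = A (source node keeps its state)
    1: lambda s: s[0] and not s[2],  # B* = A AND NOT C
    2: lambda s: not s[1],           # C* = NOT B
}

def fixed_points(rules, n):
    """Enumerate all 2^n states and keep those invariant under every rule."""
    return [s for s in itertools.product((0, 1), repeat=n)
            if all(int(bool(f(s))) == s[i] for i, f in rules.items())]

def ga_trajectory(rules, state, steps=200, seed=0):
    """General asynchronous (GA) updating: one random node per time step."""
    rng = random.Random(seed)
    state = list(state)
    for _ in range(steps):
        i = rng.randrange(len(state))
        state[i] = int(bool(rules[i](state)))
    return tuple(state)

print(fixed_points(rules, 3))           # [(0, 0, 1), (1, 0, 1), (1, 1, 0)]
print(ga_trajectory(rules, (1, 0, 0)))  # reaches (1, 0, 1) or (1, 1, 0)
```

With the source node A fixed at ON , the B\/C toggle gives two coexisting fixed points , a small-scale analogue of the normal and disease fixed points of the full network , and which one a GA trajectory reaches depends on the random update order .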
The method consists of two main steps ., The first step is the expansion of a signaling network to a new representation by incorporating the sign of the interactions as well as the combinatorial nature of multiple converging interactions ., This is achieved by introducing a complementary node for each component that plays a role in negative regulations ( NOT operation ) as well as introducing a composite node to denote conditionality among two or more edges ( AND operation ) ., This step eliminates the distinction of the edge signs; that is , all directed edges in the expanded network denote activation ., In addition , the AND and OR operators can be readily distinguished in the expanded network , i . e . , multiple edges ending at composite nodes are added by the AND operator , while multiple edges ending at original or complementary nodes are cumulated by the OR operator ., The second step is to model the cascading effects following the loss of a node by an iterative process that identifies and removes nodes that have lost their indispensable regulators ., These two steps allow ranking of the nodes by the effects of their loss on the connectivity between the network\u2019s input ( s ) and output ( s ) ., We proposed two connectivity measures in 23 , namely the simple path ( SP ) measure , which counts the number of all simple paths from inputs to outputs , and a graph measure based on elementary signaling modes ( ESMs ) , defined as a minimal set of components that can perform signal transduction from initial signals to cellular responses ., We found that the combinatorial aspects of ESMs pose a substantial obstacle to counting them in large networks and that the SP measure has a performance similar to that of the ESM measure since both measures incorporate the cascading effects of a node\u2019s removal arising from the synergistic relations between multiple interactions ., Therefore , we employ the SP measure and define the importance value of a component v as IV ( v ) = 1 - NSP ( G\u0394v ) \/ NSP ( Gexp ) , where NSP ( Gexp )
and NSP ( G\u0394v ) denote the total number of simple paths from the input ( s ) to the output ( s ) in the original expanded network Gexp and the damaged network G\u0394v upon disruption of node v , respectively ., This essentiality measure takes values in the interval [ 0 , 1 ] , with 1 indicating a node whose loss causes the disruption of all paths between the input and output node ( s ) ., In this paper , we also make use of this structural method to identify essential components of the T-LGL leukemia signaling network ., We then relate the importance value of nodes to the effects of their knockout ( sustained OFF state ) in the dynamic model and the importance value of complementary nodes to the effects of their original nodes\u2019 constitutive activation ( sustained ON state ) in the dynamic model ., The T-LGL signaling network reconstructed by Zhang et al 18 contains 60 nodes and 142 regulatory edges ., Zhang et al used a two-step process: they first synthesized a network containing 128 nodes and 287 edges by an extensive literature search , then simplified it with the software NET-SYNTHESIS 42 , which constructs the sparsest network that maintains all of the causal ( upstream-downstream ) effects incorporated in a redundant starting network ., In this study , we work with the 60-node T-LGL signaling network reported in 18 , which is redrawn in Figure 1 ., The Boolean rules for the components of the network were constructed in 18 by synthesizing experimental observations and for convenience are given in Table S1 as well ., Descriptions of the node names and abbreviations are provided in Table S2 ., To reduce the computational burden associated with the large state space ( more than 10^18 states for 60 nodes ) , we simplified the T-LGL network using the reduction method proposed in 22 ( see Materials and Methods ) ., We fixed the six source nodes in the states given in 18 , i . e .
Stimuli , IL15 , and PDGF were fixed at ON and Stimuli2 , CD45 , and TAX were fixed at OFF ., We used the Boolean rules constructed in 18 , with one notable difference ., The Boolean rules for all the nodes in 18 , except Apoptosis , contain the expression \u201cAND NOT Apoptosis\u201d , meaning that if Apoptosis is ON , the cell dies and correspondingly all other nodes are turned OFF ., To focus on the trajectory leading to the initial turning on of the Apoptosis node , we removed the \u201cAND NOT Apoptosis\u201d from all the logical rules ., This allows us to determine the stationary states of the nodes in a live cell ., We determined which nodes\u2019 states stabilize using the first step of our simplification method , i . e . logical steady state analysis ( see Materials and Methods ) ., Our analysis revealed that 36 nodes of the network stabilize in either an ON or OFF state ., In particular , Proliferation and Cytoskeleton signaling , two output nodes of the network , stabilize in the OFF and ON state , respectively ., Low proliferation in leukemic LGL has been observed experimentally 43 , which supports our finding of a long-term OFF state for this output node ., The ON state of Cytoskeleton signaling may not be biologically relevant as this node represents the ability of T cells to attach and move , which is expected to be reduced in leukemic T-LGL compared to normal T cells ., The nodes whose stabilized states cannot be readily obtained by inspection of their Boolean rules form the sub-network represented in Figure 2A ., The Boolean rules of these nodes are listed in Table S3 wherein we put back the \u201cAND NOT Apoptosis\u201d expression into the rules ., Next , we identified the attractors ( long-term behavior ) of the sub-network represented in Figure 2A ( see Materials and Methods ) ., We found that upon activation of Apoptosis all other nodes stabilize at OFF , forming the normal fixed point of the system , which represents the normal behavior of programmed
cell death ., When Apoptosis is stabilized at OFF , the two nodes in the top sub-graph oscillate while all the nodes in the bottom sub-graph are stabilized at either ON or OFF ., As shown in Figure 3 , the state space of the two oscillatory nodes , TCR and CTLA4 , forms a complex attractor in which the average fraction of ON states for either node is 0 . 5 ., Given that these two nodes have no effect on any other node under the conditions studied here ( i . e . stable states of the source nodes ) , their behavior can be separated from the rest of the network ., The bottom sub-graph exhibits the normal fixed point , as well as two T-LGL ( disease ) fixed points in which Apoptosis is OFF ., The only difference between the two T-LGL fixed points is that the node P2 is ON in one fixed point and OFF in the other , which was expected due to the presence of a self-loop on P2 in Figure 2A ., P2 is a virtual node introduced to mediate the inhibition of interferon-\u03b3 translation in the case of sustained activity of the interferon-\u03b3 protein ( IFNG in Figure 2A ) ., The node IFNG is also inhibited by the node SMAD which stabilizes in the ON state in both T-LGL fixed points ., Therefore IFNG stabilizes at OFF , irrespective of the state of P2 , as supported by experimental evidence 44 ., Thus the biological difference between the two fixed points is essentially a memory effect , i . e . the ON state of P2 indicates that IFNG was transiently ON before stabilizing in the OFF state ., In the two T-LGL fixed points for the bottom sub-graph of Figure 2A , the nodes sFas , GPCR , S1P , SMAD , MCL1 , FLIP , and IAP are ON and the other nodes are OFF ., We found by numerical simulations using the GA method ( see Materials and Methods ) that out of 65 , 536 total states in the state transition graph , 53% are in the exclusive basin of attraction of the normal fixed point , 0 . 24% are in the exclusive basin of attraction of the T-LGL fixed point wherein P2 is ON and 0 . 
03% are in the exclusive basin of attraction of the T-LGL fixed point wherein P2 is OFF ., Interestingly , there is a significant overlap among the basins of attraction of all the three fixed points ., The large basin of attraction of the normal fixed point is partly due to the fact that all the states having Apoptosis in the ON state ( that is , half of the total number of states ) belong to the exclusive basin of the normal fixed point ., These states are not biologically relevant initial conditions but they represent potential intermediary states toward programmed cell death and as such they need to be included in the state transition graph ., Since the state transition graph of the bottom sub-graph given in Figure 2A is too large to represent and to further analyze ( e . g . to obtain the probabilities of reaching each of the fixed points ) , we applied the second step of the network reduction method proposed in 22 ., This step preserves the fixed points of the system ( see Materials and Methods ) , and since the only attractors of this sub-graph are fixed points , the state space of the reduced network is expected to reflect the properties of the full state space ., Correspondingly , the nodes having in-degree and out-degree of one ( or less ) in the sub-graph on Figure 2A , such as sFas , MCL1 , IAP , GPCR , SMAD , and CREB , can be safely removed without losing any significant information as such nodes at most introduce a delay in the signal propagation ., In addition , we note that although the node P2 has a self-loop and generates a new T-LGL fixed point as described before , it can also be removed from the network since the two fixed points differ only in the state of P2 and thus correspond to biologically equivalent disease states ., We revisit this node when enumerating the attractors of the original network ., In the resulting simplified network , the nodes BID , Caspase , and IFNG would also have in-degree and out-degree of one ( or less ) and thus 
can be safely removed as well ., This reduction procedure results in a simple sub-network represented in Figure 2B with the Boolean rules given in Table 1 ., Our attractor analysis revealed that this sub-network has two fixed points , namely 000001 and 110000 ( the digits from left to right represent the state of the nodes in the order as listed from top to bottom in Table 1 ) ., The first fixed point represents the normal state , that is , the apoptosis of CTL cells ., Note that the OFF state of other nodes in this fixed point was expected because of the presence of “AND NOT Apoptosis” in all the Boolean rules ., The second fixed point is the T-LGL ( disease ) one as Apoptosis is stabilized in the OFF state ., We note that the sub-network depicted in Figure 2B contains a backbone of activations from Fas to Apoptosis and two nodes ( S1P and FLIP ) which both have a mutual inhibitory relationship with the backbone ., If activation reaches Apoptosis , the system converges to the normal fixed point ., In the T-LGL fixed point , on the other hand , the backbone is inactive while S1P and FLIP are active ., We found by simulations that for the simplified network of Figure 2B , 56% of the states of the state transition graph ( represented in Figure 4 ) are in the exclusive basin of attraction of the normal fixed point while 5% of the states form the exclusive basin of attraction of the T-LGL fixed point ., Again , the half of state space that has the ON state of Apoptosis belongs to the exclusive basin of attraction of the normal fixed point ., Notably , there is a significant overlap between the basins of attraction of the two fixed points , which is illustrated by a gray color in Figure 4 ., The probabilities of reaching each of the two fixed points starting from these gray-colored states , found by analysis of the corresponding Markov chain ( see Materials and Methods ) , are given in Figure 5 .
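The fixed-point and exclusive-basin bookkeeping described above can be sketched in a few lines of Python. The rules below are a hypothetical three-node toy network (two mutually inhibitory nodes plus a follower), not the T-LGL Boolean rules of Table 1, and the exhaustive reachability search is an illustrative stand-in for the paper's general-asynchronous simulations:

```python
from itertools import product

def async_successors(rules, state):
    """General asynchronous updating: each transition updates one node."""
    succs = set()
    for i, f in enumerate(rules):
        nxt = list(state)
        nxt[i] = f(state)
        succs.add(tuple(nxt))
    return succs

def exclusive_basins(rules, n):
    """Fixed points and their exclusive basins of attraction.

    A state is in the exclusive basin of a fixed point if that fixed
    point is the only one reachable from it in the state transition graph."""
    states = list(product((0, 1), repeat=n))
    # A state is fixed iff every single-node update maps it to itself.
    fixed = {s for s in states if async_successors(rules, s) == {s}}
    basins = {fp: set() for fp in fixed}
    for s0 in states:
        seen, stack = {s0}, [s0]
        while stack:                      # depth-first reachability from s0
            s = stack.pop()
            for t in async_successors(rules, s):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        reachable_fps = seen & fixed
        if len(reachable_fps) == 1:       # exactly one attractor reachable
            basins[reachable_fps.pop()].add(s0)
    return fixed, basins

# Toy rules (hypothetical): A and B mutually inhibitory, C copies A.
rules = [
    lambda s: 1 - s[1],   # A* = NOT B
    lambda s: 1 - s[0],   # B* = NOT A
    lambda s: s[0],       # C* = A
]
fixed, basins = exclusive_basins(rules, 3)
```

In this toy network the two fixed points (1,0,1) and (0,1,0) each have an exclusive basin of two of the eight states, while the remaining four states can reach either fixed point, mirroring the basin overlap reported for the T-LGL sub-network.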
As this figure shows , for the majority of cases the probability of reaching the normal fixed point is higher than that of the T-LGL fixed point ., The three states whose probabilities to reach the T-LGL fixed point are greater than or equal to 0 . 7 are one step away either from the T-LGL fixed point or from the states in its exclusive basin of attraction ., In two of them , the backbone of the network in Figure 2B is inactive , and in the third one the backbone is partially inactive and most likely will remain inactive due to the ON state of S1P ( one of the two nodes having mutual inhibition with the backbone ) ., Based on the sub-network analysis , and considering the states of the nodes that stabilized early according to the logical steady state analysis , we conclude that the whole T-LGL network has three attractors , namely the normal fixed point wherein Apoptosis is ON and all other nodes are OFF , representing the normal physiological state , and two T-LGL attractors in which all nodes except two , i . e .
TCR and CTLA4 , are in a steady state , representing the disease state ., These T-LGL attractors are given in the second column of Table 2 , which presents the predicted T-LGL states of 54 components of the network ( all but the six source nodes whose state is indicated at the beginning of the Results section ) ., We note that the two T-LGL attractors essentially represent the same disease state since they only differ in the state of the virtual node P2 ., Moreover , this disease state can be considered as a fixed point since only two nodes oscillate in the T-LGL attractors ., For this reason we will refer to this state as the T-LGL fixed point ., It is expected that the basins of attraction of the fixed points have similar features as those of the simplified networks ., Experimental evidence exists for the deregulated states of 36 ( 67% ) components out of the 54 predicted T-LGL states as summarized in the third column of Table 2 ., For example , the stable ON state of MEK , ERK , JAK , and STAT3 indicates that the MAPK and JAK-STAT pathways are activated ., The OFF state of BID is corroborated by recent evidence that it is down-regulated both in natural killer ( NK ) and in T cell LGL leukemia 45 ., In addition , the node RAS was found to be constitutively active in NK-LGL leukemia 41 , which indirectly supports our result on the predicted ON state of this node ., For three other components , namely , GPCR , DISC , and IFNG , which were classified as being deregulated without clear evidence of either up-regulation or down-regulation in 18 , we found that they eventually stabilize at ON , OFF , and OFF , respectively ., The OFF state of IFNG and DISC is indeed supported by experimental evidence 44 , 46 ., In the second column of Table 2 , we indicated with an asterisk the stabilized state of 17 components that were experimentally undocumented before and thus are predictions of our steady state analysis ( P2 was not included as it is a virtual node ) ., We note 
that ten of these cases were also predicted in 18 by simulations ., The predicted T-LGL states of these 17 components can guide targeted experimental follow-up studies ., As an example of this approach , we tested the predicted over-activity of the node SMAD ( see Materials and Methods ) ., As described in 18 the SMAD node represents a merger of SMAD family members Smad 2 , 3 , and 4 ., Smad 2 and 3 are receptor-regulated signaling proteins which are phosphorylated and activated by type I receptor kinases while Smad4 is an unregulated co-mediator 47 ., Phosphorylated Smad2 and\/or Smad3 form heterotrimeric complexes with Smad4 and these complexes translocate to the nucleus and regulate gene expression ., Thus an ON state of SMAD in the model is a representation of the predominance of phosphorylated Smad2 and\/or phosphorylated Smad3 in T-LGL cells ., In relative terms as compared to normal ( resting or activated ) T cells , the predicted ON state implies a higher level of phosphorylated Smad2\/3 in T-LGL cells as compared to normal T cells ., Indeed , as shown in Figure 6 , T cells of T-LGL patients tend to have high levels of phosphorylated Smad2\/3 , while normal activated T cells have essentially no phosphorylated Smad2\/3 ., Thus our experiments validate the theoretical prediction ., A question of immense biological importance is which manipulations of the T-LGL network can result in consistent activation-induced cell death and the elimination of the dysregulated ( diseased ) behavior ., We can rephrase and specify this question as which node perturbations ( knockouts or constitutive activations ) lead to a system that has only the normal fixed point ., These perturbations can serve as candidates for potential therapeutic interventions ., To this end , we performed node perturbation analysis using both structural and dynamic methods ., In this paper we presented a comprehensive analysis of the T-LGL survival signaling network to unravel the unknown facets of
this disease ., By using a reduction technique , we first identified the fixed points of the system , namely the normal and T-LGL fixed points , which represent the healthy and disease states , respectively ., This analysis identified the T-LGL states of 54 components of the network , out of which 36 ( 67% ) are corroborated by previous experimental evidence and the rest are novel predictions ., These new predictions include RAS , PLCG1 , IAP , TNF , NFAT , GRB2 , FYN , SMAD , P27 , and Cytoskeleton signaling , which are predicted to stabilize at ON in T-LGL leukemia and GAP , SOCS , TRADD , ZAP70 , and CREB which are predicted to stabilize at OFF ., In addition , we found that the node P2 can stabilize in either the ON or OFF state , whereas two nodes , TCR and CTLA4 , oscillate ., We have experimentally validated the prediction that the node SMAD is over-active in leukemic T-LGL by demonstrating the predominant phosphorylation of the SMAD family members Smad2 and Smad3 ., The predicted T-LGL states of other nodes provide valuable guidance for targeted experimental follow-up studies of T-LGL leukemia ., Among the predicted states , the ON state of Cytoskeleton signaling may not be biologically relevant as this node represents the ability of T cells to attach and move which is expected to be reduced in leukemic T-LGL compared to normal T cells ., This discrepancy may be due to the fact that the network contains insufficient det","headings":"Introduction, Materials and Methods, Results, Discussion","abstract":"The blood cancer T cell large granular lymphocyte ( T-LGL ) leukemia is a chronic disease characterized by a clonal proliferation of cytotoxic T cells ., As no curative therapy is yet known for this disease , identification of potential therapeutic targets is of immense importance ., In this paper , we perform a comprehensive dynamical and structural analysis of a network model of this disease ., By employing a network reduction technique , we identify the 
stationary states ( fixed points ) of the system , representing normal and diseased ( T-LGL ) behavior , and analyze their precursor states ( basins of attraction ) using an asynchronous Boolean dynamic framework ., This analysis identifies the T-LGL states of 54 components of the network , out of which 36 ( 67% ) are corroborated by previous experimental evidence and the rest are novel predictions ., We further test and validate one of these newly identified states experimentally ., Specifically , we verify the prediction that the node SMAD is over-active in leukemic T-LGL by demonstrating the predominant phosphorylation of the SMAD family members Smad2 and Smad3 ., Our systematic perturbation analysis using dynamical and structural methods leads to the identification of 19 potential therapeutic targets , 68% of which are corroborated by experimental evidence ., The novel therapeutic targets provide valuable guidance for wet-bench experiments ., In addition , we successfully identify two new candidates for engineering long-lived T cells necessary for the delivery of virus and cancer vaccines ., Overall , this study provides a birds-eye-view of the avenues available for identification of therapeutic targets for similar diseases through perturbation of the underlying signal transduction network .","summary":"T-LGL leukemia is a blood cancer characterized by an abnormal increase in the abundance of a type of white blood cell called T cell ., Since there is no known curative therapy for this disease , identification of potential therapeutic targets is of utmost importance ., Experimental identification of manipulations capable of reversing the disease condition is usually a long , arduous process ., Mathematical modeling can aid this process by identifying potential therapeutic interventions ., In this work , we carry out a systematic analysis of a network model of T cell survival in T-LGL leukemia to get a deeper insight into the unknown facets of the disease ., We 
identify the T-LGL status of 54 components of the system , out of which 36 ( 67% ) are corroborated by previous experimental evidence and the rest are novel predictions , one of which we validate by follow-up experiments ., By deciphering the structure and dynamics of the underlying network , we identify component perturbations that lead to programmed cell death , thereby suggesting several novel candidate therapeutic targets for future experiments .","keywords":"biology, computational biology, signaling networks","toc":null}
{"Unnamed: 0":1739,"id":"journal.pntd.0005239","year":2017,"title":"Risk mapping of clonorchiasis in the People\u2019s Republic of China: A systematic review and Bayesian geostatistical analysis","sections":"Clonorchiasis is an important food-borne trematodiasis in Asia , caused by chronic infection with Clonorchis sinensis 1 , 2 ., Symptoms of clonorchiasis are related to worm burden , ranging from no or mild non-specific symptoms to liver and biliary disorders 3 , 4 ., C . sinensis is classified as a carcinogen 5 , as infection increases the risk of cholangiocarcinoma 6 ., Conservative estimates suggest that around 15 million people were infected with C . sinensis in 2004 , over 85% of whom were concentrated in the People\u2019s Republic of China ( P . R . China ) 6\u20138 ., It has also been estimated that , in 2005 , clonorchiasis caused a disease burden of 275 , 000 disability-adjusted life years ( DALYs ) , though light and moderate infections were excluded from the calculation 9 ., Therefore , two national surveys have been conducted for clonorchiasis in P . R . China; the first national survey was done in 1988\u20131992 and the second national survey in 2001\u20132004 ., Of note , the two surveys used an insensitive diagnostic approach with only one stool sample subjected to a single Kato-Katz thick smear ., The first survey covered 30 provinces\/autonomous regions\/municipalities ( P\/A\/M ) with around 1 .
5 million people screened , and found an overall prevalence of 0 . 37% 10 ., Data from the second survey , which took place in 31 P\/A\/M and screened around 350 , 000 people , showed an overall prevalence of 0 . 58% 7 ., Another dataset in the second national survey is a survey pertaining to clonorchiasis conducted in 27 endemic P\/A\/M using triplicate Kato-Katz thick smears from single stool samples ., The overall prevalence was 2 . 4% , corresponding to 12 . 5 million infected people 8 ., Two main endemic settings were identified; the provinces of Guangdong and Guangxi in the south and the provinces of Heilongjiang and Jilin in the north-east 1 , 2 , 6 ., In the latter setting , the prevalence was especially high in Korean ( minority ) communities ., In general , males showed higher infection prevalence than females and the prevalence increased with age 6 , 8 ., The life cycle of C . sinensis involves specific snails as first intermediate hosts , freshwater fish or shrimp as the second intermediate host , and humans or other piscivorous mammals as definitive hosts , who become infected through consumption of raw or insufficiently cooked infected fish 1 , 2 , 11 , 12 ., Behavioral , environmental , and socioeconomic factors that influence the transmission of C . sinensis or the distribution of the intermediate hosts affect the endemicity of clonorchiasis ., For example , temperature , rainfall , land cover\/usage , and climate change that affect the activities and survival of intermediate hosts , are considered as potential risk factors 13 , 14 ., Socioeconomic factors and consumption of raw freshwater fish are particularly important in understanding the epidemiology of clonorchiasis 15 ., Consumption of raw fish dishes is a deeply rooted cultural practice in some areas of P . R . 
China , while in other areas it has become popular in recent years , partially explained by perceptions that these dishes are delicious or highly nutritious 1 , 2 , 16 , 17 ., Treatment with praziquantel is one of the most important measures for the management of clonorchiasis , provided to infected individuals or entire at-risk groups through preventive chemotherapy 18 , 19 ., Furthermore , information , education , and communication ( IEC ) , combined with preventive chemotherapy , is suggested for maintaining control sustainability 20 ., Elimination of raw or insufficiently cooked fish or shrimp is an effective way for prevention of infection , but this strategy is difficult to implement due to deeply rooted traditions and perceptions 1 ., Environmental modification is an additional way of controlling clonorchiasis , such as by removing unimproved lavatories built adjacent to fish ponds in endemic areas , thus preventing water contamination by feces 1 , 21 ., Maps displaying where a specific disease occurs are useful to guide prevention and control interventions ., To our knowledge , only a province-level prevalence map of C . sinensis infection is available for P . R . China , while high-resolution , model-based risk estimates based on up-to-date survey data are currently lacking 1 ., Bayesian geostatistical modeling is a rigorous inferential approach to produce risk maps ., The utility of this method has been demonstrated for a host of neglected tropical diseases , such as leishmaniasis , lymphatic filariasis , schistosomiasis , soil-transmitted helminthiasis , and trachoma 22\u201328 ., The approach relies on the quantification of the association between disease risk at observed locations and potential risk factors ( e . g .
, environmental and socioeconomic factors ) , thus predicting infection risk in areas without observed data 28 ., Random effects are usually introduced to the regression equation to capture the spatial correlation between locations via a spatially structured Gaussian process 26 ., Here , we compiled available survey data on clonorchiasis in P . R . China , identified important climatic , environmental , and socioeconomic determinants , and developed Bayesian geostatistical models to estimate the risk of C . sinensis infection at high spatial resolution throughout the country ., This work is based on clonorchiasis survey data extracted from the peer-reviewed literature and national surveys in P . R . China ., All data were aggregated and do not contain any information at individual or household levels ., Hence , there are no specific ethical issues that warranted attention ., A systematic review was undertaken in PubMed , ISI Web of Science , China National Knowledge Internet ( CNKI ) , and Wanfang Data from January 1 , 2000 until January 10 , 2016 to identify studies reporting community , village , town , and county-level prevalence data of clonorchiasis in P . R . China ., The search terms were \u201cclonorchi*\u201d ( OR \u201cliver fluke*\u201d ) AND \u201cChina\u201d for Pubmed and ISI Web of Science , and \u201chuazhigaoxichong\u201d ( OR \u201cganxichong\u201d ) for CNKI and Wanfang ., Government reports and other grey literature ( e . g . , MSc and PhD theses , working reports from research groups ) were also considered ., There were no restrictions on language or study design ., County-level data on clonorchiasis collected in 27 endemic P\/A\/M in the second national survey were provided by the National Institute of Parasitic Diseases , Chinese Center for Disease Control and Prevention ( NIPD , China CDC; Shanghai , P . R . 
China ) ., Titles and abstracts of articles were screened to identify potentially relevant publications ., Full text articles were obtained from seemingly relevant pieces that were screened for C . sinensis infection prevalence data ., Data were excluded if they stemmed from school-based surveys , hospital-based surveys , case-control studies , clinical trials , drug efficacy studies , or intervention studies ( except for baseline or control group data ) ., Studies on clearly defined populations ( e . g . , travellers , military personnel , expatriates , nomads , or displaced or migrating populations ) that were not representative of the general population were also excluded ., We further excluded data based on direct smear or serum diagnostics due to the known low sensitivity or the inability to differentiate between past and active infection , respectively ., All included data were georeferenced and entered into the open-access Global Neglected Tropical Diseases ( GNTDs ) database 29 ., Environmental , socioeconomic , and demographic data were obtained from different accessible data sources ( Table 1 ) ., The data were extracted at the survey locations and at the centroids of a prediction grid with grid cells of 5\u00d75 km spatial resolution ., Land cover data were re-grouped to the following five categories: ( i ) forests , ( ii ) scrublands and grass , ( iii ) croplands , ( iv ) urban , and ( v ) wet areas ., They were summarized at each location ( of the survey or grid cell ) by the most frequent category over the period 2001\u20132004 for each pixel of the prediction grid ., Land surface temperature ( LST ) and normalized difference vegetation index ( NDVI ) were averaged annually ., We used human influence index ( HII ) , urban extents , and gross domestic product ( GDP ) per capita as socioeconomic proxies ., The latter was obtained from the P . R .
China yearbook full-text database at county-level for the year 2008 and georeferenced for the purpose of our study ., Details about data processing are provided in Lai et al . 26 ., We georeferenced surveys reporting aggregated data at county level by the county centroid and linked them to the average values of our covariates within the specific county ., The mean size of the corresponding counties was around 2 , 000 km2 ., We grouped survey years into two categories ( before 2005 and from 2005 onwards ) ., We selected 2005 as the cutoff year because after the second national survey on important parasitic diseases in 2001\u20132004 , the Chinese government set specific disease control targets and launched a series of control strategies 7 , 30 ., We standardized continuous variables to mean zero and standard deviation one ( SD = 1 ) ., We calculated Pearson\u2019s correlation between continuous variables and dropped one variable among pairs with correlation coefficient greater than 0 . 8 to avoid collinearity , which can lead to wrong parameter estimation 31 ., Researchers have suggested different correlation thresholds of collinearity ranging from 0 . 4 to 0 . 85 31 ., To test the sensitivity of our threshold , we also considered two other thresholds , i . e . , 0 . 5 and 0 . 
7 ., Three sets of variables were obtained corresponding to the three thresholds and were used separately in the variable selection procedure ., Furthermore , continuous variables were converted to two- or three-level categorical ones according to preliminary , exploratory , graphical analysis ., We carried out Bayesian variable selection to identify the most important predictors of the disease risk ., In particular , we assumed that the number of positive individuals Yi arises from a binomial distribution Yi\u223cBn ( pi , ni ) , where ni and pi are the number of individuals examined and the probability of infection at location i ( i = 1 , 2 , \u2026 , L ) , respectively ., We modeled the covariates on the logit scale , that is logit ( pi ) =\u03b20+\u2211k=1\u03b2k\u00d7Xi ( k ) , where \u03b2k is the regression coefficient of the kth covariate X ( k ) ., For a covariate in categorical form , \u03b2k is a vector of coefficients {\u03b2kl} , l = 1 , \u2026 , Mk , where Mk is the number of categories , otherwise it has a single element \u03b2k0 ., We followed a stochastic search variable selection approach 32 , and for each predictor X ( k ) we introduced a categorical indicator parameter Ik which takes values j , j = 0 , 1 , 2 with probabilities \u03c0j such that \u03c00 + \u03c01 + \u03c02 = 1 ., Ik = 0 indicates exclusion of the predictor from the model , Ik = 1 indicates inclusion of X ( k ) in linear form and Ik = 2 suggests inclusion in categorical form ., We adopted a mixture of Normal prior distributions for the parameters \u03b2k0 , known as a spike and slab prior , proposing a non-informative prior \u03b2k0\u223cN ( 0 , \u03c3B2 ) with probability \u03c01 in case X ( k ) is included in the model ( i . e .
, Ik = 1 ) in linear form ( slab ) and an informative prior \u03b2k0\u223cN ( 0 , \u03d10\u03c3B2 ) with probability ( 1 \u2212 \u03c01 ) , shrinking \u03b2k0 to zero ( spike ) if the linear form is excluded from the model ., \u03d10 is a constant , fixed to a small value i . e . , \u03d10 = 0 . 00025 forcing the variance to be close to zero ., In a formal way the above prior is written \u03b2k0\u223c\u03b41 ( Ik ) N ( 0 , \u03c3B2 ) + ( 1\u2212\u03b41 ( Ik ) ) N ( 0 , \u03d10\u03c3B2 ) where \u03b4j ( Ik ) is the Dirac function taking the value 1 if Ik = j and zero otherwise ., Similarly , for the coefficients {\u03b2kl} , l = 1 , \u2026 , Mk corresponding to the categorical form of X ( k ) with Mk categories , we assume that \u03b2kl\u223c\u03b42 ( Ik ) N ( 0 , \u03c3Bl2 ) + ( 1\u2212\u03b42 ( Ik ) ) N ( 0 , \u03d10\u03c3Bl2 ) ., For the inclusion\/exclusion probabilities \u03c0j , we adopt a non-informative Dirichlet prior distribution , i . e . ( \u03c00 , \u03c01 , \u03c02 ) T\u223cDirichlet ( 3 , a ) , a = ( 1 , 1 , 1 ) T ., We also used non-informative inverse gamma prior distributions , IG ( 2 . 01 , 1 . 01 ) for the variance hyperparameters \u03c3B2 and \u03c3Bl2 , l=1 , \u2026 , Mk ., We considered as important , those predictors with posterior inclusion probabilities of \u03c0j greater than 50% ., The above procedure fits all models generated by all combinations of our potential predictors and selects as important those predictors which are included in more than 50% of the models ., Bayesian geostatistical logistic regression models were fitted on C . 
sinensis survey data to obtain spatially explicit estimates of the infection risk ., The predictors selected from the variable selection procedure were included in the model ., The model extended the previous formulation by including location random effects on the logit scale , that is logit ( pi ) =\u03b20+\u2211k=1\u03b2k\u00d7Xi ( k ) +\u03b5i , where covariates X ( k ) are the predictors ( with functional forms ) that have been identified as important in the variable selection procedure ., We assumed that location-specific random effects \u03b5 = ( \u03b51 , \u2026 , \u03b5L ) T followed a multivariate normal prior distribution \u03b5\u223cMVN ( 0 , \u03a3 ) , with exponential correlation function \u03a3ij=\u03c3sp2exp\\u2061 ( \u2212\u03c1dij ) , where dij is the Euclidean distance between locations , and \u03c1 is the parameter corresponding to the correlation decay ., We also considered non-informative normal prior distributions for the regression coefficient \u03b2kl , l=0 , 1 , \u2026 , Mk , that is \u03b2kl\u223cN ( 0 , 102 ) , an inverse gamma prior distribution for the spatial variance \u03c3sp2\u223cIG ( 2 . 01 , 1 . 01 ) , and a gamma prior for the correlation decay \u03c1\u223cG ( 0 . 01 , 0 . 01 ) ., We estimated the spatial range as the minimum distance with spatial correlation less than 0 . 1 equal to \u2212log ( 0 . 1 ) \/\u03c1 ., We formulated the model in a Bayesian framework and applied Markov chain Monte Carlo ( MCMC ) simulation to estimate the model parameters in WinBUGS version 1 .
4 ( Imperial College London and Medical Research Council; London , United Kingdom ) 33 ., We assessed convergence of sampling chains using the Brooks-Gelman-Rubin diagnostic 34 ., We fitted the model on a random subset of 80% of survey locations and used the remaining 20% for model validation ., Mean error and the percentage of observations covered by 95% Bayesian credible intervals ( BCIs ) of posterior predicted prevalence were calculated to assess the model performance ., Bayesian kriging was employed to predict the C . sinensis infection risk at the centroids of a prediction grid over P . R . China with grid cells of 5 \u00d7 5 km spatial resolution 35 ., This spatial resolution is often used for estimation of disease risk across large regions as it is a good trade-off between disease control needs and computational burden ., Furthermore , predictions become unreliable when the grid cells have higher resolution than that of the predictors used in the model ., Population-adjusted prevalence ( median and 95% BCI ) for each province was calculated using samples of size 500 from the predictive posterior distribution estimated over the gridded surface ., These samples available for each grid cell were converted to samples from the predictive distribution of the population-adjusted prevalence for each province by multiplying them with the gridded population data , summing them over the grid cells within each province , and dividing them by the province population ., The samples from the population-adjusted prevalence for each province were summarized by their median and 95% BCI ., Our disease data consist of point-referenced ( village- or town-level ) and areal ( county-level ) data ., Analyses ignoring the areal data may lose valuable information , especially in regions where point-referenced data are sparse ., Here , we assumed a uniform distribution of infection risk within each survey county and treated the areal data as point-referenced data by setting the survey
locations as the centroids of the corresponding counties ., To assess the effect of this assumption on our estimates , we simulated data over a number of hypothetical survey locations within the counties and compared predictions based on approaches using the county aggregated data together with the data at individual georeferenced survey locations and using the data at individual georeferenced survey locations only ( excluding the county aggregated data ) ., The former approach gave substantially better disease risk prediction compared to the latter ., The methodology for the simulation study and its results are presented in Supplementary Information S1 Text and S1 Fig , respectively ., A data selection flow chart for the systematic review is presented in Fig 1 ., We identified 7 , 575 records through the literature search and obtained one additional report provided by NIPD , China CDC ( Shanghai , P . R . China ) ., According to our inclusion and exclusion criteria , we obtained 143 records for the final analysis , resulting in 691 surveys for C . sinensis at 633 unique locations published from 2000 onwards ., A summary of our survey data , stratified by province , is provided in Table 2 ., The geographic distribution of locations and observed C . sinensis prevalence are shown in Fig 2B ., We obtained data from all provinces except Inner Mongolia , Ningxia , Qinghai , and Tibet ., We collected more than 50 surveys in Guangdong , Guangxi , Hunan , and Jiangsu provinces ., Over 45% of surveys were conducted from 2005 onwards ., Around 90% of surveys used the Kato-Katz technique for diagnosis , while 0 . 14% of surveys had no information on the diagnostic technique employed ., The overall raw prevalence , calculated as the total number of people infected divided by the total number of people examined from all observed surveys , was 9 . 7% ., We considered a total of 12 variables ( i . e .
, land cover , urban extents , precipitation , GDP per capita , HII , soil moisture , elevation , LST in the daytime , LST at night , NDVI , distance to the nearest open water bodies , and pH in water ) for Bayesian variable selection ., Elevation , NDVI , distance to the nearest open water bodies , and land cover were selected for the final geostatistical logistic regression model ., The variables that were selected via the Bayesian variable selection method are listed in Supporting Information S1 Table ., The list was not affected by the collinearity threshold ( i . e . , 0 . 5 , 0 . 7 , and 0 . 8 ) we have considered ., The parameter estimates arising from the geostatistical model fit are shown in Table 3 ., The infection risk of C . sinensis was higher from 2005 onwards than that before 2005 ., Elevation had a negative effect on infection risk ., People living at distance between 2 . 5 and 7 . 0 km from the nearest open water bodies had a lower risk compared to those living in close proximity ( <2 . 5 km ) ., The risk of C . sinensis infection was lower in areas covered by forest , shrub , and grass compared to crop ., Furthermore , NDVI was positively correlated with the risk of C . sinensis infection ., Model validation indicated that the Bayesian geostatistical logistic regression models were able to correctly estimate ( within a 95% BCI ) 71 . 7% of locations for C . sinensis ., The mean error was -0 . 07% , suggesting that our model may slightly over-estimate the infection risk of C . sinensis ., Fig 2A shows the model-based predicted risk map of C . sinensis for P . R . China ., High prevalence ( \u226520% ) was estimated in some areas of southern and northeastern parts of Guangdong province , southwestern and northern parts of Guangxi province , southwestern part of Hunan province , the western part of bordering region of Heilongjiang and Jilin provinces , and the eastern part of Heilongjiang province ., Most regions of northwestern P . R .
China and eastern coastal areas had zero to very low prevalence ( <0 . 01% ) ., The prediction uncertainty is shown in Fig 2C ., Table 4 reports the population-adjusted predicted prevalence and the number of individuals infected with C . sinensis in P . R . China , stratified by province , based on the gridded population of 2010 ., The overall population-adjusted predicted prevalence of clonorchiasis was 1 . 18% ( 95% BCI: 1 . 10\u20131 . 25% ) in 2010 , corresponding to 14 . 8 million ( 95% BCI: 13 . 8\u201315 . 8 million ) infected individuals ., The three provinces with the highest infection risk were Heilongjiang ( 7 . 21% , 95% BCI: 5 . 95\u20138 . 84% ) , Guangdong ( 6 . 96% , 95% BCI: 6 . 62\u20137 . 27% ) , and Guangxi ( 5 . 52% , 95% BCI: 4 . 97\u20136 . 06% ) ., Provinces with very low risk estimates ( median predicted prevalence < 0 . 01% ) were Gansu , Ningxia , Qinghai , Shanghai , Shanxi , Tibet , and Yunnan ., Guangdong , Heilongjiang , and Guangxi were the top three provinces with the highest number of people infected: 6 . 34 million ( 95% BCI: 6 . 03\u20136 . 62 million ) , 3 . 05 million ( 2 . 52\u20133 . 74 million ) , and 2 . 08 million ( 1 . 87\u20132 . 28 million ) , respectively ., To our knowledge , we present the first model-based , high-resolution estimates of C . sinensis infection risk in P . R . China ., Risk maps were produced through Bayesian geostatistical modeling of clonorchiasis survey data from 2000 onwards , readily adjusting for environmental\/climatic predictors ., Our methodology is based on a rigorous approach for spatially explicit estimation of neglected tropical disease risk 27 ., Surveys pertaining to prevalence of C . sinensis in P . R . China were obtained through a systematic review in both Chinese and worldwide scientific databases to identify published work from 2000 onwards ., Additional data were provided by the NIPD , China CDC ., We estimated that 14 . 8 million ( 95% BCI: 13 . 8\u201315 . 8 million; 1 .
18% ) people in P . R . China were infected with C . sinensis in 2010 , which is almost 20% higher than the previous estimate of 12 . 5 million people for the year 2004 , based on empirical analysis of data from a large survey of clonorchiasis conducted from 2002\u20132004 in 27 endemic P\/A\/M ., The mean error for the model validation was slightly smaller than zero , suggesting that our model might somewhat over-estimate the true prevalence of clonorchiasis ., The overall raw prevalence of the observed data was 9 . 7% ., This can be an over-estimation of the overall prevalence as many surveys were likely to have been conducted in places with relatively high infection risk ( preferential sampling ) ., Our population-adjusted , model-based estimate was much lower ( 1 . 18% , 95% BCI: 1 . 10\u20131 . 25% ) and should better reflect the actual situation because it takes into account the distribution of the population and of the disease risk across the country ., Indeed , geostatistical models get their predictive strength from regions with large amounts of data that allow more accurate estimation of the relation between the disease risk and its predictors , making them powerful statistical tools for predicting the disease risk in areas with sparse data ., Still , the estimates in regions with scarce data should be interpreted with caution ., However , even though our data did not include surveys from four provinces ( Inner Mongolia , Ningxia , Qinghai , and Tibet ) , our model obtained low or zero prevalence estimates which are consistent with data summaries of the second national survey aggregated at provincial level for these four provinces 7 ., On the other hand , our model may overestimate the overall infection risk for Heilongjiang province , as the high risk areas in the southeastern and southwestern parts of the province may influence the prediction in the northern part , where no observed data were available ., We found an increase of infection
risk of C . sinensis for the period from 2005 onwards , which may be due to several reasons , including higher consumption of raw fish , lack of self-protection awareness of food hygiene , low health education , and rapid growth of aquaculture 13 , 36 ., Consumption of raw freshwater fish is related to C . sinensis infection risk 15 , 37; however , such information is unavailable for P . R . China ., Elevation was one of the most important predictors in our model ., Different elevation levels correspond to different environmental\/climatic conditions that can influence the distribution of intermediate host snails ., Our results show a positive association between NDVI and the prevalence of C . sinensis ., We found that distance to the nearest water bodies was significantly related to infection risk ., Traditionally , areas adjacent to water bodies were reported to have a higher prevalence of C . sinensis; however , due to improvement of trade and transportation channels , this situation may be changing , which may explain our result showing a non-linear relationship between distance to nearest water bodies and infection risk 2 , 13 ., Furthermore , our analysis supports earlier observations , suggesting an association between land cover type and infection risk 13 , 14 ., Interestingly , the risk of infection with other neglected tropical diseases , such as soil-transmitted helminthiasis and schistosomiasis , has declined in P . R . China over the past 10\u201315 years due to socioeconomic development and large-scale interventions 38 ., However , clonorchiasis , the major food-borne trematodiasis in P . R . China , shows an increased risk in recent years , which indicates that the Chinese government needs to pay more attention to this disease ., Several areas with high infection risk in P . R .
China are indicated ( Supporting Information S2 Fig ) , where control strategies should be focused ., The WHO treatment guidelines for clonorchiasis advocate praziquantel administration for all residents every year in highly endemic areas ( prevalence \u226520% ) and for all residents every two years or individuals regularly eating raw fish every year in moderately endemic areas ( prevalence <20% ) 19 ., As re-infection or super-infection is common in heavily endemic areas , repeated preventive chemotherapy is necessary to interrupt transmission 18 ., On the other hand , to maintain control sustainability , a comprehensive control strategy must be implemented , including IEC , preventive chemotherapy , and improvement of sanitation 20 , 21 ., Through IEC , residents may conscientiously reduce or stop consumption of raw fish ., Furthermore , by removing unimproved latrines around fish ponds , the likelihood of fish becoming infected with cercariae declines 39 ., A successful example of comprehensive control strategies is Shandong province , where clonorchiasis was endemic , but after rigorous implementation of comprehensive control programs for more than 10 years , the disease has been well controlled 40 ., The Chinese Ministry of Health set a goal to halve the prevalence of clonorchiasis ( compared to that observed in the second national survey in 2001\u20132004 ) in highly endemic areas by 2015 using integrated control measures 30 ., In practice , control measures are carried out in endemic villages or counties with available survey data ., However , large-scale control activities are lacking in most endemic provinces , as control plans are difficult to make when the epidemiology is only known at provincial level 41 ., Our high-resolution infection risk estimates provide important information for targeted control ., Our analysis is based on historical survey data compiled from studies that may differ in study design , diagnostic methods and
distribution of age groups ., As more than 90% of surveys applied the Kato-Katz technique as diagnostic method , we assumed similar diagnostic sensitivity across all surveys ., However , the sensitivity may vary in space as a function of infection intensity ., Most of the survey data are aggregated over age groups , thus we could not obtain age-specific risk estimates ., Moreover , bias might occur when the age distribution of the survey population differs across locations , as different age groups may have different infection risks ., In conclusion , we present the first model-based , high-resolution risk estimates of C . sinensis infection in P . R . China , and identified areas of high priority for control ., Our findings show an increased risk from 2005 onwards , suggesting that the government should put more effort into control activities for clonorchiasis in P . R . China .","headings":"Introduction, Methods, Results, Discussion","abstract":"Clonorchiasis , one of the most important food-borne trematodiases , affects more than 12 million people in the People\u2019s Republic of China ( P . R . China ) ., Spatially explicit risk estimates of Clonorchis sinensis infection are needed in order to target control interventions ., Georeferenced survey data pertaining to infection prevalence of C . sinensis in P . R . China from 2000 onwards were obtained via a systematic review in PubMed , ISI Web of Science , Chinese National Knowledge Internet , and Wanfang Data from January 1 , 2000 until January 10 , 2016 , with no restriction of language or study design ., Additional disease data were provided by the National Institute of Parasitic Diseases , Chinese Center for Disease Control and Prevention in Shanghai ., Environmental and socioeconomic proxies were extracted from remote-sensing and other data sources ., Bayesian variable selection was carried out to identify the most important predictors of C .
sinensis risk ., Geostatistical models were applied to quantify the association between infection risk and the predictors of the disease , and to predict the risk of infection across P . R . China at high spatial resolution ( over a grid with grid cell size of 5\u00d75 km ) ., We obtained clonorchiasis survey data at 633 unique locations in P . R . China ., We observed that the risk of C . sinensis infection increased over time , particularly from 2005 onwards ., We estimate that around 14 . 8 million ( 95% Bayesian credible interval 13 . 8\u201315 . 8 million ) people in P . R . China were infected with C . sinensis in 2010 ., Highly endemic areas ( \u2265 20% ) were concentrated in southern and northeastern parts of the country ., The provinces with the highest risk of infection and the largest number of infected people were Guangdong , Guangxi , and Heilongjiang ., Our results provide spatially relevant information for guiding clonorchiasis control interventions in P . R . China ., The trend toward higher risk of C . sinensis infection in the recent past urges the Chinese government to pay more attention to the public health importance of clonorchiasis and to target interventions to high-risk areas .","summary":"Clonorchiasis is an important food-borne trematodiasis and it has been estimated that more than 12 million people in China are affected ., Precise information on where the disease occurs can help to identify priority areas where control interventions should be implemented ., We collected data from recent surveys on clonorchiasis and applied Bayesian geostatistical models to produce model-based , high-resolution risk maps for clonorchiasis in China ., We found an increasing trend of infection risk from 2005 onwards ., We estimated that approximately 14 . 8 million people in China were infected with Clonorchis sinensis in 2010 ., Areas where the high prevalence of C .
sinensis was predicted were concentrated in the provinces of Guangdong , Guangxi , and Heilongjiang ., Our results suggest that the Chinese government should pay more attention to the public health importance of clonorchiasis and that specific control efforts should be implemented in high-risk areas .","keywords":"invertebrates, medicine and health sciences, helminths, china, tropical diseases, geographical locations, vertebrates, parasitic diseases, animals, simulation and modeling, trematodes, freshwater fish, clonorchis sinensis, foodborne trematodiases, probability distribution, mathematics, neglected tropical diseases, infectious disease control, research and analysis methods, infectious diseases, fishes, flatworms, clonorchiasis, research assessment, probability theory, people and places, helminth infections, asia, clonorchis, systematic reviews, biology and life sciences, physical sciences, organisms","toc":null} +{"Unnamed: 0":2433,"id":"journal.pcbi.1005331","year":2017,"title":"Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks","sections":"In the life sciences , the abundance of experimental data is rapidly increasing due to the advent of novel measurement devices ., Genome and transcriptome sequencing , proteomics and metabolomics provide large datasets 1 at a steadily decreasing cost ., While these genome-scale datasets allow for a variety of novel insights 2 , 3 , a mechanistic understanding at the genome scale is limited by the scalability of currently available computational methods ., For small- and medium-scale biochemical reaction networks mechanistic modeling contributed greatly to the comprehension of biological systems 4 ., Ordinary differential equation ( ODE ) models are nowadays widely used and a variety of software tools are available for model development , simulation and statistical inference 5\u20137 ., Despite great advances during the last decade , mechanistic modeling of biological systems using ODEs is
still limited to processes with a few dozen biochemical species and a few hundred parameters ., For larger models rigorous parameter inference is intractable ., Hence , new algorithms are required for massive and complex genomic datasets and the corresponding genome-scale models ., Mechanistic modeling of a genome-scale biochemical reaction network requires the formulation of a mathematical model and the inference of its parameters , e . g . reaction rates , from experimental data ., The construction of genome-scale models is mostly based on prior knowledge collected in databases such as KEGG 8 , REACTOME 9 and STRING 10 ., Based on these databases a series of semi-automatic methods have been developed for the assembly of the reaction graph 11\u201313 and the derivation of rate laws 14 , 15 ., As model construction is challenging and as the information available in databases is limited , in general , a collection of candidate models can be constructed to compensate for flaws in individual models 16 ., For all these model candidates the parameters have to be estimated from experimental data , a challenging and usually ill-posed problem 17 ., To determine maximum likelihood ( ML ) and maximum a posteriori ( MAP ) estimates for model parameters , high-dimensional nonlinear and non-convex optimization problems have to be solved ., The non-convexity of the optimization problem poses challenges , such as local minima , which have to be addressed by the selection of optimization methods ., Commonly used global optimization methods are multi-start local optimization 18 , evolutionary and genetic algorithms 19 , particle swarm optimizers 20 , simulated annealing 21 and hybrid optimizers 22 , 23 ( see 18 , 24\u201326 for a comprehensive survey ) ., For ODE models with a few hundred parameters and state variables multi-start local optimization methods 18 and related hybrid methods 27 have proven to be successful ., These optimization methods use the gradient of the objective
function to establish fast local convergence ., While the convergence of gradient-based optimizers can be significantly improved by providing exact gradients ( see e . g . 18 , 28 , 29 ) , the gradient calculation is often the computationally most demanding step ., The gradient of the objective function is usually approximated by finite differences ., As this method is neither numerically robust nor computationally efficient , several parameter estimation toolboxes employ forward sensitivity analysis ., This decreases the numerical error and computation time 18 ., However , the dimension of the forward sensitivity equations increases linearly with both the number of state variables and parameters , rendering its application for genome-scale models problematic ., In other research fields such as mathematics and engineering , adjoint sensitivity analysis is used for parameter estimation in ordinary and partial differential equation models ., Adjoint sensitivity analysis is known to be superior to forward sensitivity analysis when the number of parameters is large 30 ., Adjoint sensitivity analysis has been used for inference of biochemical reaction networks 31\u201333 ., However , the methods were never picked up by the systems and computational biology community , presumably due to the theoretical complexity of adjoint methods , a missing evaluation on a set of benchmark models , and the absence of an easy-to-use toolbox ., In this manuscript , we provide an intuitive description of adjoint sensitivity analysis for parameter estimation in genome-scale biochemical reaction networks ., We describe the end value problem for the adjoint state in the case of discrete-time measurements and provide a user-friendly implementation to compute it numerically ., The method is evaluated on seven medium- to large-scale models ., By using adjoint sensitivity analysis , the computation time for calculating the objective function gradient becomes effectively independent of the
number of parameters with respect to which the gradient is evaluated ., Furthermore , for large-scale models adjoint sensitivity analysis can be multiple orders of magnitude faster than other gradient calculation methods used in systems biology ., The reduction of the time for gradient evaluation is reflected in the computation time of the optimization ., This renders parameter estimation for large-scale models feasible on standard computers , as we illustrate for a comprehensive kinetic model of ErbB signaling ., We consider ODE models for biochemical reaction networks , $\dot{x} = f(x , \theta) , \quad x(t_0) = x_0(\theta) , \quad (1)$ in which $x(t , \theta) \in \mathbb{R}^{n_x}$ is the concentration vector at time t and $\theta \in \mathbb{R}^{n_\theta}$ denotes the parameter vector ., Parameters are usually kinetic constants , such as binding affinities as well as synthesis , degradation and dimerization rates ., The vector field $f : \mathbb{R}^{n_x} \times \mathbb{R}^{n_\theta} \mapsto \mathbb{R}^{n_x}$ describes the temporal evolution of the concentration of the biochemical species ., The mapping $x_0 : \mathbb{R}^{n_\theta} \mapsto \mathbb{R}^{n_x}$ provides the parameter dependent initial condition at time $t_0$ ., As available experimental techniques usually do not provide measurements of the concentration of all biochemical species , we consider the output map $h : \mathbb{R}^{n_x} \times \mathbb{R}^{n_\theta} \mapsto \mathbb{R}^{n_y}$ ., This map models the measurement process , i . e . the dependence of the output ( or observables ) $y(t , \theta) \in \mathbb{R}^{n_y}$ at time point t on the state variables and the parameters , $y(t , \theta) = h(x(t , \theta) , \theta) . \quad (2)$ The $i$-th observable $y_i$ can be the concentration of a particular biochemical species ( e . g . $y_i = x_l$ ) as well as a function of several concentrations and parameters ( e . g .
$y_i = \theta_m ( x_{l_1} + x_{l_2} )$ ) ., We consider discrete-time , noise corrupted measurements , $\bar{y}_{ij} = y_i(t_j , \theta) + \epsilon_{ij} , \quad \epsilon_{ij} \sim \mathcal{N}(0 , \sigma_{ij}^2) , \quad (3)$ yielding the experimental data $\mathcal{D} = \{ ( ( \bar{y}_{ij} )_{i=1}^{n_y} , t_j ) \}_{j=1}^{N}$ ., The number of time points at which measurements have been collected is denoted by N ., Remark: For simplicity of notation we assume throughout the manuscript that the noise variances , $\sigma_{ij}^2$ , are known and that there are no missing values ., However , the methods we will present in the following as well as the respective implementations also work when this is not the case ., For details we refer to the S1 Supporting Information ., We estimate the unknown parameter \u03b8 from the experimental data $\mathcal{D}$ using ML estimation ., Parameters are estimated by minimizing the negative log-likelihood , an objective function indicating the difference between experiment and simulation ., In the case of independent , normally distributed measurement noise with known variances the objective function is given by $J(\theta) = \frac{1}{2} \sum_{i=1}^{n_y} \sum_{j=1}^{N} \left( \frac{\bar{y}_{ij} - y_i(t_j , \theta)}{\sigma_{ij}} \right)^2 , \quad (4)$ where $y_i(t_j , \theta)$ is the value of the output computed from Eqs ( 1 ) and ( 2 ) for parameter value \u03b8 ., The minimization , $\theta^* = \arg \min_{\theta \in \Theta} J(\theta) , \quad (5)$ of this weighted least-squares objective J yields the ML estimate of the parameters ., The optimization problem Eq ( 5 ) is in general nonlinear and non-convex ., Thus , the objective function can possess multiple local minima and global optimization strategies need to be used ., For ODE models multi-start local optimization has been shown to perform well 18 ., In multi-start local optimization , independent local optimization runs are initialized at randomly sampled initial points in parameter space ., The individual local optimizations are run until the stopping criteria are met and the results are
collected ., The collected results are visualized by sorting them according to the final objective function value ., This visualization reveals local optima and the size of their basin of attraction ., For details we refer to the survey by Raue et al . 18 ., In this study , initial points are generated using Latin hypercube sampling and local optimization is performed using the interior-point and the trust-region-reflective algorithm implemented in the MATLAB function fmincon . m ., Gradients are computed using finite differences , forward sensitivity analysis or adjoint sensitivity analysis ., A naive approximation to the gradient of the objective function with respect to $\theta_k$ is obtained by finite differences , $\frac{\partial J}{\partial \theta_k} \approx \frac{J(\theta + a e_k) - J(\theta - b e_k)}{a + b} , \quad (6)$ with $a , b \geq 0$ and the kth unit vector $e_k$ ., In practice forward differences ( a = \u03f5 , b = 0 ) , backward differences ( a = 0 , b = \u03f5 ) and central differences ( a = \u03f5 , b = \u03f5 ) are widely used ., For the computation of forward finite differences , this yields a three-step procedure ., In theory , forward and backward differences provide approximations of order \u03f5 while central differences provide more accurate approximations of order $\epsilon^2$ , provided that J is sufficiently smooth ., In practice the optimal choice of a and b depends on the accuracy of the numerical integration 18 ., If the integration accuracy is high , an accurate approximation of the gradient can be achieved using a , b \u226a 1 ., For lower integration accuracies , larger values of a and b usually yield better approximations ., A good choice of a and b is typically not clear a priori ( cf .
34 and the references therein ) ., The computational complexity of evaluating gradients using finite differences is affine linear in the number of parameters ., Forward and backward differences require in total $n_\theta + 1$ function evaluations ., Central differences require in total $2 n_\theta$ function evaluations ., As already a single simulation of a large-scale model is time-consuming , the gradient calculation using finite differences can be limiting ., State-of-the-art systems biology toolboxes , such as the MATLAB toolbox Data2Dynamics 7 , use forward sensitivity analysis for gradient evaluation ., The gradient of the objective function is $\frac{\partial J}{\partial \theta_k} = - \sum_{i=1}^{n_y} \sum_{j=1}^{N} \frac{\bar{y}_{ij} - y_i(t_j , \theta)}{\sigma_{ij}^2} \, s_{i,k}^y(t_j) , \quad (7)$ with $s_{i,k}^y(t) : [t_0 , t_N] \mapsto \mathbb{R}$ denoting the sensitivity of output $y_i$ at time point t with respect to parameter $\theta_k$ ., Governing equations for the sensitivities are obtained by differentiating Eqs ( 1 ) and ( 2 ) with respect to $\theta_k$ and reordering the derivatives ., This yields $\dot{s}_k^x = \frac{\partial f}{\partial x} s_k^x + \frac{\partial f}{\partial \theta_k} , \quad s_k^x(t_0) = \frac{\partial x_0}{\partial \theta_k} , \qquad s_{i,k}^y = \frac{\partial h_i}{\partial x} s_k^x + \frac{\partial h_i}{\partial \theta_k} , \quad (8)$ with $s_k^x(t) : [t_0 , t_N] \mapsto \mathbb{R}^{n_x}$ denoting the sensitivity of the state x with respect to $\theta_k$ ., Note that here and in the following , the dependencies of f , h , $x_0$ and their ( partial ) derivatives on t , x and \u03b8 are not stated explicitly but have to be assumed ., For a more detailed presentation we refer to the S1 Supporting Information Section 1 .
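The finite-difference approximation of Eq ( 6 ) and its cost in function evaluations can be sketched in a few lines; the following is a minimal, illustrative Python implementation (the paper's tooling is MATLAB-based, so this is a generic sketch, not the authors' code) applied to a hypothetical quadratic objective:

```python
import numpy as np

def finite_difference_gradient(J, theta, eps=1e-6, scheme="central"):
    """Approximate the gradient of J at theta by finite differences (cf. Eq 6).

    'central' uses a = b = eps (error O(eps^2), 2*n_theta evaluations of J);
    'forward' uses a = eps, b = 0 (error O(eps), n_theta + 1 evaluations).
    """
    n = len(theta)
    grad = np.empty(n)
    J0 = J(theta)                      # reused by the forward scheme
    for k in range(n):
        e_k = np.zeros(n)
        e_k[k] = 1.0                   # k-th unit vector
        if scheme == "central":
            grad[k] = (J(theta + eps * e_k) - J(theta - eps * e_k)) / (2 * eps)
        else:
            grad[k] = (J(theta + eps * e_k) - J0) / eps
    return grad

# Hypothetical toy objective (weighted least squares, invented data):
theta0 = np.array([1.0, 2.0, 3.0])
J = lambda th: 0.5 * np.sum(((th - np.array([0.5, 1.0, 1.5])) / 0.1) ** 2)
print(finite_difference_gradient(J, theta0))   # ≈ [50., 100., 150.]
```

The loop makes the affine-linear cost in the number of parameters explicit: each additional parameter adds one (forward) or two (central) full model evaluations, which is exactly what becomes prohibitive for large-scale models.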
Forward sensitivity analysis consists of three steps ., Steps 1 and 2 are often combined , which enables simultaneous error control and the reuse of the Jacobian 30 ., The simultaneous error control allows for the calculation of accurate and reliable gradients ., The reuse of the Jacobian improves the computational efficiency ., The number of state and output sensitivities increases linearly with the number of parameters ., While this is unproblematic for small- and medium-sized models , solving forward sensitivity equations for systems with several thousand state variables poses technical challenges ., Code compilation can take multiple hours and require more memory than is available on standard machines ., Furthermore , while forward sensitivity analysis is usually faster than finite differences , in practice the complexity still increases roughly linearly with the number of parameters ., In the numerics community , adjoint sensitivity analysis is frequently used to compute the gradients of a functional with respect to the parameters if the functional depends on the solution of a differential equation 35 ., In contrast to forward sensitivity analysis , adjoint sensitivity analysis does not rely on the state sensitivities $s_k^x(t)$ but on the adjoint state $p(t)$ ., The calculation of the objective function gradient using adjoint sensitivity analysis consists of three steps ., Steps 1 and 2 , which are usually the computationally intensive steps , are independent of the parameter dimension ., The complexity of Step 3 increases linearly with the number of parameters , yet the computation time required for this step is typically negligible ., The calculation of state and output trajectories ( Step 1 ) is standard and does not require special methods ., The non-trivial element in adjoint sensitivity analysis is the calculation of the adjoint state $p(t) \in \mathbb{R}^{n_x}$ ( Step 2 ) ., For discrete-time measurements\u2014the usual case in systems and computational
biology\u2014the adjoint state is piece-wise continuous in time and defined by a sequence of backward differential equations ., For $t > t_N$ , the adjoint state is zero , $p(t) = 0$ ., Starting from this end value the trajectory of the adjoint state is calculated backwards in time , from the last measurement $t = t_N$ to the initial time $t = t_0$ ., At the time points at which measurements have been collected , $t_N , \ldots , t_1$ , the adjoint state is reinitialised as $p(t_j) = \lim_{t \to t_j^+} p(t) + \sum_{i=1}^{n_y} \left( \frac{\partial h_i}{\partial x} \right)^T \frac{\bar{y}_{ij} - y_i(t_j)}{\sigma_{ij}^2} , \quad (9)$ which usually results in a discontinuity of $p(t)$ at $t_j$ ., Starting from the end value $p(t_j)$ as defined in Eq ( 9 ) the adjoint state evolves backwards in time until the next measurement point $t_{j-1}$ or the initial time $t_0$ is reached ., This evolution is governed by the time-dependent linear ODE $\dot{p} = - \left( \frac{\partial f}{\partial x} \right)^T p . \quad (10)$ The repeated evaluation of Eqs ( 9 ) and ( 10 ) until $t = t_0$ yields the trajectory of the adjoint state ., Given this trajectory , the gradient of the objective function with respect to the individual parameters is $\frac{\partial J}{\partial \theta_k} = - \int_{t_0}^{t_N} p^T \frac{\partial f}{\partial \theta_k} \, \mathrm{d}t - \sum_{i , j} \frac{\partial h_i}{\partial \theta_k} \frac{\bar{y}_{ij} - y_i(t_j)}{\sigma_{ij}^2} - p(t_0)^T \frac{\partial x_0}{\partial \theta_k} . \quad (11)$ Accordingly , the availability of the adjoint state simplifies the calculation of the objective function gradient to $n_\theta$ one-dimensional integration problems over short time intervals whose union is the total time interval $[t_0 , t_N]$ ., Algorithm 1: Gradient evaluation using adjoint sensitivity analysis ., % State and output: Step 1: Compute state and output trajectories using Eqs ( 1 ) and ( 2 ) ., % Adjoint state: Step 2 . 1: Set end value for adjoint state , $\forall t > t_N: p(t) = 0$ ., for j = N to 1 do: Step 2 . 2: Compute end value for adjoint state according to the jth measurement using Eq ( 9 ) ., Step 2 . 3: Compute trajectory of adjoint state on the time interval $(t_{j-1} , t_j]$ by solving Eq ( 10 ) ., end ., % Objective function gradient: for k = 1 to $n_\theta$ do: Step 3: Evaluate the sensitivity $\partial J / \partial \theta_k$ using Eq ( 11 ) ., end ., Pseudo-code for the calculation of the adjoint state and the objective function gradient is provided in Algorithm 1 ., We note that in order to use standard ODE solvers the end value problem Eq ( 10 ) can be transformed into an initial value problem by applying the time transformation $\tau = t_N - t$ ., The derivation of the adjoint sensitivities for discrete-time measurements is provided in the S1 Supporting Information Section 1 ., The key difference of adjoint compared to forward sensitivity analysis is that the derivatives of the state and the output trajectory with respect to the parameters are not explicitly calculated ., Instead , the sensitivity of the objective function is directly computed ., In practice this results in a computation time of the gradient which is almost independent of the number of parameters ., A visual summary of the different sensitivity analysis methods is provided in Fig 1 .
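The three steps and Eqs ( 9 )-( 11 ) can be made concrete with a small numerical sketch; the following Python code (an illustration of the scheme, not the AMICI implementation) applies the backward sweep to a hypothetical one-state model with invented parameters and data:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical toy model (not from the paper): x' = -th[0]*x, x(0) = th[1], y = x.
def f(x, th):     return np.array([-th[0] * x[0]])
def dfdx(x, th):  return np.array([[-th[0]]])           # n_x x n_x
def dfdth(x, th): return np.array([[-x[0], 0.0]])       # n_x x n_theta
def x0(th):       return np.array([th[1]])
def dx0dth(th):   return np.array([[0.0, 1.0]])         # n_x x n_theta

th = np.array([0.7, 2.0])
t_meas = np.array([1.0, 2.0])          # measurement times t_1 < t_2 = t_N
y_bar = np.array([1.1, 0.6])           # invented "data"; here h(x) = x
sigma = 0.1

# Step 1: solve the forward problem, keeping dense output for the backward pass.
fwd = solve_ivp(lambda t, x: f(x, th), (0.0, t_meas[-1]), x0(th),
                dense_output=True, rtol=1e-10, atol=1e-12)
y = np.array([fwd.sol(tj)[0] for tj in t_meas])

# Step 2: backward sweep with reinitialisation at each measurement (Eqs 9-10),
# augmenting the adjoint with quadratures q_k for the integral in Eq (11).
def backward_rhs(t, z):
    p, x = z[:1], fwd.sol(t)
    return np.concatenate([-dfdx(x, th).T @ p,      # Eq (10)
                           dfdth(x, th).T @ p])     # dq_k/dt = p^T df/dth_k

z = np.zeros(3)                        # [p, q_1, q_2]; p(t) = 0 for t > t_N
times = np.concatenate([[0.0], t_meas])
for j in range(len(t_meas), 0, -1):
    z[0] += (y_bar[j - 1] - y[j - 1]) / sigma**2    # Eq (9); dh/dx = 1 here
    z = solve_ivp(backward_rhs, (times[j], times[j - 1]), z,
                  rtol=1e-10, atol=1e-12).y[:, -1]

# Step 3: assemble the gradient (Eq 11); dh/dth = 0 for this model, and the
# backward-in-time quadrature already carries the minus sign of Eq (11).
p0, q = z[0], z[1:]
grad = q - p0 * dx0dth(th)[0]
print(grad)
```

Note that the backward system has fixed size $n_x$ plus one scalar quadrature per parameter, so Steps 1 and 2 do not grow with $n_\theta$, which is the source of the scaling advantage discussed above.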
Besides the procedures , the computational complexity of each method is also indicated ., The implementation of adjoint sensitivity analysis is non-trivial and error-prone ., To render this method available to the systems and computational biology community , we implemented the Advanced Matlab Interface for CVODES and IDAS ( AMICI ) ., This toolbox allows for a simple symbolic definition of ODE models ( 1 ) and ( 2 ) as well as the automatic generation of native C code for efficient numerical simulation ., The compiled binaries can be executed from MATLAB for the numerical evaluation of the model and the objective function gradient ., Internally , the SUNDIALS solver suite is employed 30 , which offers a broad spectrum of state-of-the-art methods for the numerical integration of differential equations ., In addition to the standard functionality of SUNDIALS , our implementation allows for parameter- and state-dependent discontinuities ., The toolbox and a detailed documentation can be downloaded from http:\/\/ICB-DCM . github . io\/AMICI\/ ., For the comparison of different gradient calculation methods , we consider a set of standard models from the Biomodels Database 37 and the BioPreDyn benchmark suite 27 ., From the Biomodels Database we considered models for the regulation of insulin signaling by oxidative stress ( BM1 ) 38 , the sea urchin endomesoderm network ( BM2 ) 39 , and the ErbB signaling pathway ( BM3 ) 40 ., From the BioPreDyn benchmark suite we considered models for central carbon metabolism in E . coli ( B2 ) 41 , enzymatic and transcriptional regulation of carbon metabolism in E . coli ( B3 ) 42 , metabolism of CHO cells ( B4 ) 43 , and signaling downstream of EGF and TNF ( B5 ) 44 ., Genome-wide kinetic metabolic models of S . cerevisiae and E .
coli ( B1 ) 45 contained in the BioPreDyn benchmark suite and the Biomodels Database 15 , 45 were disregarded due to previously reported numerical problems 27 , 45 ., The considered models possess 18-500 state variables and 86-1801 parameters ., A comprehensive summary regarding the investigated models is provided in Table 1 ., To obtain realistic simulation times for adjoint sensitivities , realistic experimental data are necessary ( see S1 Supporting Information Section 3 ) ., For the BioPreDyn models we used the data provided in the suite , for the ErbB signaling pathway we used the experimental data provided in the original publication and for the remaining models we generated synthetic data using the nominal parameters provided in the SBML definition ., In the following , we will compare the performance of forward and adjoint sensitivities for these models ., As the model of ErbB signaling has the largest number of state variables and is of high practical interest in the context of cancer research , we will analyze the scalability of finite differences and forward and adjoint sensitivity analysis for this model in greater detail ., Moreover , we will compare the computational efficiency of forward and adjoint sensitivity analysis for parameter estimation for the model of ErbB signaling ., The evaluation of the objective function gradient is the computationally demanding step in deterministic local optimization ., For this reason , we compared the computation time for finite differences , forward sensitivity analysis and adjoint sensitivity analysis and studied the scalability of these approaches at the nominal parameter \u03b80 , which was provided in the SBML definitions of the investigated models ., For the comprehensive model of ErbB signaling we found that the computation times for finite differences and forward sensitivity analysis behave similarly ( Fig 2a ) ., As predicted by the theory , for both methods the computation time increased linearly with the number
of parameters ., Still , forward sensitivities are computationally more efficient than finite differences , as reported in previous studies 18 ., Adjoint sensitivity analysis requires the solution to the adjoint problem , independent of the number of parameters ., For the considered model , solving the adjoint problem a single time takes roughly 2-3-times longer than solving the forward problem ., Accordingly , adjoint sensitivity analysis with respect to a small number of parameter is disadvantageous ., However , adjoint sensitivity analysis scales better than forward sensitivity analysis and finite differences ., Indeed , the computation time for adjoint sensitivity analysis is almost independent of the number of parameters ., While computing the sensitivity with respect to a single parameter takes on average 10 . 09 seconds , computing the sensitivity with respect to all 219 parameters takes merely 14 . 32 seconds ., We observe an average increase of 1 . 9 \u22c5 10\u22122 seconds per additional parameter for adjoint sensitivity analysis which is significantly lower than the expected 3 . 24 seconds for forward sensitivity analysis and 4 . 
72 seconds for finite differences ., If the sensitivities with respect to more than 4 parameters are required , adjoint sensitivity analysis outperforms both forward sensitivity analysis and finite differences ., For 219 parameters , adjoint sensitivity analysis is 48-times faster than forward sensitivities and 72-times faster than finite differences ., To ensure that the observed speedup is not unique to the model of ErbB signaling ( BM3 ) we also evaluated the speedup of adjoint sensitivity analysis over forward sensitivity analysis on models B2-5 and BM1-2 ., The results are presented in Fig 2b and 2c ., We find that for all models , but model B3 , gradient calculation using adjoint sensitivity is computationally more efficient than gradient calculation using forward sensitivities ( speedup > 1 ) ., For model B3 the backwards integration required a much higher number of integration steps ( 4 \u22c5 106 ) than the forward integration ( 6 \u22c5 103 ) , which results to a poor performance of the adjoint method ., One reason for this poor performance could be that , in contrast to other models , the right hand side of the differential equation of model B3 consists almost exclusively of non-linear , non-mass-action terms ., Excluding model B3 we find an polynomial increase in the speedup with respect to the number of parameters n\u03b8 ( Fig 2b ) , as predicted by theory ., Moreover , we find that the product n\u03b8 \u22c5 nx , which corresponds to the size of the system of forward sensitivity equations , is an even better predictor ( R2 = 0 . 99 ) than n\u03b8 alone ( R2 = 0 . 
83 ) ., This suggest that adjoint sensitivity analysis is not only beneficial for systems with a large number of parameters , but can also be beneficial for systems with a large number of state variables ., As we are not aware of any similar observations in the mathematics or engineering community , this could be due to the structure of biological reaction networks ., Our results suggest that adjoint sensitivity analysis is an excellent candidate for parameter estimation in large-scale models as it provides good scaling with respect to both , the number of parameters and the number of state variables ., Efficient local optimization requires accurate and robust gradient evaluation 18 ., To assess the accuracy of the gradient computed using adjoint sensitivity analysis , we compared this gradient to the gradients computed via finite differences and forward sensitivity analysis ., Fig 3 visualizes the results for the model of ErbB signaling ( BM3 ) at the nominal parameter \u03b80 which was provided in the SBML definition ., The results are similar for other starting points ., The comparison of the gradients obtained using finite differences and adjoint sensitivity analysis revealed small discrepancies ( Fig 3a ) ., The median relative difference ( as defined in S1 Supporting Information Section, 2 ) between finite differences and adjoint sensitivity analysis is 1 . 5 \u22c5 10\u22123 ., For parameters \u03b8k to which the objective function J was relatively insensitive , \u2202J\/\u2202\u03b8k < 10\u22122 , there are much higher discrepancies , up to a relative error of 2 . 
9 \u22c5 103 ., Forward and adjoint sensitivity analysis yielded almost identical gradient elements over several orders of magnitude ( Fig 3b ) ., This was expected as both forward and adjoint sensitivity analysis exploit error-controlled numerical integration for the sensitivities ., To assess numerical robustness of adjoint sensitivity analysis , we also compared the results obtained for high and low integration accuracies ( Fig 3c ) ., For both comparisons we found the similar median relative and maximum relative error , namely 2 . 6 \u22c5 10\u22126 and 9 . 3 \u22c5 10\u22124 ., This underlines the robustness of the sensitivitity based methods and ensures that differences observed in Fig 3a indeed originate from the inaccuracy of finite differences ., Our results demonstrate that adjoint sensitivity analysis provides objective function gradients which are as accurate and robust as those obtained using forward sensitivity analysis ., As adjoint sensitivity analysis provides accurate gradients for a significantly reduced computational cost , this can boost the performance of a variety of optimization methods ., Yet , in contrast to forward sensitivity analysis , adjoint sensitivities do not yield sensitivities of observables and it is thus not possible to approximate the Hessian of the objective function via the Fisher Information Matrix 46 ., This prohibits the use of possibly more efficient Newton-type algorithms which exploit second order information ., Therefore , adjoint sensitivities are limited to quasi-Newton type optimization algorithms , e . g . 
the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm 47, 48, for which the Hessian is iteratively approximated from the gradients during optimization. In principle, the exact calculation of the Hessian and of Hessian-vector products is possible via second-order forward and adjoint sensitivity analysis 49, 50, which possess similar scaling properties as the first-order methods. However, both second-order approaches come at additional cost and are thus not considered in this study. To assess whether the use of adjoint sensitivities for optimization is still viable, we compared the performance of the interior-point algorithm using adjoint sensitivity analysis with the BFGS approximation of the Hessian to the performance of the trust-region reflective algorithm using forward sensitivity analysis with the Fisher Information Matrix as approximation of the Hessian. For both algorithms we used the MATLAB implementation in fmincon.m. The employed setup of the trust-region algorithm is equivalent to the use of lsqnonlin.m, which is the default optimization algorithm in the MATLAB toolbox Data2Dynamics 7, which was employed to win several DREAM challenges. For the considered model, the computation time of forward sensitivities is comparable in Data2Dynamics and AMICI. Therefore, we expect that Data2Dynamics would perform similarly to the trust-region reflective algorithm coupled to forward sensitivity analysis. We evaluated the performance for the model of ErbB signaling based on 100 multi-starts which were initialized at the same initial points for both optimization methods. For 41 out of 100 initial points the gradient could not be evaluated due to numerical problems. These optimization runs are omitted from all further analysis. To limit the expected computation time to a bearable amount, we allowed a maximum of 10 iterations for the forward sensitivity approach and 500 iterations for the adjoint sensitivity approach. As the previously observed speedup in gradient computation was roughly 48-fold, we expected this setup to yield similar computation times for both approaches. We found that for the considered number of iterations, both approaches perform similarly in terms of objective function value compared across iterations (Fig 4a and 4b). However, the computational cost of one iteration was much lower for the optimizer using adjoint sensitivity analysis. Accordingly, given a fixed computation time, the interior-point method using adjoint sensitivities outperforms the trust-region method employing forward sensitivities and the FIM (Fig 4c and 4d). In the allowed computation time, the interior-point algorithm using adjoint sensitivities could reduce the objective function by up to two orders of magnitude (Fig 4c). This was possible although many model parameters seem to be non-identifiable (see S1 Supporting Information Section 4), which can cause problems. To quantify the speedup of the optimization using adjoint sensitivity analysis over the optimization using forward sensitivity analysis, we performed a pairwise comparison of the minimal time required by the adjoint sensitivity approach to reach the final objective function value of the forward sensitivity approach for the individual starting points (Fig 4e). The median speedup achieved across all multi-starts was 54 (Fig 4f), which is similar to the 48-fold speedup achieved in the gradient computation. The availability of the Fisher Information Matrix for forward sensitivities did not compensate for the significantly reduced computation time achieved using adjoint sensitivity analysis. This could be due to the fact that the adjoint sensitivity based approach, being able to carry out many iterations in a short time-frame, can build a reasonable approximation of the Hessian relatively quickly. In summary, this application demonstrates the applicability of adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. Possessing accuracy similar to forward sensitivities but improved scalability, adjoint sensitivity analysis increases optimizer efficiency. For the model of ErbB signaling, optimization using adjoint sensitivity analysis outperformed optimization using forward sensitivity analysis. Mechanistic mathematical modeling at the genome scale is an important step towards a holistic understanding of biological processes. To enable modeling at this scale, scalable computational methods are required which are applicable to networks with thousands of compounds. In this manuscript, we present a gradient computation method which meets this requirement and which renders parameter estimation for large-scale models significantly more efficient. Adjoint sensitivity analysis, which is extensively used in other research fields, is a powerful tool for estimating the parameters of large-scale ODE models of biochemical reaction networks. Our study of several benchmark models with up to 500 state variables and up to 1801 parameters demonstrated that adjoint sensitivity analysis provides accurate gradients in a computation time which is much lower than that of established methods and effectively independent of the number of parameters. To achieve this, the adjoint state is computed using a piecewise continuous backward differential equation. This backward differential equation has the same dimension as the original model, yet the computation time required to solve it is usually slightly larger. As a result, finite differences and forward sensitivity analysis might be more efficient if sensitivities with respect to only a few parameters are required. The same holds for alternatives like complex-step derivative approximation techniques 51 and forward-mode automatic differentiation 28, 52. For systems with many parameters, adjoint sensitivity analysis is advantageous. A scalable alternative might be reverse-mode automatic differentiation 28, 53, which remains to be evaluated for the considered class of problems.","headings":"Introduction, Methods, Results, Discussion","abstract":"Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are still missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a
significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics.","summary":"In this manuscript, we introduce a scalable method for parameter estimation for genome-scale biochemical reaction networks. Mechanistic models for genome-scale biochemical reaction networks describe the behavior of thousands of chemical species using thousands of parameters. Standard methods for parameter estimation are usually computationally intractable at these scales. Adjoint sensitivity based approaches have been suggested to have superior scalability, but a rigorous evaluation has been lacking. We implement a toolbox for adjoint sensitivity analysis for biochemical reaction networks which also supports the import of SBML models. We show by means of a set of benchmark models that adjoint sensitivity based approaches unequivocally outperform standard approaches for large-scale models and that the achieved speedup increases with respect to both the number of parameters and the number of chemical species in the model. This demonstrates the applicability of adjoint sensitivity based approaches to parameter estimation for genome-scale mechanistic models. The MATLAB toolbox implementing the developed methods is available from http:\/\/ICB-DCM.github.
io\/AMICI\/ .","keywords":"applied mathematics, simulation and modeling, algorithms, optimization, genomic databases, mathematics, genome analysis, research and analysis methods, genome complexity, biological databases, differential equations, biochemistry, biochemical simulations, database and informatics methods, genetics, biology and life sciences, physical sciences, genomics, computational biology","toc":null} +{"Unnamed: 0":2222,"id":"journal.pcbi.1005747","year":2017,"title":"Exploiting ecology in drug pulse sequences in favour of population reduction","sections":"To quantify pulse efficiency , we primarily study the minimal population size nmin of our two-species system as a proxy for the extinction probability of the population ., Antibiotic stewardship programmes suggest that for some diseases , such as pneumonia , the immune system can clear the residual infection once the bacterial population size is sufficiently reduced 19 , 20 ., Thus , the minimal population size may be a more relevant parameter than the exact extinction probability itself ., Additionally , the general behaviour of the deterministic system and its observable nmin is more robust than the extinction probability in the stochastic model , as in the latter , the precise form of stochastic noise , or the system size , would be important ., The total population minimum nmin can still serve to gauge the latter , which scales as exp ( nmin ) 21 ., Before introducing our model in detail , we jump ahead and summarise the essential result of our work in Fig 1 , which we discuss later in more detail ., Fig 1 shows the value of the minimal total population size in the configuration space of drug concentration profiles , spanned by the width of the pulse ( or duration of its high stress environment ) on the x-axis and the form of the pulse on the y-axis , which will be explained later ., Practically , these two properties of a pulse\u2014its high stress duration and its form\u2014are likely 
constrained: a very long duration of the high stress environment or a stronger drug might be detrimental for patients due to, for example, a destructive impact on the gut microbiome 22, 23. Similarly, some pulse forms, such as those where the highest possible drug concentration suddenly drops to zero at the pathogen location at the end of the pulse (here denoted by temporal skewness s = 1), may not be realistic for clinical treatments. However, since we do not want to make any assumptions on which parts of the configuration space should be accessible, we examine our system for all possible combinations of pulse forms and durations of the high stress environment. The colour code (symbols) signifies which of the four possible pulse sequences sketched on the right of Fig 1 most effectively reduces the population size. Fig 1 clearly shows that in our simple model setup, different pulse sequences are favourable in different regions of configuration space. The aim of this work is to outline phenomenologically which pulse sequence yields the lowest minimal population for which part of the configuration space in Fig 1, and might therefore be most likely to drive the species to extinction. The best pulse sequence at any one point of Fig 1 tends to be the one that maximally exploits the competition between the more resistant and wild-type species, represented by logistic growth in our model. The simplicity of our approach makes explicit why some references might argue for more moderate treatments involving, e.g., shorter durations or lower drug concentrations, but also what the limitations of models and observables are, and hence why such moderate treatments may not work in real setups. We also examine how the population composition (a measure of how strongly the more resistant species dominates the population) evolves, should such a pulse sequence not lead to extinction. Finally, we highlight the need for microbial experiments in such
temporally varying drug gradients, in order to evaluate the applicability of simple models to real systems. The simplest model that can be used to study the effect of the temporal concentration profile on a heterogeneous population (n) consists of two phenotypically different species, a susceptible “wild type” species (w) and a more tolerant or resistant species (r). Its increased resistance comes at the cost of a reduced fitness in the drug-free environment, which is reflected in a smaller growth rate. As in previous works 24, 25, we assume that the drug is bacteriostatic, that is, it only affects growth, such that growth of each species ceases as soon as its minimum inhibitory concentration (MIC) is exceeded. Thus, in this deterministic population dynamics model for the birth-death process, sketched in the inset of Fig 2, the growth rate of each species η ∈ {r, w} is given by φη(t, n(t)) = Θ(MICη − c(t)) λη (1 − n(t)), where n(t) = w(t) + r(t) is the total number of species at time t, expressed in terms of a carrying capacity which does not require specification as it serves merely as a unit for the population size. The Heaviside step function Θ implies that the growth rate is only non-zero when the drug concentration is lower than the MIC of the corresponding species. The index η ∈ {r, w} refers to the type of species (resistant or wild-type), and λη is its growth rate. The more resistant species has a lower basal growth rate in the drug-free environment, i.e., λr = λw − k ≔ λ − k, where k > 0 can be interpreted as a cost that the resistant species incurs for being more resistant. The logistic growth assumed in this model introduces competition between the wild-type and the resistant species for limited space and\/or resources, and places an upper bound on the population size. We also include a constant death rate δ for both species, meaning that a species decays at rate δ when c(t) > MICη: for these higher concentrations, growth of species η is inhibited and, since switching is negligible, the species can only die. All rates and times in this work are given in units of λ. The time evolution of the population can be studied in terms of the differential equations

ẇ(t) = [φw(t, n(t)) − δ − μw] w(t) + μr r(t)
ṙ(t) = μw w(t) + [φr(t, n(t)) − δ − μr] r(t)     (1)

since for sufficiently large populations stochastic fluctuations can be neglected. The two species are coupled via the competition from logistic growth, as well as via the switching rates μw and μr. Phenotypically more resistant states can be characterised by a reduced growth rate, or complete growth arrest, often known as tolerance or persistence 26-28 (for a recent review, see Ref. 29). Provided that μw,r ≪ δ, which is the case for both mutation and phenotypic switching, our choice of μw = 10^-6 λ and μr = 0 does not qualitatively affect the results. For this entire work, we used exemplary values of δ = 0.1λ and k = 0.1λ, where λ ≡ 1, i.e., we used λ as the basic unit of time. We investigate several other combinations of costs and death rates, in particular combinations with the same death rate but a smaller or larger cost, in S1 Text. There, we show that our results and general statements remain valid for these cases. We chose the values δ = 0.1 and k = 0.1 since this combination allowed us to show the complete and most general picture of possible best pulse shapes in Fig 1. A smaller (yet also biologically possible) fitness cost would not have contained all different scenarios. We ask the reader to refer to S1 Text for more details. Since in our model the only relevant information about the antibiotic concentration is whether it is above or below the MIC of the corresponding species, any pulse sequence is fully determined by the temporal arrangement of low-stress (low) and high-stress (high) environments. In these (low) and (high) environments, the antibiotic concentration is low, MICw < c(t) < MICr, or high, c(t) > MICr, respectively (sketched for a single pulse in the top panel of Fig 2). Before the pulse sequence, the system is in the drug-free environment (free), where the concentration of the antibiotic c(t) is less than either MIC, c(t) < MICw,r. We assume that the (free) environment appears only before, but not during, a pulse sequence. Thus, the (free) environment determines the initial condition of the population, which we take to be at its fixed point, (w(t = 0), r(t = 0)) = (w*(free), r*(free)), shown as the purple dot near the w-axis of the phase space panel (free) of Fig 2. The change in population size and composition in each of these environments is characterised by the flow field in phase space (w, r), shown in the three lower panels of Fig 2.
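The model described above can be integrated numerically with standard ODE solvers. The following is a minimal sketch (not the authors' code) of Eq (1) for a single pulse, using the parameter values stated in the text (λ = 1, k = 0.1λ, δ = 0.1λ, μw = 10^-6 λ, μr = 0); the MIC values and the square concentration profile c(t) are illustrative assumptions, not values from the paper.

```python
# Sketch of the two-species bacteriostatic model, Eq (1), under one drug pulse.
import numpy as np
from scipy.integrate import solve_ivp

LAM, K, DELTA = 1.0, 0.1, 0.1        # basal growth rate, resistance cost, death rate
MU_W, MU_R = 1e-6, 0.0               # switching rates w -> r and r -> w
MIC_W, MIC_R = 1.0, 10.0             # assumed MICs of the two species (arb. units)
TAU, T_R = 60.0, 10.0                # total pulse time and (high) duration

def conc(t):
    """Hypothetical symmetric single pulse (s = 0): (low) / (high) / (low)."""
    t_low = (TAU - T_R) / 2.0
    if t_low <= t < t_low + T_R:
        return MIC_R + 1.0           # (high): above both MICs, both species decay
    return MIC_W + 0.5               # (low): only the resistant species can grow

def rhs(t, y):
    w, r = y
    n = w + r
    c = conc(t)
    phi_w = LAM * (1.0 - n) if c < MIC_W else 0.0        # Theta(MIC_w - c) term
    phi_r = (LAM - K) * (1.0 - n) if c < MIC_R else 0.0  # Theta(MIC_r - c) term
    dw = (phi_w - DELTA - MU_W) * w + MU_R * r
    dr = MU_W * w + (phi_r - DELTA - MU_R) * r
    return [dw, dr]

# Start near the drug-free fixed point: w* ~ 1 - delta/lam, r* ~ 0.
sol = solve_ivp(rhs, (0.0, TAU), [0.9, 1e-6], max_step=0.1, rtol=1e-8)
n = sol.y.sum(axis=0)
print(f"n_min = {n.min():.3e} at t = {sol.t[n.argmin()]:.1f}")
```

Sweeping T_R and the pulse skewness over a grid, and repeating for sequences of N sub-pulses, would produce the kind of configuration-space map of n_min discussed around Fig 1; `max_step` is kept small so that the integrator resolves the discontinuous switches in c(t).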
In the (low) environment, the population flows towards the more resistant species (high r and low w), while in the (high) environment it flows towards the origin, meaning that both species die out exponentially. Thus, the effect of a single pulse on the population crucially depends on the times spent by the system in the (low) and (high) environments. A single pulse involves a single (connected) environment of (high) antibiotic exposure, with a (low) environment potentially preceding or succeeding this (high) environment. In reality, the duration of these (low) environments will depend on the experimental setup or host. A pulse sequence is composed of a succession of identical single pulses. We refer to the total time of the pulse as τ, and to the time during which the system is in the (high) environment as tr. The time periods during which the system is in the (low) environment (initially) before tr and (finally) after tr are denoted by tw(i) and tw(f), respectively. As retaining both time scales would overparameterise the pulse, we combine them into a skewness parameter s = (tw(i) − tw(f)) \/ (τ − tr), signifying how tr is positioned within τ. Skewness s = −1 (s = 1) thus denotes a pulse which starts (ends) with the (high) environment, while skewness s = 0 denotes a symmetric pulse. We compared pulse sequences of up to N = 4 pulses (same tr and s) at constant treatment time τ for all possible skewnesses s and durations tr. Thus, a single pulse with τ = 60 and given tr and s is compared with a sequence of N identical pulses, each defined by τ(N) = 60\/N, tr(N) = tr\/N and s. (Further values of τ are discussed in S1 Text.) The retention of the same skewness within a sequence is motivated by our assumption that the rate of increase or decrease in concentration is primarily determined by the host system of the
bacteria. In this comparison, the ‘best’ pulse sequence for given (tr, s) is defined as the one that yields the lowest population minimum nmin, and so has the highest likelihood of eliminating the pathogen. In situations where the entire configuration space is accessible, the maximal tr yields the overall lowest population minimum, independent of the skewness s. Since in practice the maximal duration tr acceptable for treatments may be limited, it is important to know which pulse sequence is best for each (tr, s), such that we can provide intuition for any situation and parameter choice that may arise. The colour (and corresponding symbols) in Fig 1 show the best pulse sequences (i.e. the best N), and the shade indicates the value of nmin (dark denotes high values). We found that a single pulse is most effective over a large range of parameters (blue in Fig 1). In particular, for each duration in the (high) environment tr, the lowest minimum across all skewnesses is obtained by a single pulse (blue line). This means that in practical situations which allow all different pulse skewnesses, a single pulse with a skewness on the blue line would give the lowest minimum. If, however, the possible pulse skewness is limited due to the host setup, a single pulse may not be the best choice. For ease of comparison, Fig 3a shows nmin for just a single pulse of constant treatment time τ = 60, with the white line marking the lowest minimum (the blue line in Fig 1). In the next paragraph we focus on a single pulse in order to understand which pulse parameters (s, tr) yield this lowest minimum. In the previous section, we learnt which pulse sequences yield the maximal relative reduction in population size for which regions in (tr, s)-space. This minimal population nmin served as a proxy for gauging when extinction would most likely occur in a setting where an immune response can destroy the population once it is already small. Now, we would like to address a complementary question: in the event that extinction does not occur, whether because nmin was too high or because the population remained small for too short a time, what is the effect of such a ‘failed’ pulse on the bacterial population? We already saw that the composition of the population shifts more towards r with each pulse. In terms of real treatments, it might often be better not to pursue treatments which, if unsuccessful, entail a high risk of creating a fully resistant population. In order to evaluate the pulsed treatments associated with the most effective population reductions based on Fig 1, we now focus on the population composition, quantified by the ratio of resistant to wild-type species, r\/w, at the end of the best pulse within the best sequence. Evaluating r\/w at the end of the pulse that yields the global minimum is motivated by the fact that the treatment can be stopped after, but not during, an individual pulse. Fig 5b shows the dependence of r\/w on the pulse configuration, which can be best understood by first considering how the population evolves in (w, r) phase space during the different pulse sequences. In Fig 5a, we show trajectories for three pulse sequences, consisting of a single, two, or three pulses respectively, with τ = 60 and tr = 10. The qualitative behaviour of the phase space trajectory is independent of skewness (in Fig 5a, s = 0.9; the corresponding trajectories for s = −0.5, s = 0.2 and s = 0.5 can be found in Fig D in S1 Text). The colour of the trajectory darkens progressively with every pulse in the sequence. The trajectory starts at the (free) fixed point close to the w-axis, beyond the limits of Fig 5a, and evolves towards the r-axis. Within each sequence, r\/w steadily increases from pulse to pulse, as r progressively takes over the population during the (low) regimes. Thus, in the top left corner of Fig 5b, where the first pulse of the sequence yields the lowest minimum, r\/w is comparatively smaller (lighter shading). Indeed, the higher the N of the best sequence, the lower the ratio in Fig 5b, provided the global minimum is reached in the first pulse (such as in the red region in Fig 1). The region marked with the white line in Fig 1, where intermediate pulses (and not the first pulse) in the sequence yielded the lowest global minimum, also shows up clearly as darker in Fig 5b. Here, r has grown more than for a single pulse, as more pulses were applied before the population minimum was reached. Thus, in our model, when both population reduction and composition are considered, pulse sequences where the minimum is attained in the first pulse are generally more effective than a single long pulse: maintaining the (low) regime in the first pulse for long enough keeps r\/w as well as nmin small. This argument suggests that treating with this first pulse only achieves the best result, and additionally comes with a shorter total treatment duration τ and a shorter tr. We would like to note that even if the population does not die out during this short treatment, multiple pulses of this form could be added in order to give the immune system more opportunities to eliminate the infection. These additional pulses would not drastically change r\/w compared to the composition obtained after a single long pulse of τ = 60. This can be seen also in Fig 5a, where for all pulse sequences the
population composition is similar at the end of the entire sequence ., Experiments with microbes can help investigate minimal antibiotic dosages and treatment times in a well-controlled test tube setup , where the impact of certain treatments on the microbial species itself can be studied without interfering effects , for example from the immune system ., Such microbial experiments have , for example , helped suggest drug combinations or treatment regimens which could retard the development of antibiotic resistance 30\u201333 ., Increasingly , these experiments try to incorporate practically important aspects of heterogeneities in the environment 34 , such as drug concentration gradients ., These gradients can enhance the development of bacterial resistance relative to spatially homogeneous systems 24 , 25 , 35 , 36 , as the more resistant species can successfully compete with a faster growing , but more susceptible wild-type species ., Not enhancing the selective advantage of the more resistant species , in the context of temporal heterogeneities in drug levels , including the duration , frequency and even the concentration profile during a single antibiotic pulse , as studied also in this work , is also important in real treatments 7 , 37 , and is thus within the limits of current experiments ., Our model makes two drastic simplifications compared to real microbial species ., First , we study only two species , instead of a series of possible phenotypically or genotypically different species ., Typically , the evolutionary pathway that leads to a fully resistant species involves a variety of intermediate mutants , even when the mutational paths are constrained 38 ., Since the fitness benefit diminishes with each successive mutation in a series 39 , 40 , we assumed that the strongest effect is conferred by the first mutation , and neglected all higher order mutants ., For phenotypic switches , it is reasonable to consider only two species , corresponding to , for 
example , the expression or repression of a protein 41 , 42 ., Thus , our model should be applicable to experimental systems , while in real patients , different types of tolerant or persister cells might be involved 43 , or even interact 44 ., The second simplification concerns how these two species are affected by the antibiotic ., In our model , we assume that the antibiotic is bacteriostatic , i . e . only affects the growth of the species 24 , 25 ., We also assume that the growth rate of each species falls abruptly to zero when the antibiotic concentration is higher than their respective minimum inhibitory concentration ( MIC ) ( see e . g . 45 ) ., The experimental situation is more complex: cessation of growth is not instantaneous , the space occupied by a dead cell may not immediately become available 42 , and the general use of the MIC as an indicator for slow growth is questionable 46 ., However , an abrupt change in growth rate at MIC has been verified experimentally for E . coli and chloramphenicol 25 ., Additionally , our analysis is based on large numbers rather than extinction events which would be model specific; thus , small changes in the model ( such as reduced but non-zero growth rate ) should still give qualitatively similar results ., Evaluating the effect of different pulse sequences should be possible within a microfluidics setup , where , for example , periodically fluctuating environments have already been investigated for E . 
coli and tetracycline 42 ., We expect that one should be able to observe that the ( low ) environment of drug concentration can be exploited in order to increase the extinction probability when the ( high ) environment is present for as short a time as possible , with the treatment time being constant ., How long this duration of the ( low ) environment is for best exploitation would be sensitive to the growth rate of the more resistant species , which for tetracycline could be generated using a specific promoter , namely the agn43 promoter 42 , 47 , 48 ., Just as shown in Fig 1 , we expect higher N pulse sequences to do better when this duration is optimal for them , but not for the longer pulse ., In addition , further study of E . coli in combination with other antibiotics and more resistant strains should also show this , while being more realistic than our simple model .","headings":"Introduction, Materials and methods, Results, Discussion","abstract":"A deterministic population dynamics model involving birth and death for a two-species system , comprising a wild-type and more resistant species competing via logistic growth , is subjected to two distinct stress environments designed to mimic those that would typically be induced by temporal variation in the concentration of a drug ( antibiotic or chemotherapeutic ) as it permeates through the population and is progressively degraded ., Different treatment regimes , involving single or periodical doses , are evaluated in terms of the minimal population size ( a measure of the extinction probability ) , and the population composition ( a measure of the selection pressure for resistance or tolerance during the treatment ) ., We show that there exist timescales over which the low-stress regime is as effective as the high-stress regime , due to the competition between the two species ., For multiple periodic treatments , competition can ensure that the minimal population size is attained during the first pulse when 
the high-stress regime is short , which implies that a single short pulse can be more effective than a more protracted regime ., Our results suggest that when the duration of the high-stress environment is restricted , a treatment with one or multiple shorter pulses can produce better outcomes than a single long treatment ., If ecological competition is to be exploited for treatments , it is crucial to determine these timescales , and to estimate the minimal population threshold that suffices for extinction ., These parameters can be quantified by experiment .","summary":"The possibilities of lower antibiotic dosages and treatment times , as demanded by antibiotic stewardship programmes , have been investigated with complex mathematical models to account for , for example , the presence of an immune host ., At the same time , microbial experiments are getting better at mimicking real setups , such as those where the drug gradually permeates in and out of the region with the infectious population ., Our work systematically discusses an extremely simple and thus conceptually easy model for an infectious two species system ( one wild-type and one more resistant population ) , interacting via logistic growth , subject to low and high stress environments ., In this model , well-defined timescales exist during which the low stress environment is as efficient in reducing the population as the high stress environment ., We explain which temporal patterns of low and high stress , corresponding to sequences of drug treatments , lead to the best population reduction for a variety of durations of high stress within a constant long low stress environment ., The complexity of the spectrum of best treatments merits further experimental investigation , which could help clarify the relevant timescales ., This could then give useful feedback towards the more complex models of the medical community .","keywords":"antimicrobials, medicine and health sciences, ecology and environmental 
sciences, drugs, immunology, microbiology, antibiotic resistance, probability distribution, mathematics, pharmaceutics, antibiotics, pharmacology, population biology, skewness, research and analysis methods, conservation biology, sequence analysis, antimicrobial resistance, bioinformatics, probability theory, immune system, conservation science, population metrics, species extinction, population size, database and informatics methods, microbial control, biology and life sciences, physical sciences, drug therapy, evolutionary biology, evolutionary processes","toc":null} +{"Unnamed: 0":1021,"id":"journal.pbio.2004015","year":2018,"title":"Manipulating the revision of reward value during the intertrial interval increases sign tracking and dopamine release","sections":"Lesaint and colleagues 1 recently proposed a new computational model\u2014the \u201cSTGT model\u201d ( for sign tracking and goal tracking ) \u2014which accounts for a large set of behavioral , physiological , and pharmacological data obtained from studies investigating individual variation in Pavlovian conditioned approach behavior 2\u20138 ., Most notably , the model can account for recent work by Flagel and colleagues ( 2011 ) that has shown that phasic dopamine ( DA ) release does not always correspond to a reward prediction error ( RPE ) signal arising from a classical model-free ( MF ) system 9 ., In their experiments , Flagel and colleagues trained rats on a classical autoshaping procedure , in which the presentation of a retractable-lever conditioned stimulus ( CS; 8 s ) was followed immediately by delivery of a food pellet ( unconditioned stimulus US ) into an adjacent food cup ., In procedures like this , some rats , known as sign trackers ( STs ) , learn to rapidly approach and engage the CS lever , whereas other rats , known as goal trackers ( GTs ) , learn to approach and enter the food cup upon presentation of the CS lever ., Although both sign and goal trackers learn the CS-US relationship 
equally well , it was elegantly shown that phasic DA release in the nucleus accumbens core ( NAc ) matched RPE signals only in STs 4 ., Specifically , during learning in ST rats , DA release to reward decreased , while DA release to the CS increased ., In contrast , even though GTs acquired a Pavlovian conditioned approach response , DA release to reward did not decline , and CS-evoked DA was weaker ., Furthermore , administration of a DA antagonist blocked acquisition of the ST conditioned response but did not impact the GT conditioned response 4 , 10 ., Several computational propositions have argued that these data could be interpreted in terms of different contributions of model-based ( MB ) \u2014with an explicit internal model of the consequences of actions in the task\u2014and MF\u2014without any internal model\u2014reinforcement learning ( RL ) in GTs and STs during conditioning 1 , 11 ., Nevertheless , only the STGT model predicted that manipulating the intertrial interval ( ITI ) should change DA signaling in these animals: the model suggests that GTs revise the food cup value multiple times during and in between trials during the 90-s ITI ., During the trial , the food cup gains value because reward is delivered; however , visits to the food cup during the ITI do not produce reward , thus reducing the value assigned to the food cup ., This mechanism prevents the progressive transfer of reward value signal in the model from US time to CS time and hence explains the absence of DA RPE pattern in goal trackers ., This aspect of the model predicts that decreasing the ITI should reduce the amplitude of US DA burst ( i . e . 
, less time to negatively revise the value of the food cup and reduce the size of the RPE ) and that higher food cup value should lead to an increase in the tendency to GT in the overall population ., In contrast , increasing the ITI should have the opposite effect ., That is , lengthening the ITI and therefore increasing the number of nonrewarded food cup entries should increase the amplitude of US DA burst ( i . e . , more time to negatively revise the value of the food cup during the ITI and increase the size of the RPE ) and lower the value of the food cup , leading to a decreased tendency to GT and an increased tendency to ST . The latter would be accompanied by a large phasic DA response to the highly salient lever CS , as previously observed in STs 4 ., Here , we tested these predictions by recording DA release in the NAc using fast-scan cyclic voltammetry ( FSCV ) during 10 d of Pavlovian conditioning in rats that had either a short ITI of 60 s or a long ITI of 120 s ., DA release was recorded from NAc ( S1B\u2013S1E Fig ) during a standard Pavlovian conditioned approach behavior task ( S1A Fig ) for 10 d ., Each trial began with the presentation of a lever ( CS ) located to the left or right side of a food cup ( counterbalanced ) for 8 s ., Upon the lever\u2019s retraction , a 45-mg sucrose pellet was delivered into the food cup , independent of any interaction with the lever ., Each behavioral session consisted of 25 trials presented at a random time interval of either 60 s ( n = 7 rats ) or 120 s ( n = 12 rats ) ., To quantify the degree to which rats engaged in sign- versus goal-tracking behavior , we used the Pavlovian Conditioned Approach ( PCA ) index 12 , which comprised the average of three ratios: ( 1 ) the response bias , which is ( Lever Presses \u2212 Food Cup Entries ) \/ ( Lever Presses + Food Cup Entries ) , ( 2 ) the probability ( P ) difference , which is ( Plever \u2212 Preceptacle ) , and ( 3 ) the latency index , which is ( x\u00af Cup 
Entry Latency \u2212 x\u00af Lever Press Latency ) \/ 8 ., All of these ratios range from \u22121 . 0 to +1 . 0 ( similarly for PCA index ) and are more positive and negative for animals that sign track and goal track , respectively ., All behavioral indices were derived from sessions during which DA was recorded ., For the initial analysis described in this section , behavior and DA were examined across all sessions; the development of behavior and DA over training is examined in later sections ., The distributions of behavioral session scores are shown in Fig 1A\u20131D for each group ., As predicted , rats with the 120-s ITI tended to sign track more , whereas rats with the 60-s ITI tended to goal track more ., Across all behavioral indices ( i . e . , response bias , probability , latency , PCA ) , the mean distributions were positive ( biased toward sign tracking ) and significantly different from zero for rats in the 120-s ITI group ( Fig 1A\u20131D , left; Wilcoxon; \u03bc\u2019s > 0 . 17 , p < 0 . 05 ) ., Opposite trends were observed in the 60-s ITI group in that all distributions were negatively shifted from zero ( Fig 1A\u20131D , right; Wilcoxon; response bias: \u03bc = \u22120 . 06 , p = 0 . 06; lever probability: \u03bc = \u22120 . 03 , p = 0 . 58; PCA index: \u03bc = \u22120 . 11 , p = 0 . 097 ) ; however , only the shift in the latency difference distribution reached significance ( Fig 1C , right; Wilcoxon; \u03bc = \u22120 . 10; p < 0 . 05 ) ., Direct comparisons between 60-s and 120-s ITI groups produced significant differences across all four measures ( Wilcoxons; p < 0 . 
01 ) ., Thus , we conclude that lengthening the ITI increased sign-tracking behavior , as predicted by the STGT model 1 , 13 ., Notably , the degree of sign\/goal tracking within the 60-s ITI group was highly dependent on when behavior was examined during the 8-s CS period ., This is illustrated in Fig 1G and Fig 1H , which show percent beam breaks in the food cup ( solid lines ) and lever pressing ( dashed lines ) over the time of the trial ., Consistent with the ratio analysis described above ( Fig 1A\u20131D ) , rats in the 120-s ITI group ( red ) showed sustained pressing ( red dashed ) that started shortly after lever extension and persisted throughout the 8-s CS period , while showing no increase in food cup entries ( red solid ) after CS presentation ( Fig 1G , red solid versus dashed ) ., Although it is clear that rats in the 120-s ITI group sign track more than goal track during the CS period , the relationship between lever pressing and food cup entry was far more dynamic during sessions with 60-s ITIs ( Fig 1G; blue ) ., During 60-s ITI sessions , rats would briefly enter the food cup for approximately 2 s immediately upon CS presentation ( Fig 1G , solid blue ) , before engaging with the lever ( Fig 1G , dashed blue ) ., As a result , lever pressing was delayed in the 60-s ITI group relative to the 120-s ITI group ( Fig 1G and 1H; blue versus red dashed ) ., This suggests that the goal-tracking tendencies described above during the entire 8-s CS period were largely due to the distribution of behaviors observed early in the CS period ., To quantify this observation , we recomputed the PCA index using either the first or the last 4 s of the 8-s CS period ., For the 120-s ITI group , the PCA index was significantly shifted in the positive direction during both the first and last 4 s of the cue period ( i . e . , more sign tracking; Fig 1E and 1F , left; Wilcoxon; \u03bc\u2019s > 0 . 16; p < 0 . 
05 ) ., For the 60-s ITI group , the PCA index was significantly shifted in the negative direction during the first 4 s ( i . e . , more goal tracking; Fig 1E , right; Wilcoxon; \u03bc = \u22120 . 16; p < 0 . 05 ) but not significantly shifted during the last 4 s ( Fig 1F , Wilcoxon; \u03bc = 0 . 01; p = 0 . 81 ) ., Interestingly , this part of the results goes beyond the STGT model , which simplifies time by considering a single behavior\/action during that period ., To further demonstrate sign- and goal-tracking tendencies over the 8-s cue period and the differences between groups , we simply subtracted 60-s ITI lever pressing and food cup entries from 120-s ITI lever pressing ( Fig 1I; orange ) and food cup entries ( Fig 1I; green ) , respectively ., Shortly after cue onset , the green line representing the difference between 120-s and 60-s ITI food cup entries dropped significantly below zero ., Throughout the cue period ( 8 s ) , there were more contacts with the food cup in sessions with a 60-s ITI compared with the 120-s ITI group ( green tick marks represent differences between 120-s and 60-s ITI across sliding 100-ms bins; t test; p < 0 . 05 ) ., For lever pressing ( orange ) , values were constantly higher shortly after the cue for the first half of the cue period ( orange tick marks represent differences between 120-s and 60-s ITI across sliding 100-ms bins; t test; p < 0 . 
05 ) , indicating that there were more contacts with the lever in sessions with a 120-s ITI compared with the 60-s ITI group early in the cue period ., The behavioral data described above globally support model predictions that increasing and decreasing the ITI would produce more and less sign tracking , respectively ., Nevertheless , they also pave the way for improvements of the model by showing a rich temporal dynamic of behavior during the trial , rather than the single behavioral response per trial simulated in the model ., By plotting lever presses and food cup entries over time , we see that sometimes rats initially go to the lever and then go to the food cup , or vice versa ., In contrast , the model was designed to account only for the initial action performed by rats ., This was sufficient to account for the main results of the present study ., Nevertheless , it would be interesting to extend the model to enable it to account for different decisions made sequentially by the same animal during a given trial ., Next , we tested the prediction that longer ITIs would elevate DA release to the US , while shorter ITIs would reduce DA release to the US ., The average DA release over all sessions for the 60-s and 120-s groups is shown in Fig 2A ., Rats in the 120-s ITI group exhibited significantly higher DA release to the CS and the US relative to rats in the 60-s ITI group ( CS t test: t = 2 . 99 , df = 178 , p < 0 . 05; US t test: t = 3 . 07 , df = 178 , p < 0 . 05 ) ., In the 120-s ITI group , DA release to both the CS and the US was significantly higher than baseline ( CS t test: t = 14 . 77 , df = 119 , p < 0 . 05; US t test: t = 4 . 79 , df = 119 , p < 0 . 05 ) ; however , in the 60-s ITI group , this was only true during CS presentation ( t test: t = 7 . 34 , df = 59 , p < 0 . 05 ) ; DA release at the US was not different than baseline ( t test: t = 0 . 99 , df = 59 , p = 0 . 
33 ) ., Similar results were obtained when averaging across sessions within each rat and then averaging across rats ( Fig 2B ) ; DA release was higher during the CS and US for rats in the 120-s ITI group ( CS t test: t = 1 . 87 , df = 17 , p < 0 . 05; US t test: t = 1 . 83 , df = 17 , p < 0 . 05 ) and was higher than baseline for both periods ( CS t test: t = 6 . 15 , df = 11 , p < 0 . 05; US t test: t = 2 . 16 , df = 11 , p < 0 . 05 ) , whereas DA release was only significantly higher during the CS period for rats in the 60-s ITI group ( CS t test: t = 6 . 68 , df = 6 , p < 0 . 05; US t test: t = 0 . 70 , df = 6 , p = 0 . 26 ) ., These results are in line with the STGT model , which predicted that reducing ITI duration would prevent the downward revision of the food cup value and hence would permit the high predictive value associated with the food cup to produce a DA response at CS but not US , consistent with the DA RPE hypothesis 9 ., Conversely and also consistent with model predictions , DA release during sessions with the longer ITI was significantly higher during US delivery because there were more positive RPEs , which may result from the positive surprise associated with being rewarded in a food cup whose value has been more strongly decreased during multiple visits to the food cup during long ITIs ., Nevertheless , the increased DA burst at the time of the CS indicates an even more complex process that goes beyond model predictions ., All of this suggests that DA release should be positively correlated with the time spent breaking the beam in the food cup during the ITI ., To test this hypothesis , we computed how much time was spent in the food cup during the ITI for each session ., This was done by determining the total number of beam breaks within each ITI ( 10-ms resolution ) and then averaging over trials to determine each session mean ., Importantly , the ITI time did not vary across sessions within each group , and the analysis was performed 
separately for the two groups ( 60-s group and 120-s group ) ., Thus , any correlation between DA and food cup interaction time during the ITI cannot reflect a correlation between DA and ITI time ., As expected , rats in the 120-s ITI group spent significantly more time in the food cup than did rats in the 60-s ITI group ( 120-s ITI group = 15 . 1 s; 60-s ITI group = 6 . 8 s; t test: t = 4 . 91 , df = 178 , p < 0 . 05 ) ., For both groups , there was a significant positive correlation between average time spent in the food cup during the ITI and DA release during the reward period ( Fig 2C , 120-s ITI: r2 = 0 . 12 , p < 0 . 05; Fig 2D , 60-s ITI: r2 = 0 . 08 , p < 0 . 05 ) ., During the cue period for the 120-s ITI group , but not the 60-s ITI group , there was also a positive correlation ( Fig 2E , 120-s ITI: r2 = 0 . 04 , p < 0 . 05; Fig 2F , 60-s ITI: r2 = 0 . 01 , p = 0 . 36 ) ., Finally , when examining data collapsed across both groups , there was a significant positive correlation during both cue and reward epochs ( Cue: r2 = 0 . 05 , p < 0 . 05; Reward: r2 = 0 . 14 , p < 0 . 05 ) ., Thus , we conclude that DA release to the CS and US tended to be higher the longer rats visited the food cup during the ITI ., In the analysis above , we averaged DA release and behavior from all recording sessions ., Next , we asked how behavior and DA release patterns evolved with training ., As a first step to addressing this issue , we recomputed the PCA analysis for the first and last 5 d of training ., For the 60-s ITI group , the PCA index distribution was significantly shifted in the negative direction ( i . e . , goal tracking ) during the first five sessions ( Wilcoxon; \u03bc = \u22120 . 38 , p < 0 . 05 ) but not in the last five sessions ( Wilcoxon; \u03bc = 0 . 15 , p = 0 . 
07 ) ., Thus , early in training , rats with the 60-s ITI exhibited goal tracking more than sign tracking but did not fully transition to sign tracking , at least when we averaged over the last five sessions ., For the 120-s ITI group , the PCA index was significantly shifted in the positive direction ( i . e . , sign tracking ) during the last five sessions ( Wilcoxon; \u03bc = 0 . 28 , p < 0 . 05 ) but was not during the first five sessions ( Wilcoxon; \u03bc = 0 . 10 , p = 0 . 11 ) ., Thus , when the ITI was long ( 120 s ) , rats sign and goal tracked in roughly equal proportions during the first five sessions but tended to sign track significantly more during later sessions ., To more accurately pinpoint when during training rats in the 120-s group shifted toward sign tracking , we examined the four distributions individually for each session ., Sign tracking became apparent during session 4 , when the latency and lever probability distributions first became significant ( Wilcoxon; latency: \u03bc = 0 . 28 , p < 0 . 05; lever probability: \u03bc = 0 . 40 , p < 0 . 05 ) ., To visualize changes in behavior and DA release that occurred before and after session 4 , we plotted food cup beam breaks , lever pressing , and DA release averaged across the first 3 d of training and across days 4\u201310 ( Fig 3; for visualization of behavior during each of the 10 sessions , please see S4 Fig ) ., Consistent with the distributions of behavioral indices described above , the 120-s ITI group showed roughly equal food cup entries and lever pressing during the CS period in the first 3 d of training ( Fig 3A , thin pink solid versus thin pink dashed ) , whereas later in training ( days 4\u201310; red ) , there was a strong preference for the lever ( Fig 3A; thick red dashed versus thick red solid ) ., Indeed , the distribution of PCA indices averaged during days 4\u201310 was significantly shifted in the positive direction ( Wilcoxon; \u03bc = 0 . 27 , p < 0 . 
05 ) ., These results suggest that in sessions in which the ITI was set at 120 s , sign-tracking tendencies developed relatively quickly during the first several recording sessions ( Fig 3A and 3C ) ., This is consistent with the STGT model , which predicted that increasing the ITI duration would increase the global tendency to sign track within the population and would thus speed up the acquisition of lever pressing behavior 1 , 13 ., In contrast , the model also predicted that reducing the ITI duration would increase the global tendency to goal track and would thus slow down the acquisition of lever pressing behavior ., Interestingly , the behavior of the 60-s ITI group was far more complicated than behavior of the 120-s group , with changes in goal and sign tracking occurring over training and CS presentation time ., Early in training , rats in the 60-s ITI group clearly visited the food cup ( Fig 3B , solid turquoise ) more than they pressed the lever ( Fig 3B , dashed turquoise ) ; food cup entries increased shortly after presentation of the CS and continued throughout the CS period ( Fig 3B , solid turquoise ) ., During later sessions ( i . e . , 4\u201310 ) , rats in the 60-s ITI group still entered the food cup upon CS presentation\u2014which corresponds to the goal-tracking behavior predicted by the model in this case\u2014but this only lasted about 2 s , at which point they transitioned to the lever ( Fig 3B and 3D ) ., In sessions 4\u201310 , none of the distributions of behavioral indices were significantly shifted from zero when examining the CS period as a whole ( Wilcoxons; Response bias: \u03bc = 0 . 27 , p = 0 . 83; Latency: \u03bc = \u22120 . 05 , p = 0 . 13; Probability: \u03bc = 0 . 08 , p = 0 . 16; PCA: \u03bc = 0 . 02 , p = 0 . 82 ) or during the first half of the CS period ( Response bias: \u03bc = \u22120 . 11 , p = 0 . 027; Probability: \u03bc = \u22120 . 04 , p = 0 . 18; PCA: \u03bc = \u22120 . 07 , p = 0 . 
25 ) ; however , when examining the last 4 s of the CS period , distributions were significantly shifted in the positive direction ( Wilcoxons; Response bias: \u03bc = 0 . 32 , p < 0 . 05; Probability: \u03bc = 0 . 28 , p < 0 . 05; PCA: \u03bc = 0 . 24 , p < 0 . 05 ) ., Together , this suggests that rats in the 60-s groups were largely goal tracking early in training and that over the course of training , goal-tracking tendencies did not disappear but became focused to early portions of the CS period , while sign-tracking behavior developed toward the end of the CS period , later in training ( Fig 3B and 3D; S4 Fig ) ., Interestingly , these results go again beyond the computational model and suggest that it should be extended to account for within-trial behavioral variations ., Behavioral analyses clearly demonstrate that manipulation of the ITI impacts sign- and goal-tracking behavior and that both groups learned that the CS predicted reward ( Fig 3; S4 Fig ) ., Next , we determined how DA patterns changed during training ., Fig 3E and 3F illustrate DA release averaged across the first 3 d and days 4\u201310 of sessions with 120-s and 60-s ITIs , respectively , and DA release for each session is plotted in Fig 3G and 3H ., As shown previously , both groups started with modest DA release to both the CS and US during the first session ( Fig 3G and 3H; trial 1 ) ., For the 120-s ITI group , DA release was significantly higher to CS presentation later ( red ) compared to earlier ( pink ) in learning ( Fig 3E; t test: t = 2 . 51 , df = 119 , p < 0 . 05 ) ., DA release during US delivery did not significantly differ between early and late phases of training ( t test: t = 1 . 27 , df = 119 , p = 0 . 
21 ) ., Hence , similarly to the sign trackers in the original study of Flagel and colleagues ( 2011 ) , the increase of DA response to the CS is consistent with the RPE hypothesis ., The difference is that here , the increase in the time available to down-regulate the value associated with the food cup during the ITI may have resulted in a remaining positive surprise at the time of reward delivery , hence preventing the progressive decrease of response to the US across training , in accordance with the model predictions ., In the 60-s ITI group ( Fig 3F and 3H ) , DA release to the US was initially high during the first 3 d ( turquoise ) but declined during days 4\u201310 ( blue ) ., Directly comparing DA release during the first 3 d with the remaining days revealed significant differences during the US period ( t test: t = 1 . 14 , df = 59 , p < 0 . 05 ) but not the CS period ( t test: t = 0 . 08 , df = 59 , p = 0 . 93 ) ., As a consequence , their post-training DA pattern\u2014with a high response to the CS but no response to the US ( Fig 3F , blue ) \u2014now resembles the traditional RPE pattern ( i . e . 
, high CS DA and low US DA after learning ) ., This is a clear demonstration that the DA RPE signal can be observed in goal trackers with a manipulation of the ITI , as predicted by the STGT model ., In a final analysis , we examined DA patterns during pure sign and goal tracking within each ITI group ., For this analysis , we examined only sessions during which either the lever was pressed or the food cup was entered during the cue period ., As shown previously 4 , phasic DA responses were apparent during both the CS and US during sessions with goal tracking ( Fig 3I and 3J , GT = orange ) ., In addition to replicating previous results , the figure also illustrates modulation of the DA pattern in line with model predictions ., Specifically , it shows that the DA response to the US was higher in the 120-s group than in the 60-s group during both sign- and goal-tracking sessions ( sign-tracking: t test , t = 3 . 66 , df = 25 , p < 0 . 05; goal-tracking: t test , t = 1 . 44 , df = 29 , p = 0 . 16 ) and that the DA response to the US was significantly lower than the DA response to the CS in GTs of the 60-s group ( t = 3 . 87 , df = 17 , p < 0 . 
05 ) , suggesting that even though there is still a DA response to the US , shortening the ITI reduced the US-evoked DA response compared with what has been previously reported 4 ., The results reported here support the STGT model\u2019s predictions that manipulating the ITI would impact the proportion of sign-tracking ( STs ) and goal-tracking ( GTs ) behaviors as well as DA release ., It predicted that shortening the ITI would result in fewer negative revisions of the food cup value and reduce the US DA burst ., It also predicted that the resulting higher food cup value would lead to an increase in the tendency to GT across sessions 1 , 13 , which it did ., The model also predicted that lengthening the ITI would have the opposite effect ., We found that there were significantly more food cup entries during the ITI for the 120-s ITI group and that they showed an increased tendency to sign track ., Furthermore , we show that the time spent in the food cup during the ITI was positively correlated with the amplitude of the CS and US DA bursts for the 120-s ITI group , which is consistent with the hypothesis that lengthening the ITI to allow for more time to decrease the value of the food cup would result in stronger positive RPEs during the trial ., Consistent with the model , we claim that increased sign tracking and DA release result from the additional time spent in the food cup during the ITI ., Indeed , these were positively correlated ., Importantly , this impact of ITI manipulations had not been predicted by other computational models of sign trackers and goal trackers 11 , 14 ., However , several alternative explanations should be considered , which may have also contributed to observed changes in behavior and DA release ., For example , it has been shown that rewards delivered after longer delays yield higher DA responses to the US 15\u201317 and that uncertain reward increases sign tracking 18 ., Although the reward was highly predictable in our study ( i . 
e . , always delivered 8 s after cue onset ) , it is possible that uncertainty associated with US delivery impacted behavior and DA release ., Notably , it is likely that these factors are intertwined in that manipulating delays and certainty impacts the number of visits to the food cup that are not rewarded , thus leading to a negative revision of the food cup value , as predicted by the model ., Future work that modifies food cup entries without manipulating ITI length and reward uncertainty is necessary to determine the unique contributions that these factors play in goal-\/sign-tracking behavior and associated DA release ., Another explanation for increased sign tracking and DA release in the rats in the 120-s ITI group is the possibility that they learned faster than rats in the 60-s ITI group because of differing ratios between US presentations and the interval between the CS and US in that the shorter the CS-US interval relative to the ITI , the faster the learning 19 ., In the context of our study it is difficult to determine which group learned faster ., Although rats in the 120-s ITI group did lever press more often early in training , rats in the 60-s ITI group made more anticipatory food cup entries during the cue period prior to reward delivery ., Furthermore , both food cup entries and lever pressing were present in the first behavioral session ( S4 Fig ) ., Thus , both groups appear to learn the CS-US relationship at similar speeds , but it is just that the behavioral readout of learning differs across groups , making it difficult to determine which group learned the association faster ., In our opinion , our results suggest that rats in both groups learned at similar rates , much like sign and goal trackers do; however , future experiments and iterations of the model are necessary to determine what role the US-US\/CS-US ratio plays in sign\/goal tracking and corresponding DA release ., Standard RL 20 is a widely used normative framework for modelling 
learning experiments 21 , 22 ., To account for a variety of observations suggesting that multiple valuation processes coexist within the brain , two main classes of models have been proposed: model-based ( MB ) and model-free ( MF ) models 23 , 24 ., MB systems employ an explicit , although approximate , internal model of the consequences of actions , which makes it possible to evaluate situations by forward inference ., Such systems best explain goal-directed behaviors and rapid adaptation to novel or changing environments 25\u201328 ., In contrast , MF systems do not rely on internal models but directly associate stored ( cached ) values with actions or states based on experience , such that higher-valued situations are favored ., Such systems best explain habits and persistent behaviors 28\u201330 ., Learning in MF systems relies on a computed reinforcement signal , the RPE ( actual minus predicted reward ) ., This signal has been shown to correlate with the phasic response of midbrain DA neurons , which increase and decrease firing to unexpected appetitive and aversive events , respectively 9 , 31 ., Recent work by Flagel and colleagues 4 has questioned the validity of classical MF RL methods in Pavlovian conditioning experiments ., The autoshaping procedure reported in that article was nearly identical to the one presented here in that a retractable-lever CS was presented for 8 s , followed immediately by delivery of a food pellet into an adjacent food cup ., The only major difference was that the length of the ITI in their study was 90 s ., In their study , they showed that in STs , phasic DA release in the NAc matched RPE signaling ., That is , the DA burst to reward that was present early in learning transferred to the cue after learning ., They also showed that DA transmission was necessary for the acquisition of sign tracking ., In contrast , despite the fact that GTs acquired a Pavlovian conditioned approach response , this was not accompanied by the expected RPE-like DA signal , nor
was the acquisition of the goal-tracking conditioned response blocked by administration of a DA antagonist ( see also Danna and Elmer 10 ) ., To account for these and other results , Khamassi and colleagues 1 proposed a new computational model\u2014the STGT model\u2014that explains a large set of behavioral , physiological , and pharmacological data obtained from studies on individual variation in Pavlovian conditioned approach 2\u20138 ., Importantly , the model can reproduce previous experimental data by postulating that both MF and MB learning mechanisms occur during behavior , with simulated interindividual variability resulting from a different weight associated with the contribution of each system ., The model accounts for the full spectrum of observed behaviors , ranging from one extreme\u2014sign tracking , associated with a small contribution of the MB system in the model\u2014to the other\u2014goal tracking , associated with a high contribution of the MB system in the model 12 ., Above all , by allowing the MF system to learn different values associated with different stimuli , depending on the level of interaction with those stimuli , the model potentially explains why the lever CS and the food cup might acquire different motivational values in different individuals , even when they undergo the same training in the same task 26 ., The STGT model explains why the RPE-like dopaminergic response was observed in STs but not GTs\u2014the proposition being that GTs would focus on the reward-predictive value of the food cup , which would have been down-regulated during the ITI ., Furthermore , the STGT model explains why inactivating DA in the core of the nucleus accumbens or in the entire brain blocks specific components and not others ., Here , the model proposes that learning in GTs relies more heavily on the DA-independent MB system , and thus DA blockade would not impair learning in these individuals 4 , 8 ., More importantly , the model has led to a
series of new experimentally testable predictions that assess and strengthen the proposed computational theory and allow for a better understanding of the DA-dependent and DA-independent mechanisms underlying interindividual differences in learning 1 , 13 ., The key computational mechanism in the model is that both the approach and the consumption-like engagement observed in sign trackers ( STs ) on the lever and in goal trackers ( GTs ) on the food cup result from the acquisition of incentive salience by these reward-predicting stimuli ., Acquired incentive salience is stimulus specific: stimuli most predictive of reward will be the most \u201cwanted\u201d by the animal ., The MF system attributes accumulating salience to the lever or the food cup as a function of the simulated DA phasic signals ., In the model simulations , because the food cup is accessible but not rewarding during the ITI , a simulated negative DA error signal occurs each time the animal visits the food cup and does not find a reward ., The food cup therefore acquires less incentive salience compared with the lever , which is only presented prior to reward delivery ., In simulated STs , behavior is highly subject to incentive salience because of a higher weight attributed to the MF system than to the MB","headings":"Introduction, Results, Discussion, Materials and methods","abstract":"Recent computational models of sign tracking ( ST ) and goal tracking ( GT ) have accounted for observations that dopamine ( DA ) is not necessary for all forms of learning and have provided a set of predictions to further their validity ., Among these , a central prediction is that manipulating the intertrial interval ( ITI ) during autoshaping should change the relative ST-GT proportion as well as DA phasic responses ., Here , we tested these predictions and found that lengthening the ITI increased ST , i . e . 
, behavioral engagement with conditioned stimuli ( CS ) and cue-induced phasic DA release ., Importantly , DA release was also present at the time of reward delivery , even after learning , and DA release was correlated with time spent in the food cup during the ITI ., During conditioning with shorter ITIs , GT was prominent ( i . e . , engagement with food cup ) , and DA release responded to the CS while being absent at the time of reward delivery after learning ., Hence , shorter ITIs restored the classical DA reward prediction error ( RPE ) pattern ., These results validate the computational hypotheses , opening new perspectives on the understanding of individual differences in Pavlovian conditioning and DA signaling .","summary":"In classical or Pavlovian conditioning , subjects learn to associate a previously neutral stimulus ( called \u201cconditioned\u201d stimulus; for example , a bell ) with a biologically potent stimulus ( called \u201cunconditioned\u201d stimulus; for example , a food reward ) ., In some animals , the incentive salience of the conditioned stimuli is so strong that the conditioned response is to engage the conditioned stimuli instead of immediately approaching the food cup , where the predicted food will be delivered ., These animals are referred to as \u201csign trackers . 
\u201d, Other animals , referred to as \u201cgoal trackers , \u201d proceed directly to the food cup upon presentation of the conditioned stimulus to obtain reward ., Understanding the mechanisms by which these divergent behaviors develop under identical environmental conditions will provide powerful insight into the neurobiological substrates underlying learning ., Here , we test predictions made by a recent computational model that accounts for a large set of studies examining goal-\/sign-tracking behavior and the role that dopamine plays in learning ., We show that increasing the time between trials leads to greater development of a sign-tracking response and to greater release of dopamine in the nucleus accumbens ., During conditioning with shorter intertrial intervals , goal tracking was more prominent , and dopamine was released upon presentation of the conditioned stimulus but not during the time of reward delivery after training ., Thus , shorter intertrial intervals restored the classical dopamine reward prediction error pattern ., Our results validate the computational hypothesis and open the door for understanding individual differences in classical conditioning .","keywords":"learning, medicine and health sciences, neurochemistry, chemical compounds, classical conditioning, vertebrates, social sciences, conditioned response, neuroscience, animals, mammals, learning and memory, organic compounds, hormones, animal models, surgical and invasive medical procedures, model organisms, cognitive psychology, mathematics, functional electrical stimulation, probability distribution, membrane electrophysiology, experimental organism systems, amines, neurotransmitters, bioassays and physiological analysis, catecholamines, dopamine, research and analysis methods, animal studies, behavior, chemistry, electrophysiological techniques, short reports, probability theory, biochemistry, behavioral conditioning, rodents, psychology, eukaryota, electrode
recording, organic chemistry, biogenic amines, biology and life sciences, physical sciences, cognitive science, amniotes, organisms, rats","toc":null} +{"Unnamed: 0":1893,"id":"journal.pgen.1007400","year":2018,"title":"Spontaneous gain of susceptibility suggests a novel mechanism of resistance to hybrid dysgenesis in Drosophila virilis","sections":"Transposable elements are selfish elements that have the capacity to proliferate in genomes even if they are harmful 1 ., In response to this threat , mechanisms of small-RNA based silencing have evolved to limit TE proliferation ., In the germline of animals , Piwi-interacting RNAs ( piRNAs ) function to maintain TE repression through both transcriptional and post-transcriptional silencing 2 ., Critically , the epigenetic and transgenerational nature of piRNA-mediated TE control has been revealed by syndromes of hybrid dysgenesis ( HD ) 3 , 4 ., HD is a syndrome of TE-mediated sterility that occurs when males carrying active copies of TEs are crossed with females where such copies are rare or absent 5\u20137 ., The hybrid dysgenesis syndrome ( HD ) is defined as a combination of various genetic disorders such as genic mutations and chromosomal aberrations that lead to sterility in the progeny of intraspecific crosses 5\u20137 ., Sterility during HD is mediated by mobilization of certain TE families carried by the paternal genome and absent in the maternal genome 6 , 7 ., To date , there are several independent HD systems in Drosophila melanogaster ., The most well described are the I-R and P-M systems , controlled by the I-element ( a non-LTR ( long terminal repeat ) retrotransposon ) and the P-element ( a DNA transposon ) , respectively 6\u20138 ., Activation of paternally inherited TEs is explained by the fact that only the female maintains transgenerational TE repression via piRNAs transmitted through maternal deposition ., When the female genome lacks certain TE families , female gametes also lack piRNAs that 
target these families ., Thus , TE families solely transmitted through the male germline become de-repressed in the absence of repressive piRNAs inherited from the mother 2\u20134 , 9 ., HD in D . virilis was initially observed when males of laboratory strain 160 and females of wild-type strain 9 were crossed ., The F1 progeny exhibited up to 60% sterility , while sterility in the progeny of reciprocal crosses did not exceed 5\u20137% 10 ., Similar to the D . melanogaster P-M system , the sterility of hybrids from dysgenic crosses is apparently the result of abnormal development ( atrophy ) of male and female gonads 10\u201312 ., By analogy with the P-M system , strain 160 and strain 9 were called \u201cP-like\u201d ( P ) and \u201cM-like\u201d ( M ) , respectively ., In contrast to I-R and P-M systems , the study of HD in D . virilis has demonstrated that multiple unrelated TEs belonging to different families are mobilized in dysgenic progeny 13\u201316 ., The TEs presumably causal of dysgenesis and absent in M-like strain 9 include Penelope ( a representative of the Penelope-like element ( PLE ) superfamily ) , Paris and Polyphemus ( DNA transposons ) , as well as a non-LTR retrotransposon Helena 13\u201316 ., A typical M-like strain 9 contains only diverged inactive remnants of these TEs ., Additionally , piRNAs targeting Penelope , Paris , Polyphemus and Helena are highly abundant in the germline of strain 160 and are practically absent in strain 9 17 , 18 ., Thus , it has been suggested that the combined activity of these four asymmetric TEs , present only in strain 160 , underlies gonadal atrophy and other manifestations of HD in D . virilis ., This large asymmetry in TE abundance between strains suggests that HD in D . virilis may be considered a model for understanding the consequences of intermediate divergence in TE profiles within a species ., Nonetheless , recent studies have called into question whether the standard model of HD\u2013described in D . 
melanogaster where sterility is caused by the absence of maternal piRNAs that target specific inducing TE families\u2014applies in D . virilis 3 , 4 , 18 , 19 ., This is because several \u201cneutral\u201d ( N ) strains exhibit \u201cimmunity\u201d to HD in dysgenic crosses but lack maternal piRNA corresponding to Penelope elements , the presumptive primary driver of dysgenesis 19 ., If Penelope is a key driver of dysgenesis , how do neutral strains exhibit immunity in the absence of maternally transmitted Penelope piRNA ?, Two fundamental issues arise ., First , as observed in D . melanogaster , is there a single major element that serves as a key driver of HD in D . virilis ?, Second , do N-strains confer their resistance to HD solely through maternally provisioned piRNA or through alternate mechanisms ?, Despite significant progress in understanding the morphogenetic events occurring during gametogenesis and embryogenesis in the progeny of D . virilis dysgenic crosses , these questions still need to be answered 11 , 18 ., To answer these questions , we used small RNA deep-sequencing and qPCR to perform a comparative survey of maternal piRNA profiles across several \u201cneutral\u201d strains of different origin that did not quite fit the HD paradigm developed in the previous studies of this phenomenon 3 , 4 , 9 , 19 ., Additionally , we developed transgenic strains containing a presumptive causative TE and did not detect a cytotype change after its propagation in the genome ., The accumulated data failed to pinpoint a single TE or specific set of TEs responsible for their \u201cimmunity\u201d and support a model in which resistance to TE-mediated sterility during dysgenesis may be achieved by a mechanism that varies across strains ., We thus propose an alternate model to explain resistance to TE-mediated sterility in D .
virilis ., Rather than resistance being explained solely by maternal piRNAs that target inducing TE families , the chromatin profile of repeats in the maternal genome may confer general immunity to the harmful effects of TE mobilization ., To characterize the piRNA profiles across diverse strains that vary in resistance to HD , we performed small RNA sequencing on six D . virilis strains obtained from various sources ( see Materials and Methods ) and maintained in our laboratory for more than 20 years ., These strains exhibit different levels of gonadal atrophy when crossed with males of P-like strain 160 ., Two of them ( 9 and 13 ) represent strong M-strains ( they exhibit up to 65% gonadal atrophy in the F1 progeny of the dysgenic cross ) and four ( 140 , Argentina , Magarach and 101 ) behave as \u201cneutral\u201d or N-strains when crossed with strain 160 males and , hence , do not exhibit gonadal atrophy ( less than 10% atrophied gonads ) in such crosses ( Fig 1 ) ., Previous studies suggest the Penelope element as a key driver of HD in D . virilis 15 , 20 , 21 ., However , while N-strains 140 and Argentina both carry Penelope elements , two other N-strains , Magarach and 101 , contain neither functional Penelope copies nor Penelope-derived small RNAs 19 ., This observation calls into question the key role of Penelope as a factor determining HD in D . virilis and suggests that piRNAs targeting other asymmetric TEs , e . g . Polyphemus , Helena and possibly Paris , may provide immunity to HD 14 , 15 , 17 , 21 , 22 ., To explore this possibility , we performed a comparative analysis of both classes of small RNAs ( piRNAs and siRNAs ) in the ovaries of all selected M- and N-strains using the extended list of TEs and other repeats recently defined in D .
virilis genome 18 ., This analysis indicates that the total repertoire of targets for small RNA silencing in strain 160 ( P ) is significantly larger than in all other studied strains ( Figs 2A , 2B , S1A and S1B ) ., Surprisingly , the global piRNA profile for known D . virilis TEs and other repeats is more similar between strain 160 ( P ) and M-strains ( R ( 160:9 ) = 0 . 83; R ( 160:13 ) = 0 . 74 , Spearman\u2019s correlation coefficient ) than between strain 160 ( P ) and several N-strains ( R ( 160:140 ) = 0 . 71; R ( 160:101 ) = 0 . 7 ) ( Fig 2A and 2B ) ., This suggests the possibility that protection is not mediated by a general maternal piRNA profile , but rather by piRNAs targeting certain specific TEs yet to be identified ., To identify such candidates , we compared sets of piRNA targets distinguishing strain 160 ( P ) from both typical M-strains , 9 and 13 , and obtained a list of ten TEs in common across comparisons ( Fig 2C ) ., These are TEs for which piRNAs are more abundant in strain 160 ( P ) when compared to both M-strains: Polyphemus , Penelope , Paris , Helena , Uvir , Skippy , 190 , 463 , 608 , and 1012 ., However , comparing 160 ( P ) and N-strains , we find that piRNAs from Helena and Skippy are uniquely found at high levels in strain 160 ( P ) ., Thus , if neutrality is conferred by piRNAs that uniformly target the same TE family or families , Helena and Skippy piRNAs are not likely to be required to prevent HD ., However , among the eight remaining candidates , there is no shared family among the neutral strains ( N-strains and 160 ( P ) ) that has a piRNA profile similar across strains ., For example , in contrast to 160 ( P ) , Penelope-derived piRNAs are expressed at lower levels in strain Magarach ( N ) , Polyphemus-targeted piRNAs are expressed at lower levels in strain 101 ( N ) and , finally , Paris-related piRNAs are expressed at low levels in strain Argentina ( N ) and in strain 101 ( N ) ( Fig 2D ) ., Thus , we failed to detect one candidate causative TE
or combinations of certain TEs present in all neutral strains whose piRNAs guarantee immunity to HD ( Fig 2D ) ., This suggests the possibility that maternal protection in crosses with strain 160 ( P ) males may be conferred by different mechanisms across the strains ., A similar comparative analysis of siRNA expression between strain 160 ( P ) and M-strains demonstrated that only siRNAs complementary to the Penelope and Helena elements are absent in the ovaries of strains 9 ( M ) and 13 ( M ) ( S1A and S1B Fig ) ., However , we detected Penelope-homologous siRNAs only in half of the studied neutral strains , i . e . strains Argentina and 140 ( S1C Fig ) ., In the context of immunity to manifestations of the HD syndrome , probably the most important condition is the constant maintenance of effective piRNA production in the germline ., It is well known that ovarian piRNA pools consist of molecules generated by primary and secondary processing mechanisms ., Due to germline expression of Ago3 and Aub proteins necessary for secondary processing ( \u201cping-pong\u201d amplification ) , the germline-specific piRNA pool can be assessed quantitatively by counting \u201cping-pong\u201d pairs 2 , 23 ., We analyzed the \u201cping-pong\u201d signature of piRNAs targeting the selected TEs and showed that these piRNA species contain ping-pong pairs to varying degrees ( S2 Fig ) ., Importantly , all of them exhibit a signature of secondary piRNA processing , indicating that production of these piRNAs takes place in the germline , but each element lacks such a ping-pong signature in one or more of the neutral strains ., In addition , Penelope expression was previously shown to be germline-specific by whole-mount RNA in situ hybridization 24 ., In the present study , using the same technique with the ovaries of P-strain 160 , we confirmed that Paris , Polyphemus and Helena elements exhibit a germline-specific expression pattern as well ( S3 Fig ) ., We further examined the pattern of divergence among
piRNAs that map to the consensus TEs , since piRNAs derived from divergent sequences likely originate from degraded TE insertions ., Among the selected HD-implicated TEs , the ovarian piRNA pool contains a very small amount of Paris-targeting piRNAs , which were detected only in two of the studied N-strains\u2014140 and Magarach ., Interestingly , only 10% of both sense and antisense-oriented piRNAs apparently originate from modern active copies of Paris elements , while the rest of the Paris-complementary piRNAs were produced from ancestral , highly diverged copies ( S4 Fig ) ., The same applies to the Penelope-derived piRNAs in strain 101 ( N ) ., All other piRNA species targeting HD-implicated TEs , especially those in the antisense orientation , were practically identical to the consensus in all studied neutral strains and , hence , apparently originated from active copies of these elements ( S4 Fig ) ., This analysis further indicates that there is no active candidate inducer family , represented by sequence-similar piRNAs , shared across all of the studied neutral strains ., Overall , these data indicate that , in terms of piRNA-mediated protection against HD in D .
virilis neutral strains , there is no general rule in the context of ovarian piRNAs complementary to particular TEs implicated in HD ., In other words , in neutral strains the maternally transmitted piRNA pool may include different amounts of piRNAs corresponding to various TEs , and the repertoire of these TEs often radically differs between strains with the same cytotype ., Syndromes of HD are explained by maternal protection against paternal induction , and Penelope has long been considered the primary driver of paternal induction 18 , 20 , 22 ., In the previous section we demonstrated that maternal piRNAs that target Penelope are not necessary to confer neutrality but , as neutrality may arise through different mechanisms , we sought to determine whether Penelope was sufficient for induction or Penelope piRNAs sufficient for protection ., We thus characterized a simulation of natural invasion through the analysis of two transgenic strains of D .
virilis M-like strain 9 ( the stock is designated w3 ) originally devoid of functional copies of this TE ., Our previous experiments demonstrated that the introduced Penelope underwent active amplification and occupied more than ten sites in the chromosomes of the transgenic strains 19 ., However , at that time ( in 2012 ) we did not detect any Penelope-derived small RNA species in these transgenic strains ., Subsequent to the early analysis performed in 2011\u20132012 , we have now found that Penelope is actively transcribed in these two strains and exhibits steady-state RNA levels equal to or even higher than those in strain 160 ( Fig 3A ) ., We further observed piRNAs in both transgenic strains , indicating that some of the Penelope copies acquired the properties of a piRNA-generating locus ( Fig 3B ) ., Thus , in strain Tf2 the level of piRNAs homologous to Penelope is only half that observed in P-like strain 160 ., The analysis of Penelope-derived piRNAs indicates a distribution of piRNAs along the entire Penelope body and a clear-cut ping-pong signature ( Fig 3B ) ., Similar to strain 160 , more than half of the Penelope-derived piRNAs in both strains originate from active and highly similar Penelope copies with few mismatches to the canonical sequence ( Fig 3C ) ., In contrast , Penelope piRNAs identified in the untransformed M-like strain 9 ( w3 ) are highly divergent and likely derive from inactivated Penelope copies ( termed \u201cOmegas\u201d ) located in heterochromatic regions of the genome 25 , 26 ., Interestingly , the pool of Penelope-derived small RNAs in the transgenic strains consists primarily of piRNAs ., This is in contrast to inducer strain 160 and D .
melanogaster strains transformed with Penelope 19 , where Penelope-derived siRNAs are the major class ( S5 Fig ) ., Surprisingly , both transgenic strains containing multiple Penelope copies and abundant piRNAs behave exactly like the original M-like strain 9 in dysgenic crosses ( Fig 4 ) ., They have the capacity neither to induce HD paternally nor to protect against HD maternally ., Therefore , the introduction of full-size Penelope into an M-like strain accompanied by its propagation , active transcription and piRNA production was not sufficient to modify the cytotype ., These results also indicate that the presence of piRNAs complementary to Penelope in the oocyte is not the only prerequisite to prevent gonadal sterility when crossed with males of P-like strain 160 ., Along these lines , it has been shown recently that the number of P-element and hobo copies per se has very little influence on gonadal sterility , suggesting that HD is not determined solely by the dosage of HD-causative elements 27 ., The above results demonstrate that the maternal piRNAs that target all , or even most , asymmetric TEs that likely cause dysgenesis are not necessary to confer neutral strain status ( Fig 2 ) ., Furthermore , Penelope piRNAs are not sufficient for maternal protection and the presence of active Penelope copies is not sufficient for paternal induction ( Figs 3 and 4 ) ., This raises the question: What are the necessary and sufficient factors of HD in D . virilis ?, Among the analyzed strains , neutral strain 101 represents a special case ., This is due to the fact that the genome of this strain does not produce piRNAs against most of the described HD-implicated TEs ( e . g . Paris , Helena and Polyphemus ) and produces only a very small amount of divergent Penelope-homologous piRNAs ( Figs 2 and S4 ) ., In the course of our long-term monitoring of the gonadal atrophy observed in the progeny of dysgenic crosses involving the P-like strain and various laboratory and geographical strains of D .
virilis , we often observed significant variation over time in the level of sterility in the progeny of the same crosses ., Strikingly , among these strains , we have identified a spontaneous change from a neutral cytotype to an M-like one ., Thus , while the old laboratory strain 101 kept in the Stock Center of the Koltzov Institute of Developmental Biology RAS maintained a neutral cytotype for the whole period of observation ( 2011\u20132017 ) , the same strain kept in our laboratory gradually became an M-like strain ( Fig 5 ) ., We considered the possibility that this shift in cytotype could be explained by changes in the TE profile between the strains ., Surprisingly , Southern blot and PCR analyses demonstrate that the 101 N- and M-substrains have identical TE profiles for Penelope , Paris , Polyphemus and Helena ( Figs 6A and S6 ) ., Additionally , qPCR analysis failed to detect any significant changes in the expression levels of the major asymmetric TEs as well as other described TEs in the compared variants ( neutral vs M-like ) of this strain ( Fig 6B ) ., These data rule out the possibility of strain contamination with a lab M-strain ., To understand the observed differences in the cytotype of the strain 101 variants , we performed additional small-RNA sequencing ., Indeed , the piRNA profile of strain 101 ( N ) has significantly higher piRNA levels ( compared to 101 ( M ) ) for five previously undescribed repeats ( 315 , 635 , 850 , 904 and 931 ) ( Fig 7A ) , indicating that differences in cytotype could be attributed to these repeats ., Among these piRNA species , only piRNAs targeting the 315 and 635 elements comprise many ping-pong pairs and , hence , are generated predominantly by the germline-specific secondary processing mechanism ( Fig 7B ) ., Based on sequence similarity to the TE consensus , at least 25% of antisense-oriented piRNA molecules apparently originated from modern active elements , with the exception of piRNAs targeting the 904-element ( S7A Fig ) .,
Focusing on the three elements ( 315 , 635 , 850 ) with maximal piRNA expression levels , we compared both variants of strain 101 in more detail to determine whether differences in repeat profile could explain differences in cytotype ., Element 315 encodes three open reading frames ( ORFs ) ., According to the protein-domain structure , two ORFs appear to encode gag and pol genes ., The third ORF has no homology to the described TEs and possibly encodes an env gene ., Thus , element 315 probably represents a retroelement ., Since we failed to find any homology of the 315 element to the described families of TEs in the Sophophora subgenus , we propose that this element is an exclusive resident of the Drosophila subgenus ., Element 635 has some homology to the Invader element of D . melanogaster , which belongs to the Gypsy family of LTR-containing retrotransposons ., However , it has no long terminal repeats ( LTRs ) in its sequence ., Finally , the short 850 element ( 749 nt ) does not encode any ORF and seems to be non-autonomous ., Importantly , based on Southern blot and PCR analysis , these particular repeats did not undergo amplification in the neutral variant of strain 101 , and both compared substrains exhibit identical restriction patterns of these elements , similar to that of P-like strain 160 ( S7B and S7C Fig ) ., Hence , the observed cytotype shift as well as the differences in the piRNA pools for these elements apparently do not stem from differences in copy number among the 101 substrains ., Interestingly , we observed a significant increase in the expression levels of the 315 and 635 elements ( p < 0 .
05; t-test ) , but not 850 , in the ovarian mRNA pool of M-like substrain 101 compared to the neutral substrain ( Fig 7C ) ., Overall , these results demonstrate that the capacity for these repeats to produce piRNAs is lower in the 101 ( M ) strain , even in the absence of movement ., What could lead to differences in the piRNA profile for these repeats between the 101 ( N ) and 101 ( M ) strains in the absence of movement ?, Studies of piRNA-generating loci in Drosophila revealed that the H3K9me3 mark , which serves as a binding site to recruit HP1a and its germline homolog Rhino , is required for transcription of dual-strand piRNA-clusters and transposon silencing in ovaries 2 , 28 , 29 ., We hypothesized that a shift of the chromatin state in strain 101 modified the ability of particular genomic loci , carrying the 315 , 635 and 850 elements , to produce piRNA species ., These changes in piRNA profile may be an indication of a chromatin-based modification that may confer resistance to HD sterility in the neutral 101 substrain ., To test this hypothesis , we estimated the levels of H3K9me3 and HP1a marks by ChIP combined with qPCR analysis in the ovaries of the two cytotype variants of strain 101 ., The analysis showed a significant increase in H3K9me3 levels on genomic regions containing the 315 , 635 and 850 elements ( enrichment > 2 . 5 , p < 0 . 05 ) as well as a slight increase in HP1a enrichment in the neutral variant of strain 101 compared to the M-like substrain ( Figs 7D and S8 ) ., In turn , Ulysses-carrying regions used as a control demonstrated equal levels of the H3K9me3 mark , consistent with Ulysses-targeting piRNA levels being almost equal in the strain 101 variants ( Fig 7D ) ., This indicates that certain repeats have experienced a shift in their chromatin profile , but that this shift is not global ., A similar phenomenon has been recently described in the I-R HD system in D .
melanogaster 30 ., In that comparative analysis of two reactive strains ( weak and strong ) , it was shown that despite having a similar number of copies of the I-element , these strains significantly differ by enrichment of Rhino at the 42AB piRNA-cluster containing I-element remnants ., Furthermore , a lower level of I-element-targeted piRNA species was observed in the strong-reactive strain as a result 30 ., Given these differences , it is possible that these elements are the primary drivers of dysgenesis in D . virilis ., To further test the hypothesis that activation of these elements could contribute to HD , we first compared piRNA levels of all these elements in the ovaries of the F1 progeny from dysgenic-like and reciprocal crosses using variants of strain 101 and P-like strain 160 ., These experiments demonstrate that piRNAs targeting the 315 , 635 , and 850 elements showed similar levels in the ovaries of F1 hybrids from dysgenic crosses ( 101 ( N ) x 160 ) and parental neutral strain 101 , but lower levels in progeny of reciprocal crosses where such piRNAs would not be maternal ( 160 x 101 ( N ) ) ( Fig 7E ) ., Thus , the maternally provisioned piRNAs complementary to the 315 , 635 and 850 elements are required to stimulate the generation of the corresponding piRNAs in the progeny , as shown in other systems of HD 3 , 4 ., However , in the analysis of steady-state mRNA levels of these TEs in the ovaries of dysgenic and reciprocal progeny of crosses between 101 substrains and P-like strain 160 , we failed to detect any induction of the 315 , 635 and 850 elements exceeding their levels in the parental strains ( Fig 7F and 7G ) ., On the contrary , the ovaries of F1 hybrids from the reciprocal ( non-dysgenic ) crosses involving strain 101 ( N ) males and 160 ( P ) females showed even significantly higher expression levels of these elements in comparison to dysgenic ones ., Moreover , the dysgenic and reciprocal hybrids of M-like substrain 101 and strain 160 ( P ) showed
no differences in the mRNA levels of the studied elements ( Fig 7F and 7G ) ., These results indicate that activation of these elements per se is unlikely to be causative of HD because 101 ( N ) and 101 ( M ) have identical TE profiles ., We therefore considered the possibility that what distinguishes strain 101 ( N ) from 101 ( M ) may have an epigenetic basis or , alternatively , an unknown genetic change that alters repeat chromatin ., If so , then the lack of piRNAs targeting these elements in 101 ( M ) could explain the M-cytotype ., To test this , we compared piRNA levels and family-level abundance with inducer strain 160 ( P ) ., Critically , none of these elements show increased piRNA levels in strain 160 ( P ) compared to strain 9 ( M ) ( Fig 7H ) ., Thus , asymmetry in the piRNA pool for these particular elements is not a necessary condition for dysgenesis ., According to recent studies , differences in parental expression levels of genic piRNAs may contribute to the dysgenic manifestations in the progeny 18 , 30 ., With this in mind , we compared the expression of genic piRNAs in the ovaries of both 101 substrains and did not observe significant differences in their levels ( S9A Fig ) ., Ping-pong signatures of genic piRNAs also exhibit high similarity between these strains ( S9B Fig ) ., Based on these data , we concluded that differences in genic piRNAs are unlikely to have an impact on the observed cytotype shift ., Overall , we have shown that the enrichment of heterochromatic marks ( H3K9me3 and HP1a ) in the genomic regions containing the 315 , 635 and 850 elements is significantly lower in the M-like variant of strain 101 compared to the neutral one ., Together , these data provide further evidence that the mechanism of maternal repression may significantly vary among strains ., However , additional experiments involving Rhino ChIP and genome sequencing of strain 101 are needed to clearly prove this assumption and identify the loci responsible for the enhanced piRNA production
in one of the two 101 substrains ., One of the main consequences of activation of a particular asymmetric TE in the progeny of dysgenic crosses is an excess in its expression level compared to both parental strains and reciprocal hybrids 3 , 15 , 18 , 31 ., Studies of the I-R syndrome of HD in D . melanogaster demonstrate higher expression of the I-element in the F1 progeny from dysgenic crosses compared to reciprocal ones 3 , 30 , 31 ., This is due to the maternal deposition of piRNAs targeting the I-element and its effective silencing in only one direction of the cross ., Additionally , various studies of HD systems , including the D . virilis syndrome , demonstrated that transgenerational inheritance of piRNAs is able to trigger piRNA expression in the next generation by changing the chromatin of piRNA-clusters due to paramutation 3 , 4 , 32\u201334 ., However , a pattern of higher TE expression in the absence of complementary maternal piRNA is less apparent in D . virilis ., Despite strain asymmetry in genomic content and piRNA abundance of Penelope and several other TEs , germline piRNA pools do not differ drastically between reciprocal F1 progeny , with the exception of the Helena element 18 ., We therefore sought to determine whether this atypical pattern was also observed in crosses with other strains , focusing on the asymmetric Penelope , Paris , Polyphemus and Helena elements , as well as Ulysses , which is present in all strains ., As expected , ovarian mRNA levels revealed a complete correspondence with the piRNA expression levels among strains ( Figs 8A , 2A and 2B ) ., For example , we detected both Penelope mRNA and piRNA expression in 140 ( N ) and Argentina ( N ) , but neither was evident in Magarach ( N ) and 101 ( N ) ., However , in all cases when females from M-like strains are crossed with strain 160 males , ovarian levels of expression are uniformly significantly higher for only one asymmetric TE\u2013Polyphemus ( fold change 3 , 5 , 3 . 5 , p < 0 .
05 , t-test , in dysgenic hybrids with strains 9 ( M ) , 13 ( M ) and 101 ( M ) , respectively ) ( Fig 8B and 8C ) ., In most cases , the observed differences in expression for Penelope and Paris elements in the ovaries of dysgenic and reciprocal hybrids were not dramatic and , when present , rarely exceeded 1 . 5\u20132 fold ., Moreover , in the crosses involving neutral strains and strain 160 , we failed to detect any characteristic differences in TE expression between reciprocal hybrids ( Fig 8B and 8C ) ., Thus , independent of maternal piRNA profile , all reciprocal crosses with neutral strains show similar levels of expression ., However , the two variants of strain 101 give different results when crossed with P-like strain 160 ., In spite of the fact that the 101 substrains contain equal levels of piRNAs complementary to the HD-implicated TEs , in the case of the M-like variant we observed higher levels of expression in the dysgenic hybrids for Penelope ( fold change 3; p < 0 . 05 , t-test ) and Polyphemus ( fold change 3 . 5 , p < 0 . 05 , t-test ) ., Moreover , an increase in Ulysses element ( found in all D . virilis strains ) expression ( fold change 3 , p < 0 .
05 , t-test ) was demonstrated in the dysgenic ovaries of 13 ( M ) and 160 ( P ) hybrids ( S10A Fig ) ., These results demonstrate that factors other than maternal piRNA abundance lead to variation in resident TE expression in crosses between strain 160 and 101 substrains ., For the neutral 101 strain , we failed to detect significant differences in the hybrids from both directions of crosses for any of the TEs tested ( Fig 8B ) ., With the exception of a few TEs and repeats , piRNA abundance in the ovaries from dysgenic and reciprocal progeny exhibited no drastic differences , including piRNAs complementary to asymmetric TEs ( Figs 8B and S11 ) ., Surprisingly , Helena , which maintains a high level of asymmetry in the maternal pool of piRNAs in the progeny , exhibits very similar levels of corresponding mRNA expression in the hybrids obtained in both directions of crosses ( Fig 8B ) ., In spite of this overall similarity , piRNA pools in the ovaries of F1 progeny can comprise significantly different numbers of ping-pong pairs for all of the transposons studied ( S10B Fig ) ., For example , in the ovaries from dysgenic progeny ( strain 160 males ) with strains 9 ( M ) and Argentina ( N ) females , the number of ping-pong pairs for Penelope , Paris and Polyphemus was 2-3-fold lower than in the ovaries from reciprocal hybrids ( S10B Fig ) ., We have also found that enrichment of the H3K9me3 mark on Penelope , Paris , Polyphemus and Helena sequences does not differ significantly in the F1 progeny of dysgenic and reciprocal crosses ( S10C Fig ) ., Thus , we propose that piRNA-mediated transcriptional gene silencing of these HD-implicated TEs is similar in both directions of crosses and that maternally provisioned piRNAs to these TEs are not necessary to stimulate the production of corresponding piRNA species in the progeny ., These results are in agreement with recently published data 18 ., In summary , it should be emphasized that in contrast to the I-R system in D .
melanogaster , where maternal deposition of I-element piRNAs results in a dramatic increase of piRNA expression targeting the I-element in the progeny and efficient suppression of I-element activity , in D . virilis maternally provisioned piRNAs do not always guarantee efficient generation of the corresponding piRNAs in the progeny to maintain silencing of complementary TEs and provide adaptive genome defense ., We conclude that in D . virilis the determination of asymmetric TE expression levels in the ovaries of the progeny from dysgenic and reciprocal crosses does not allow one to unambiguously assign causality for HD to specific TE families ., This fact points to an alternate mode of HD in D . virilis ., The standard explanation for the phenomenon of hybrid d","headings":"Introduction, Results and discussion, Conclusions, Materials and methods","abstract":"Syndromes of hybrid dysgenesis ( HD ) have been critical for our understanding of the transgenerational maintenance of genome stability by piRNA ., HD in D . virilis represents a special case of HD since it includes simultaneous mobilization of a set of TEs that belong to different classes ., The standard explanation for HD is that eggs of the responder strains lack an abundant pool of piRNAs corresponding to the asymmetric TE families transmitted solely by sperm ., However , there are several strains of D .
virilis that lack asymmetric TEs , but exhibit a \u201cneutral\u201d cytotype that confers resistance to HD ., To characterize the mechanism of resistance to HD , we performed a comparative analysis of the landscape of ovarian small RNAs in strains that vary in their resistance to HD mediated sterility ., We demonstrate that resistance to HD cannot be solely explained by a maternal piRNA pool that matches the assemblage of TEs that likely cause HD ., In support of this , we have witnessed a cytotype shift from neutral ( N ) to susceptible ( M ) in a strain devoid of all major TEs implicated in HD ., This shift occurred in the absence of significant change in TE copy number and expression of piRNAs homologous to asymmetric TEs ., Instead , this shift is associated with a change in the chromatin profile of repeat sequences unlikely to be causative of paternal induction ., Overall , our data suggest that resistance to TE-mediated sterility during HD may be achieved by mechanisms that are distinct from the canonical syndromes of HD .","summary":"Transposable elements ( TE ) can proliferate in genomes even if harmful ., In response , mechanisms of small-RNA silencing have evolved to repress germline TE activity ., Syndromes of hybrid dysgenesis in Drosophila\u2014where unregulated TE activity in the germline causes sterility\u2014have also revealed that maternal piRNAs play a critical role in maintaining TE control across generations ., However , a syndrome of hybrid dysgenesis in D . virilis has identified additional complexity in the causes of hybrid dysgenesis ., By surveying factors that modulate hybrid dysgenesis in D . 
virilis , we show that protection against sterility cannot be entirely explained by piRNAs that control known inducer TEs ., Instead , spontaneous changes in the chromatin state of repeat sequences of the mother may also contribute to protection against sterility .","keywords":"sequencing techniques, invertebrates, medicine and health sciences, reproductive system, gene regulation, invertebrate genomics, animals, animal models, drosophila melanogaster, model organisms, experimental organism systems, molecular biology techniques, epigenetics, rna sequencing, drosophila, chromatin, research and analysis methods, small interfering rnas, genomics, artificial gene amplification and extension, chromosome biology, gene expression, comparative genomics, molecular biology, animal genomics, insects, arthropoda, ovaries, biochemistry, rna, eukaryota, anatomy, nucleic acids, cell biology, polymerase chain reaction, genetics, biology and life sciences, computational biology, non-coding rna, organisms","toc":null} +{"Unnamed: 0":1166,"id":"journal.pcbi.1003071","year":2013,"title":"Constraint and Contingency in Multifunctional Gene Regulatory Circuits","sections":"Gene regulatory circuits are at the heart of many fundamental biological processes , ranging from developmental patterning in multicellular organisms 1 to chemotaxis in bacteria 2 ., Regulatory circuits are usually multifunctional ., This means that they can form different metastable gene expression states under different physiological conditions , in different tissues , or in different stages of embryonic development ., The segment polarity network of Drosophila melanogaster offers an example , where the same regulatory circuit affects several developmental processes , including embryonic segmentation and the development of the flys wing 3 ., Similarly , in the vertebrate neural tube , a single circuit is responsible for interpreting a morphogen gradient to produce three spatially distinct ventral progenitor domains 4 
., Other notable examples include the bistable competence control circuit of Bacillus subtilis 5 and the lysis-lysogeny switch of bacteriophage lambda 6 ., Multifunctional regulatory circuits are also relevant to synthetic biology , where artificial oscillators 7 , toggle switches 8 , and logic gates 9 are engineered to control biological processes ., The functions of gene regulatory circuits are embodied in their gene expression patterns ., An important property of natural circuits , and a design goal of synthetic circuits , is that these patterns should be robust to perturbations ., Such perturbations include nongenetic perturbations , such as stochastic fluctuations in protein concentrations and environmental change ., Much attention has focused on understanding 1 , 2 , 4 , 10 , 11 and engineering 12\u201314 circuits that are robust to nongenetic perturbations ., Equally important is the robustness of circuit functions to genetic perturbations , such as those caused by point mutation or recombination ., Multiple studies have asked what renders biological circuitry robust to such genetic changes 15\u201320 ., With few exceptions 21 , 22 , these studies have focused on circuits with one function , embodied in their gene expression pattern ., Such monofunctional circuits tend to have several properties ., First , many circuits exist that have the same gene expression pattern 17\u201319 , 23\u201328 ., Second , these circuits can vary greatly in their robustness 16 , 18 , 29 ., And third , they can often be reached from one another via a series of function-preserving mutational events 18 , 19 , 30 ., Taken together , these observations suggest that the robustness of the many circuits with a given regulatory function can be tuned via incremental mutational change ., Most circuits have multiple functions , but how these observations translate to such multifunctional circuits is largely unknown ., In a given space of possible circuits , how many circuits exist that 
have a given number k of specific functions ( expression patterns ) ?, What is the relationship between this number of functions and the robustness of each function ?, Do circuits with any combination of functions exist , or are some combinations \u201cprohibited ? \u201d, Pertinent earlier work showed that there are indeed fewer multifunctional circuits than monofunctional circuits 21 , but this investigation had two main limitations ., First , it considered circuits so large that the space of circuits and their functions could not be exhaustively explored , and restricted itself to mostly bifunctional circuits ., Second , it included only topological circuit variants ( i . e . , who interacts with whom ) , and ignored variations in the signal-integration logic of cis-regulatory regions ., These regions encode regulatory programs , which specify the input-output mapping of regulatory signals ( input ) to gene expression pattern ( output ) 31\u201333 ., Variations in cis-regulatory regions 34 , such as mutations that change the spacing between transcription factor binding sites 35 , are known to impact circuit function 36 , 37 , and their inclusion in a computational model of regulatory circuits is thus important ., Here , we overcome these limitations by focusing on regulatory circuits that are sufficiently small that an entire space of circuits can be exhaustively explored ., Specifically , we focus on circuits that comprise only three genes and all possible regulatory interactions between them ., Small circuits like this play an important role in some biological processes ., Examples include the kaiABC gene cluster in Cyanobacteria , which is responsible for circadian oscillations 38 , the gap gene system in Drosophila , which is responsible for the interpretation of morphogen gradients during embryogenesis 19 , and the krox-otx-gatae feedback loop in starfish , which is necessary for endoderm specification 39 ., Additionally , theoretical studies of small
regulatory circuits have provided several general insights into the features of circuit design and function ., Examples include biochemical adaptation in feedback loops 40 and response delays in feed-forward loops 41 , among others 16 , 19 , 23 , 42\u201345 ., Lastly , there is a substantial body of evidence suggesting that small regulatory circuits form the building blocks of larger regulatory networks 34 , 46\u201348 , further warranting their study ., For two reasons , we chose Boolean logic circuits 49 as our modeling framework ., First , they allow us not only to vary circuit topology 45 , but also a circuit's all-important signal-integration logic 44 ., Second , Boolean circuits have been successful in explaining properties of biological circuits ., For example , they have been used to explain the dynamics of gene expression in the segment polarity genes of Drosophila melanogaster 50 , the development of primordial floral organ cells of Arabidopsis thaliana 51 , gene expression cascades after gene knockout in Saccharomyces cerevisiae 52 , and the temporal and spatial expression dynamics of the genes responsible for endomesoderm specification in the sea urchin embryo 53 ., We consider a specific gene expression pattern as the function of a circuit like this , because it is this pattern that ultimately drives embryonic pattern formation and physiological processes ., Multifunctional circuits are circuits with multiple gene expression patterns , and here we study the constraints that multifunctionality imposes on the robustness and other properties of regulatory circuits ., The questions we ask include the following:, ( i ) How many circuits have a given number k of functions ?, ( ii ) What is the relationship between multifunctionality and robustness to genetic perturbation ?, ( iii ) Are some multifunctional circuits more robust than others ?, ( iv ) Is it possible to change one multifunctional circuit into another through a series of small genetic changes that
do not jeopardize circuit function ?, We consider circuits of N genes ( Fig . 1A ) ., We choose a compact representation of a circuit's genotype G that allows us to represent both a circuit's signal-integration logic and its architecture by a single binary vector of length ( Fig . 1B ) ., Changes to this vector can be caused by mutations in the cis-regulatory regions of DNA ., Such mutations may alter the binding affinity of a transcription factor to its binding site , thereby creating or removing a regulatory interaction 34 ., Alternatively , they may affect the distance of a transcription factor binding site from the transcription start site , changing its rotational position on the DNA helix ., In turn , this may alter the regulatory effect of the transcription factor 54 , and change the downstream gene's signal-integration logic ., Lastly , such mutations may change the distance between adjacent transcription factor binding sites , enabling or disabling a functional interaction between proximally bound transcription factors 35 ., We note that mutations in G could also be conceptualized as changes in the DNA binding domain of a transcription factor ., However , evolutionary evidence from microbes suggests that alterations in the structure and logic of regulatory circuits occur preferentially via changes in cis-regulatory regions , rather than via changes in the transcription factors that bind these regions 55 ., The dynamics of the expression states of a circuit's N genes begin with a prespecified initial state , which represents regulatory influences outside or upstream of the circuit , such as transcription factors that are not part of the circuit but can influence its expression state ., The initial state reflects the fact that small circuits are typically embedded in larger regulatory networks 34 , 46\u201348 , which provide the circuit with different regulatory inputs under different environmental or tissue-specific conditions ., Through the regulatory
interactions specified in the circuit's genotype , the circuit's gene expression state changes from this initial state , until it may reach a stable ( i . e . , fixed-point ) equilibrium state ., We consider a circuit's function to be a mapping from an initial expression state to an equilibrium expression state ( Fig . 1C ) ., In the main text , we consider only circuit functions that involve fixed-point equilibria , but we consider periodic equilibrium states in the Supporting Online Material ., A circuit could in principle have as many as functions , as long as the initial expression states are all different from one another , and the equilibrium expression states are all different from one another ( Material and Methods ) ., The circuits we study may map multiple initial states to the same equilibrium state , but our definition of function ignores all but one of these initial states ., While a definition of function that includes many-to-one mappings between initial and equilibrium states can be biologically sensible , our intent is to investigate specific pairs of inputs ( i . e . , ) and outputs ( i . e .
, ) , as is typical for circuits in development and physiology 56\u201358 ., We emphasize that a circuit can express its k functions individually , or in various combinations , such that the same circuit could be said to have between one and k functions ., For brevity , we refer to a specific set of k functions as a multifunction or a k-function and to circuits that have at least one function as viable ., The space of circuits we explore here contains possible genotypes ., We exhaustively determine the equilibrium expression states of each genotype for all initial states , thereby providing a complete genotype-to-phenotype ( function ) map ., We use this map to partition the space of genotypes into genotype networks 17\u201319 , 21 ., A genotype network consists of a single connected set of genotypes ( circuits ) that have identical functions , and where two circuits are connected neighbors if their corresponding genotypes differ by a single element ( Fig . 1D ) ., Note that such single mutations may correspond to larger mutational changes in the cis-regulatory regions of DNA ., For example , mutations that change the distance between binding sites , or between a binding site and a transcription start site , may involve the addition or deletion of large segments of DNA 26 , 59\u201362 ., We first asked how the number of genotypes that have k functions depends on k ., Fig . 
2 shows that this number decreases exponentially , implying that multifunctionality constrains the number of viable genotypes severely ., For instance , increasing k from 1 to 2 decreases the number of viable genotypes by 34%; further increasing k from 2 to 3 leads to an additional 39% decrease ., However , there is always at least one genotype with a given number k of functions , for any ., In other words , even in these small circuits , multiple genotypes exist that have many functions ., Thus far , we have determined the number of genotypes with a given number k of functions , but we did not distinguish between the actual functions that these genotypes can have ., For example , there are 64 variants of function , since there are potential initial states and potential equilibrium states ( ) ., Analogously , simple combinatorics ( Text S1 ) shows that there are 1204 variants of functions , and the number of variants increases dramatically with greater k , up to a maximum of variants of functions ., This is possible because individual functions can occur in different possible combinations in multifunctional circuits ( Material and Methods ) ., The solid line in the inset of Fig . 2 indicates how this number of possible different functions scales with k ., We next asked whether there exist circuits ( genotypes ) for each of these possible combinations of functions , or whether some multifunctions are prohibited ., The open circles in the inset of Fig . 
2 show the answer: These circles lie exactly on the solid line that indicates the number of possible combinations of functions for each value of k ( Text S1 ) ., This means that no multifunction is prohibited ., In other words , even though multifunctionality constrains the number of viable genotypes , there is always at least one genotype with k functions , and in any possible combination ., As gene regulatory circuits are often involved in crucial biological processes , their functions should be robust to perturbation ., We therefore asked whether the constraints imposed by multifunctionality also impact the robustness of circuits and their functions ., In studying robustness , we differentiate between the robustness of a genotype ( circuit ) and the robustness of a k-function ., We assess the robustness of a genotype as the proportion of all possible single-mutants that have the same k-function , and the robustness of a k-function as the average robustness of all genotypes with that k-function 17 , 18 , 51 , 63 ( Material and Methods ) ., We refer to the collection of genotypes with a given k-function as a genotype set , which may comprise one or more genotype networks ., We emphasize that a genotype may be part of several different genotype sets , because genotypes typically have more than one k-function ., Fig . 3A shows that the robustness of a k-function decreases approximately linearly as k increases , indicating a trade-off between multifunctionality and robustness ., However , some degree of robustness is maintained so long as ., For larger k , some functions exist that have zero robustness ( Text S1 ) , that is , none of the circuits with these functions can tolerate a change in their regulatory genotype ., The inset of Fig . 
3A reveals a similar inverse relationship between the size of a genotype set and the number of functions k , implying that multifunctions become increasingly less \u201cdesignable\u201d 64 \u2014 fewer circuits have them \u2014 as k increases ( Text S1 ) ., For example , for as few as functions , the genotype set may comprise a single genotype , reducing the corresponding robustness of the k-function to zero ., For each value of k , the maximum proportion of genotypes with a given k-function is equal to the square of the maximum proportion of genotypes with a function , explaining the triangular shape of the data in the inset ., This triangular shape indicates that the genotype set of a given k-function is always smaller than the union of the k constituent genotype sets ., Additionally , we find that the robustness of a k-function and the size of its genotype set are strongly correlated ( Fig . S1 ) , indicating that the genotypes of larger genotype sets are , on average , more robust than those of smaller genotype sets ., This result is not trivial because the structure of a genotype set may change with its size ., For example , large genotype sets may comprise many isolated genotypes , or their genotype networks might be structured as long linear chains ., In either case , the robustness of a k-function would decrease as the size of its genotype set increased ., We have so far focused on the properties of the genotype sets of k-functions , but have not considered the properties of the genotype networks that make up these sets ., Therefore , we next asked how genotypic robustness varies across the genotype networks of k-functions ., In Figs . 3B\u2013D , we show the distributions of genotypic robustness for representative genotype networks with functions ., These distributions highlight the inherent variability in genotypic robustness that is present in the genotype networks of multifunctions , indicating that genotypic robustness is an evolvable property of
multifunctional circuits ., Indeed , in Fig . S2 , we show the results of random walks on these genotype networks , which confirm that it is almost always possible to increase genotypic robustness through a series of mutational steps that preserve the k-function ., In Fig . S3 , we show in which dynamic regimes ( Material and Methods ) the circuits in these same genotype networks lie ., We have shown that the genotype set of any k-function is non-empty ( Fig . 2 ) , meaning that there are no \u201cprohibited\u201d k-functions ., We now ask how the genotypes with a given k-function are organized in genotype space ., More specifically , is it possible to connect any two circuits with the same k-function through a sequence of small genotypic changes where each change in the sequence preserves this k-function ?, In other words , are all genotypes with a given k-function part of the same genotype network , or do such genotypes occur on multiple disconnected genotype networks ?, Fig . 4 shows the relationship between the number of genotype networks in a genotype set and the number of circuit functions k ., For monofunctional circuits ( k = 1 ) , the genotype set always consists of a single , connected genotype network ., This implies that any genotype in the genotype set can be reached from any other via a series of function-preserving mutational events ., In contrast , for circuits with functions , the genotype set often fragments into several isolated genotype networks , indicating that some regions of the genotype set cannot be reached from some others without jeopardizing circuit function ., The most extreme fragmentation occurs for functions , where some genotype sets break up into more than 20 isolated genotype networks ., Fig . S4 provides a schematic illustration of how fragmentation can occur in a k-function's genotype set , despite the fact that the genotype sets of the k constituent monofunctions consist of genotype networks that are themselves connected ., Fig .
S5 provides a concrete example of fragmentation , depicting one genotype from each of the several genotype networks of a bifunction's genotype set ., The proportion of k-functions with genotype sets that comprise a single genotype network is shown in the inset of Fig . 4 ., This proportion decreases dramatically as the number of functions increases from to , such that only 16% of genotype sets comprise a single genotype network when ., Figs . 4B\u2013D show that the distributions of the number of genotype networks per genotype set are typically left-skewed ., This implies that when fragmentation occurs , the genotype set usually fragments into only a few genotype networks ., However , the distribution of genotype network sizes across all genotype sets is heavy-tailed and often spans several orders of magnitude ( Fig . S6 ) ., This means that the number of genotypes per genotype network is highly variable ., We next ask whether the number of genotypes in the genotype set of a k-function can be predicted from the number of genotypes in the genotype sets of the k constituent monofunctions ., To address this question , we define the fractional size of a genotype set as the number of genotypes in the set , divided by the number of genotypes in genotype space ., We first observe that the maximum fractional size of a genotype set of a k-function is equal to ( Fig . S6 ) , which is the maximum fractional size of a genotype set for monofunctional circuits 44 raised to the kth power ., In general , we find that the fractional size of a genotype set of a k-function can be approximated with reasonable accuracy by the product of the fractional sizes of the genotype sets of the k constituent monofunctions , but that the accuracy of this approximation decreases as k increases ( Fig .
S7 ) ., While these fractional genotype set sizes may be quite small , we note that their absolute sizes are still fairly large , even in the tiny circuits considered here ., For example , the maximum genotype set size is 262,144 for smaller k and 32,768 for larger k ., In evolution , a circuit may acquire a new regulatory function while preserving its pre-existing functions ., An example is the highly conserved hedgehog regulatory circuit , which patterns the insect wing blade ., In butterflies , this regulatory circuit has acquired a new function ., It helps form the wing's eyespots , an antipredatory adaptation that arose after the insect body plan was established 65 ., This example illustrates that a regulatory circuit may acquire additional functions incrementally via gradual genetic change ., The order in which the mutations leading to a new function arise and go to fixation can have a profound impact upon the evolution of such phenotypes 66 ., In particular , early mutations have the potential to influence the phenotypic effects of later mutations , which can lead to a phenomenon known as historical contingency ., We next ask whether it is possible for a circuit to incrementally evolve regulatory functions in any order , or whether this evolutionary process is susceptible to historical contingency ., In other words , is it possible that some sequence of genetic changes that leads a circuit to have k functions also precludes it from gaining an additional function ?, The genotype space framework allows us to address this question in a systematic way , because it permits us to see contingency as a result of genotype set fragmentation ., Specifically , contingency means that , as a result of fragmentation , the genotype network of a new function may become inaccessible from at least one of the genotype networks of a k-function's genotype set ., To ask whether this occurs in our model regulatory circuits , we considered all permutations of every k-function ., 
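The permutation procedure can be sketched in code . The following is a minimal , self-contained illustration , not the paper's three-gene Boolean-circuit model: genotypes are toy bitstrings , the genotype-to-function rule ( `functions_of` ) is an arbitrary placeholder , and all names are hypothetical ., It shows how , for each acquisition order of a set of functions , one checks whether some genotype network of the already-acquired functions contains no genotype that also performs the next function .

```python
from itertools import permutations, product
from collections import deque

# Toy genotype space: length-6 bitstrings; two genotypes are neighbors
# if they differ at exactly one position (a single "mutation").
L = 6
genotypes = ["".join(bits) for bits in product("01", repeat=L)]

def neighbors(g):
    for i in range(L):
        yield g[:i] + ("1" if g[i] == "0" else "0") + g[i + 1:]

def functions_of(g):
    # Hypothetical genotype-to-function rule (a stand-in, NOT the paper's
    # circuit dynamics): genotype g performs function f when it carries
    # more than f ones.
    return {f for f in range(3) if g.count("1") > f}

def genotype_set(fset):
    # All genotypes that perform every function in fset.
    return {g for g in genotypes if fset <= functions_of(g)}

def genotype_networks(gset):
    # Connected components under single-mutation edges, found via BFS.
    seen, comps = set(), []
    for start in gset:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            g = queue.popleft()
            for n in neighbors(g):
                if n in gset and n not in seen:
                    seen.add(n)
                    comp.add(n)
                    queue.append(n)
        comps.append(comp)
    return comps

def order_is_contingent(order):
    # Acquire functions in the given order; the order is contingent if,
    # at some step, one genotype network of the already-acquired set
    # contains no genotype that also performs the next function.
    acquired = set()
    for f in order:
        if acquired:
            for net in genotype_networks(genotype_set(acquired)):
                if not any(f in functions_of(g) for g in net):
                    return True
        acquired.add(f)
    return False
```

Because the placeholder rule yields nested , connected genotype sets , no order is contingent here; in the paper's circuits it is precisely genotype set fragmentation that makes some acquisition orders contingent .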
These permutations reflect every possible order in which a circuit may acquire a specific combination of k functions through a sequence of genetic changes ., To determine the frequency with which historical contingency occurs , we calculate the number of genotype networks per genotype set as the k functions are incrementally added ., This procedure is outlined in Fig . S4 and detailed in the Material and Methods section ., We note that historical contingency is not possible when k = 1 , because all monofunctions comprise genotype sets with a single connected genotype network ., Historical contingency is also not possible at the maximum k , because there is only one genotype that yields this combination ( Fig . 2 ) ., In Fig . 5 , we show the relationship between the proportion of k-functions that exhibit historical contingency and the number of functions k ., Even for small k , 43% of all k-functions exhibit historical contingency ., At its peak , 94% of combinations are contingent ., The inset of Fig . 5 shows the proportion of the permutations of a k-function in which genotype set fragmentation may preclude the evolution of the k-function ., Again , this proportion peaks at a similar number of functions ., These results highlight an additional constraint of multifunctionality ., Not only does the number of genotypes with k functions decrease as k increases , but the dependence upon the temporal order in which these functions evolve tends to increase ., In the Supporting Online Material , we repeat the above calculations to show how our results scale to equilibrium expression states with period P > 1 ( for the sake of computational tractability , we restrict our attention to the case where all equilibrium expression states have the same period P ) ., We show that the exponential decrease in the number of circuits with k functions also holds for periodic equilibrium expression states , but that the maximum number of functions per circuit decreases with increasing P ( Fig . 
S8 ) ., So long as P is not too large , it is possible for a circuit to have more than one function ., In this case , the inverse relationship between robustness to genetic perturbation and the number of functions k also holds ( Fig . S9 ) ., Similarly , the results pertaining to genotype set fragmentation hold so long as P is not too large ( Fig . S10 ) ., Lastly , the results pertaining to historical contingency only hold for small P ., This is because a circuit with an equilibrium expression pattern of larger period cannot have the minimum number of functions that is a prerequisite for historical contingency ( Material and Methods ) ., Taken together , these additional observations show that the results obtained for fixed-point equilibrium expression states can also apply to periodic equilibrium expression states , so long as P is not too large ., We have used a Boolean model of gene regulatory circuits to exhaustively characterize the functions of all possible combinations of circuit topologies and signal-integration functions in three-gene circuits ., The most basic question we have addressed is whether multifunctionality is easy or difficult to attain in regulatory circuits ., Our results show that while the number of circuits with k functions decreases sharply as k increases , there are generally thousands of circuits with k functions , so long as k is not exceedingly large ., Thus , multifunctionality is relatively easy to attain , even in the tiny circuits examined here ., It is worth considering how this result might translate to larger circuits ., In a related model of gene regulatory circuits with more genes , the genotype sets of bifunctions comprised over an order of magnitude more circuits per bifunction , on average , than observed here 21 ( Fig . 
3 , inset ) ., For a greater number of functions k , we expect the number of circuits per k-function to increase as the number of genes N in the regulatory circuit increases ., This is because the maximum number of circuits with a given k-function is the total number of circuits with N genes multiplied by the maximum proportion of circuits per multifunction ., For a given number of functions k , this product will increase hyper-exponentially as N increases , indicating a dramatic increase in the maximum number of circuits per k-function ., More generally , because the fractional size of a k-function's genotype set can be approximated as the product of the fractional sizes of the genotype sets of its k constituent monofunctions ( Fig . S7 ) and because the total number of circuits increases exponentially with N , our observation that there are many circuits with k functions is expected to scale to larger circuits ., The next question we asked is whether there is a tradeoff between the robustness of a k-function and the number of functions k ., We found that the robustness of a k-function decreases as k increases ., However , some degree of robustness is generally maintained , so long as k is not too large ., These observations suggest that the number of circuit functions generally does not impose severe constraints on the evolution of circuit genotypes , unless the number of functions is very large ., Our current knowledge of biological circuits is too limited to allow us to count the number of functions per circuit ., However , we can ask whether the functional “burden” on biological circuits is very high ., If so , we would expect that the genes that form these circuits and their regulatory regions cannot tolerate genetic perturbations , and that they have thus accumulated few or no genetic changes in their evolutionary history ., However , this is not the case ., The biochemical activities and regulatory regions of circuit genes can 
diverge extensively without affecting circuit function 55 , 59 , 61 , 67 , and the very different circuit architectures of distantly related species can have identical function 24 , 28 ., Further , circuits are highly robust to the experimental perturbation of their architecture , such as the rewiring of regulatory interactions 20 ., More indirect evidence comes from the study of genes with multiple functions , identified through gene ontology annotations ., The rate of evolution of these genes is significantly but only weakly correlated with the number of known functions 68 ., Thus , the functional burden on biological genes and circuits is not sufficiently high to preclude evolutionary change ., Previous studies of monofunctional regulatory circuits have revealed broad distributions of circuit robustness to genetic perturbation 16 , 18 , 29 ., We therefore asked if this is also the case for multifunctional circuits ., We found that circuit robustness was indeed variable , but that the mean and variance of the distributions of circuit robustness decreased as the number of functions k increased ., Thus , variation in circuit robustness persists in multifunctional circuits , so long as k is not too large ., This provides further evidence that robustness to mutational change may be considered the rule , rather than the exception , in biological networks 1 , 18 , 20 , 29 ., However , claiming that robustness to genetic perturbation is an evolvable property in multifunctional regulatory circuits requires not only variability in circuit robustness , but also the ability to change one circuit into another via a series of mutations that do not affect any of the circuit's functions ., We therefore asked whether it is possible to interconvert any two circuits with the same function via a series of function-preserving mutational changes ., We showed that this is always possible for monofunctions , but not necessarily for multifunctions , because these often comprise 
fragmented genotype sets ., Genotype set fragmentation has also been observed at lower levels of biological organization , such as the mapping from RNA sequence to secondary structure 69 ., Such fragmentation has two evolutionary implications , as has recently been discussed for RNA phenotypes 70 ., First , the mutational robustness of a phenotype ( function ) depends upon which genotype network its sequences inhabit , as we have also shown for regulatory circuits ( Fig . S11 ) ., Second , it can lead to historical contingency , where the phenotypic effects of future mutations depend upon the current genetic background ., Such contingency indeed occurs in our circuits , because the specific genotype network that a circuit ( genotype ) occupies may be influenced by the temporal order in which a circuit's functions ( phenotypes ) have evolved ., This order in turn may affect a circuit's ability to evolve new functions ., These observations hinge on the assumption that the space between two ( disconnected ) parts of a fragmented genotype set is not easily traversed ., For example , in RNA it is well known that pairs of so-called compensatory mutations can allow transitions between genotype networks 71 , thus alleviating the historical contingency caused by fragmentation ., To assess whether an analogous phenomenon might exist for regulatory circuits , we calculated the average distance between all pairs of genotypes on distinct genotype networks for circuits with the same k-function ., We found that this distance decreases as the number of functions k increases , indicating an increased proximity between genotype networks ( Fig . S12 ) ., However , the pairs of genotypes in any two different genotype networks that had the minimal distance of two mutations never exceeded 1% of all pairs of genotypes on these networks , and this proportion was as low as 0 . 03% for some k ( Fig . 
S12A , inset ) ., This means that transitions between genotype networks through few mutations are not usually possible in these model regulatory circuits ., Thus , the multiple genotype networks of a genotype set can indee","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Gene regulatory circuits drive the development , physiology , and behavior of organisms from bacteria to humans ., The phenotypes or functions of such circuits are embodied in the gene expression patterns they form ., Regulatory circuits are typically multifunctional , forming distinct gene expression patterns in different embryonic stages , tissues , or physiological states ., Any one circuit with a single function can be realized by many different regulatory genotypes ., Multifunctionality presumably constrains this number , but we do not know to what extent ., We here exhaustively characterize a genotype space harboring millions of model regulatory circuits and all their possible functions ., As a circuit's number of functions increases , the number of genotypes with a given number of functions decreases exponentially but can remain very large for a modest number of functions ., However , the sets of circuits that can form any one set of functions become increasingly fragmented ., As a result , historical contingency becomes widespread in circuits with many functions ., Whether a circuit can acquire an additional function in the course of its evolution becomes increasingly dependent on the functions it already has ., Circuits with many functions also become increasingly brittle and sensitive to mutation ., These observations are generic properties of a broad class of circuits and independent of any one circuit genotype or phenotype .","summary":"Many essential biological processes , ranging from embryonic patterning to circadian rhythms , are driven by gene regulatory circuits , which comprise small sets of genes that turn each other on or off to form a distinct 
pattern of gene expression ., Gene regulatory circuits often have multiple functions ., This means that they can form different gene expression patterns at different times or in different tissues ., We know little about multifunctional gene regulatory circuits ., For example , we do not know how multifunctionality constrains the evolution of such circuits , how many circuits exist that have a given number of functions , and whether tradeoffs exist between multifunctionality and the robustness of a circuit to mutation ., Because it is not currently possible to answer these questions experimentally , we use a computational model to exhaustively enumerate millions of regulatory circuits and all their possible functions , thereby providing the first comprehensive study of multifunctionality in model regulatory circuits ., Our results highlight limits of circuit designability that are relevant to both systems biologists and synthetic biologists .","keywords":"systems biology, computer science, genetics, biology, computational biology, computerized simulations, gene networks","toc":null} +{"Unnamed: 0":1313,"id":"journal.pcbi.1000445","year":2009,"title":"Topography of Extracellular Matrix Mediates Vascular Morphogenesis and Migration Speeds in Angiogenesis","sections":"How the physical properties of the ECM , such as density , heterogeneity , and stiffness , affect cell behavior is also an area of current investigation ., Matrigel , a popular gelatinous protein substrate for in vitro experiments of angiogenesis , is largely composed of collagen and laminin and contains growth factors , all of which provide an environment conducive to cell survival ., In experiments of endothelial cells on Matrigel , increasing the stiffness of the gel or disrupting the organization of the cellular cytoskeleton inhibits the formation of vascular cell networks 10 , 11 ., Cells respond to alterations in the mechanical properties of the ECM , for example , by upregulating their focal 
adhesions on stiffer substrates 12 ., For anchorage-dependent cells , including endothelial cells , increasing the stiffness of the ECM therefore results in increased cell traction and slower migration speeds 12 ., Measurements of Matrigel stiffness as a function of density show a positive relationship between these two mechanical properties 13 ., That is , as density increases , so does matrix stiffness ., In light of these two findings , it is not surprising that this experimental study also shows slower cell migration speeds as matrix density increases 13 ., Moreover , matrices with higher fiber density transfer less strain to the cell 14 and experiments of endothelial cells cultured on collagen gels demonstrate that directional sprouting , called branching , is induced by collagen matrix tension 15 ., Thus , via integrin receptors , the mechanical properties of the ECM influence cell-matrix interactions and modulate cell shape , cell migration speed , and the formation of vascular networks ., Understanding how individual cells interpret biochemical and mechanical signals from the ECM is only a part of the whole picture ., Morphogenic processes also require multicellular coordination ., In addition to the guidance cues cells receive from the ECM , they also receive signals from each other ., During new vessel growth , cells adhere to each other through cell-cell junctions , called cadherins , and in order to migrate , cells must coordinate integrin mediated focal adhesions with these cell-cell bonds ., This process is referred to as collective or cluster migration 16 ., During collective migration , cell clusters often organize as two-dimensional sheets 16 ., Cells also have the ability to condition the ECM for invasion by producing proteolytic enzymes that degrade specific ECM proteins 17 ., In addition , cells can synthesize ECM components , such as collagen and fibronectin 11 , 18 , and can further reorganize the ECM by the forces they exert on it during 
migration 10 , 11 , 14 ., Collagen fibrils align in response to mechanical loading and cells reorient in the direction of the applied load 14 ., Tractional forces exerted by vascular endothelial cells on Matrigel cause cords or tracks of aligned fibers to form promoting cell elongation and motility 11 ., As more experimental data are amassed , the ECM is emerging as the vital component to morphogenic processes ., In this work , we extend our cellular model of angiogenesis 19 and validate it against empirical measurements of sprout extension speeds ., We then use our model to investigate the effect of ECM topography on vascular morphogenesis and focus on mechanisms controlling cell shape and orientation , sprout extension speeds , and sprout morphology ., We show the dependence of sprout extension speed and morphology on matrix density , fiber network connectedness , and fiber orientation ., Notably , we observe that varying matrix fiber density affects the likelihood of capillary sprout branching ., The model predicts an optimal density for capillary network formation and suggests matrix heterogeneity as a mechanism for sprout branching ., We also identify unique ranges of matrix density that promote sprout extension or that interrupt normal angiogenesis , and show that maximal sprout extension speeds are achieved within a density range similar to the density of collagen found in the cornea ., Finally , we quantify the effects of proteolytic matrix degradation by the tip cell on sprout velocity and demonstrate that degradation promotes sprout growth at high densities , but has an inhibitory effect at lower densities ., Based on these findings , we suggest and discuss several ECM targeted pro- and anti-angiogenesis therapies that can be tested empirically ., We previously published a cell-based model of tumor-induced angiogenesis that captures endothelial cell migration , growth , and division at the level of individual cells 19 ., That model also describes key 
cell-cell and cell-matrix interactions , including intercellular adhesion , cellular adhesion to matrix components , and chemotaxis to simulate the early events in new capillary sprout formation ., In the present study , we extend that model to incorporate additional mechanisms for cellular motility and sprout extension , and use vascular morphogenesis as a framework to study how ECM topography influences intercellular and cell-matrix interactions ., The model is two-dimensional ., It uses a lattice-based cellular Potts model describing individual cellular interactions coupled with a partial differential equation to describe the spatio-temporal dynamics of vascular endothelial growth factor ., At every time step , the discrete and continuous models feed back on each other and describe the time evolution of the extravascular tissue space and the developing sprout ., The cellular Potts model evolves by the Metropolis algorithm: lattice updates are accepted probabilistically to reduce the total energy of the system in time ., The probability of accepting a lattice update is given by P = 1 if ΔE ≤ 0 , and P = exp ( −ΔE / k_B T ) otherwise , where ΔE is the change in total energy of the system as a result of the update , k_B is the Boltzmann constant , and T is the effective temperature corresponding to the amplitude of cell membrane fluctuations ., A higher temperature corresponds to larger cell membrane fluctuation amplitudes ., The energy E includes a term describing cell-cell and cell-matrix adhesion , a constraint controlling cellular growth , an effective chemotaxis potential , and a continuity constraint ., Mathematically , total energy is given by: E = Σ_neighbors J ( 1 − δ_σ , σ′ ) + Σ_cells λ ( v − V )^2 + Σ_cells μ ∇C + Σ_cells ν ( v − b ) ( 1 ) In the first term of Eq . 1 , J represents the binding energy between model constituents ., For example , the cell-cell value of J describes the relative strength of cell-cell adhesion that occurs via transmembrane cadherin proteins ., Similarly , the cell-fiber value of J is a measure of the binding affinity between an endothelial cell and a matrix fiber through cell surface integrin receptors ., Each endothelial cell is 
associated with a unique identifying number σ ., δ is the Kronecker delta function and ensures that adhesive energy only accrues at cell surfaces ., The second term in Eq . 1 describes the energy expenditure required for cell growth and deformation ., Membrane elasticity is described by λ , v denotes the cell's current volume , and V is a specified target volume ., For proliferating cells , the target volume is double the initial volume ., This growth constraint delivers a penalty to total energy for any deviation from the target volume ., In the third term , the parameter μ is the effective chemical potential and influences the strength of chemotaxis relative to other parameters in the model ., This chemotaxis potential varies depending on cell phenotype ( discussed below ) and is proportional to the local VEGF gradient ∇C , where C denotes the concentration of VEGF ., Cells must simultaneously integrate multiple external stimuli , namely intercellular adhesion , chemotactic incentives , and adherence to extracellular matrix fibers ., To do so , endothelial cells deform their shape and dynamically regulate adhesive bonds ., In the model , however , it is possible that collectively these external stimuli may cause a cell to be pulled or split in two ., To prevent non-biological fragmentation of cells , we introduce a continuity constraint that preserves the physical integrity of each individual cell ., This constraint expresses that it is energetically expensive to compromise the physical integrity of a cell and is incorporated into the equation for total energy ( Eq . 
1 ) in the last term , where ν is a continuity constraint that represents the effects of the cytoskeletal matrix of a cell ., v is the current size of the endothelial cell with identifying number σ , and b is a breadth-first-search count of the number of continuous lattice sites occupied by that endothelial cell ., Thus , v > b signals that the physical integrity of the cell has been compromised and a penalty to total energy is incurred ., Cooperatively , the continuity constraint and the volume constraint implicitly describe the interactions holding the cell together ., The amount of VEGF available at the right hand boundary of the domain is estimated by assuming that in response to a hypoxic environment , quiescent tumor cells secrete a constant amount of VEGF and that VEGF decays at a constant rate ., It is reasonable to assume that the concentration of VEGF within the tumor has reached a steady state and therefore that a constant amount of VEGF is available at the boundary of the tumor ., We use constant boundary conditions for the left and right boundaries and periodic boundary conditions in the y-direction ., A gradient of VEGF is established as VEGF diffuses through the stroma with constant diffusivity coefficient D , decays at a constant rate , and is bound by endothelial cells ., A complete description of the biochemical derivation of the function for endothelial cell binding and uptake of VEGF has been previously published 19 ., For more direct comparison to other mathematical models of angiogenesis and to isolate the effects of ECM topology on vessel morphology , we assume that the diffusion coefficient for VEGF in tissue is constant ., This is a simplification , however , because the ECM is not homogeneous and VEGF can be bound to and stored in the ECM ., Realistically , the diffusion coefficient for VEGF in the ECM depends on both space and time ., We address the implications of this assumption in the Discussion ., Under these 
assumptions , the concentration profile of VEGF satisfies a partial differential equation of the form: ∂C / ∂t = D ∇^2 C − kC − U ( 2 ) , where k is the decay rate and U denotes binding and uptake by endothelial cells ., The inset in Figure 1A provides an illustration of the 166 µm×106 µm domain geometry ., We initialize the simulation by establishing the steady state solution to Eq . 2 ., The activation and aggregation of endothelial cells , and subsequent breakdown of basement membrane in response to VEGF 20 , are a pre-condition ( boundary condition ) to the simulation ., The breakdown of basement membrane allows endothelial cells to enter the extravascular space through a new vessel opening ., Our simulation starts with a single activated endothelial cell ∼10 µm in diameter that has budded from the parent vessel located adjacent to the left hand boundary 20 ., We use 10 µm only as an initial estimate of endothelial cell size 21 , 22 ., Once the simulation begins , the cells immediately deform in shape and elongate ., During the simulation , the VEGF field is updated iteratively with cell uptake information , for example as shown in Figure 1B , C ., VEGF data are processed by the cells at the cell membrane and incorporated into the model through the chemotaxis term in Eq . 1 ., From the parent blood vessel , endothelial cells ( red ) migrate into the domain in response to VEGF that is supplied from a tumor located adjacent to the right hand boundary ., The space between represents the stroma and is composed of extracellular matrix fibers ( green ) and interstitial fluid ( blue ) ., The physical meanings of all symbols and their parameter values are summarized in Table 1 ., To more accurately capture the cell-cell and cell-matrix interactions that occur during morphogenesis , we implement several additional features in this model ., One improvement is the implementation of stalk cell chemotaxis ., Stalk cells are not inert , but actively respond to chemotactic signals 23 ., As a consequence , cells now migrate as a collective body , a phenomenon 
called collective or cohort migration 24 ., This modification , however , also makes it possible for individual cells , as well as the entire sprout body , to migrate away from the parent vessel , making it necessary to consider cell recruitment from the parent vessel ., Cell recruitment is another added feature ., During the early stages of angiogenesis , cells are recruited from the parent vessel to facilitate sprout extension 20 , 25 ., Kearney et al . 26 measured the number and location of cell divisions that occur over 3 . 6 hours in in vitro vessels 8 days old ( a detailed description of these experiments is provided in our discussion of model validation ) ., In these experiments , the sprout field is defined as the area of the parent vessel wall that ultimately gives rise to the new sprout and the sprout itself ., The sprout field is further broken down into regions based on distance from the parent vessel and these regions are classified as distal , proximal , and nascent ., The authors report that 90% of all cell divisions occur in the parent vessel and the remaining 10% occur at or near the base of the sprout in the nascent area of the sprout field ., On average , total proliferation accounts for approximately 5 new cells in 3 . 
6 hours , or 20 cells in 14 hours ., These data suggest that there is significant and sufficient proliferation in the primary vessel to account for and facilitate initial sprout extension ., These data do not suggest that proliferation in other areas of the sprout field does not occur at other times ., In fact , it has been established that a new sprout can migrate only a finite distance into the stroma without proliferation and that proliferation is necessary for continued sprout extension 25 ., We model sprout extension through a cell-cell adhesion dependent recruitment of additional endothelial cells from the parent vessel ., As an endothelial cell at the base of the sprout moves into the stroma , cell-cell adhesion pulls a cell from the parent vessel along with it ., In practice , a new cell is added to the base of the sprout when and where the previous cell detaches from the parent vessel wall ( left boundary of the simulation domain ) ., We assume , based on the data presented in 26 , that there is sufficient proliferation in the parent vessel to provide the additional cells required for initial sprout extension while maintaining the physical integrity of the parent vessel ., As in our previous model , once a cell senses a threshold concentration of VEGF , it becomes activated ., We recognize that cells have distinct phenotypes that dictate their predominant behavior ., Thus , we distinguish between tip cells , cells that are proliferating , and non-proliferating but migrating stalk cells ., Tip cells are functionally specialized cells that concentrate their internal cellular machinery to promote motility 23 ., Tip cells are highly migratory pathfinding cells and do not proliferate 25 , 26 ., To model the highly motile nature of the tip cell , we assign it the highest chemotactic coefficient ., The remainder of the cells are designated as stalk cells and use adhesive binding to and release from the matrix fibers for support and to facilitate 
cohort migration ., Stalk cells also sense chemical gradients but are not highly motile phenotypes ., Thus , the stalk cells in the model are assigned a lower , that is weaker , chemotactic coefficient than the specialized tip cell ., Proliferating cells are located behind the sprout tip 23 , 26 and increase in size as they move through an 18 hour cell cycle clock in preparation for cell division 27 ., Cells that are proliferating can still migrate 26; it is only during the final stage of the cell cycle that endothelial cells stop moving and round up for mitosis ( personal communication with C . Little ) ., As we assume that the presence of VEGF increases cell survivability , we do not model endothelial cell apoptosis ., As described in our previous work 19 , we model the mesh-like anisotropic structure of the extracellular matrix by randomly distributing 1 . 1 µm thick bundles of individual collagen fibrils at random discrete orientations between −90 and 90 degrees ., Unless otherwise stated , model matrix fibers comprise approximately 40% of the total stroma and the distribution of the ECM is heterogeneous , with regions of varying densities as can be seen in Figure 1A and Figure 7D ., The cells move on top of the 2D ECM model and interact with the matrix fibers at the cell membrane through the adhesion term in Eq . 1 ., To relate the density of this model fibrillar matrix to physiological values , we measure matrix fiber density as the ratio of the interstitium occupied by matrix molecules to total tissue space , and compare it to measured values of the volume fraction of collagen fibers in healthy tissues 28 ., In order to isolate and control the effects of the matrix topology on cellular behavior and sprout morphology , we consider a static ECM; that is , we do not model ECM rearrangement or dynamic matrix fiber cross-linking and stiffness ., We do , however , consider endothelial cell matrix degradation in a series of studies presented in Results 
., No single model has been proposed that incorporates every aspect of all processes involved in sprouting angiogenesis , nor is this level of complexity necessary for a model to be useful or predictive ., It is not our intention to include every bio-chemical or mechanical dynamic at play during angiogenesis ., We develop this two-dimensional cell-based model as a step towards elucidating cellular level dynamics fundamental to angiogenesis , including cell growth and migration , and cell-cell and cell-matrix interactions ., Consequently , we do not incorporate processes or dynamics at the intracellular level ., For example , we describe endothelial cell binding of VEGF to determine cell activation and to capture local variations in VEGF gradients , but neglect intracellular molecular pathways signaled downstream of the receptor-ligand complex ., Moreover , our focus is on early angiogenic events and therefore we also do not consider the effects of blood flow on remodeling of mature vascular beds ., Numerical studies of flow-induced vascular remodeling have been given attention in McDougall et al . 
29 , and Pries and Secomb 30 , 31 ., As is the case in many other simulations of biological systems , when we do not have direct experimental measurements for all of the parameters , choosing these parameter values is not trivial ., A list of values and references for our model parameters is provided in Table 1 ., A parameter is derived from experimental data whenever possible , otherwise it is estimated and denoted \u2018est\u2019 ., Fortunately , a sensitivity analysis ( discussed later ) shows that the dynamics of our model are quite robust to substantial variations in some parameters and tells us exactly which parameters are most critical ., We can then choose from a range of parameter values that exhibits the general class of behavior consistent with experimental observations ., See Table 1 for these parameter ranges and Table 3 for the effect of parameter perturbations , as well as supplemental Figures S1 and S2 for examples of cellular behavior under different parameter sets ., In the cellular Potts model , the relative value , not the absolute value , of the parameters corresponds to available physiological measurements and gives rise to the cell behavior observed experimentally ., For example , the Young's modulus for human vascular endothelial cells is estimated at 2 . 01*10^5 Pa 32 ., The Young's modulus of a collagen fiber in aqueous conditions is between 0 . 2\u20130 . 8 GPa 33 ., However , the modulus of a collagen gel network is much lower and is measured at 7 . 5 Pa 34 ., Although the compressibility of interstitial fluid ( water ) is estimated to be 2 . 2 GPa 35 , indicating it is hard to compress under uniform pressure , it deforms easily , that is , the shear modulus is low and is measured at 10^\u22126 Pa 36 ., The qualitative parameters are chosen to correspond to these quantitative measurements ., Thus , the elastic modulus of endothelial cells>matrix fibers>interstitial fluid ( 0 . 2 MPa>7 .
5 Pa>10^\u22126 Pa ) and is reflected in the relative values of the corresponding parameters , , and ., In a similar manner , the coupling parameters , , describe the relative adhesion strengths among endothelial cells , matrix fibers , and interstitial fluid ., For instance , choosing reflects the fact that endothelial cells have a higher binding affinity to each other , via cadherin receptors and gap junctions for example , than they do to matrix fibers 37 , 38 ., The chemotactic potential , , is chosen so that its contribution to the change in total energy is of the same order of magnitude as the contribution to total energy from adhesion or growth ., The difference between the concentration of VEGF at two adjacent lattice sites is on the order of 10^\u22124 ., Therefore , to balance adhesion and growth , must be on the order of 10^6 ., We calibrate this parameter to maximize sprout extension speeds ., Similarly , the parameter for continuity , , is chosen so that cells will not dissociate ., This is achieved by setting greater than the collective contribution to total energy from the other terms ., By equating the time it takes an endothelial cell to divide during the simulation with the endothelial cell cycle duration of 18 hours , we convert Monte Carlo steps to real time units ., In the simulations reported in this paper , 1 Monte Carlo step is equivalent to 1 minute ., Since this model has several enhancements over the previous model 19 , the number of parameters differs , which necessitates recalibration of all the parameters ., Therefore , some parameters take on different values ., The canonical benchmark for validating models of tumor-induced angiogenesis is the rabbit cornea assay 39 , 40 ., In this in vivo experimental model , tumor implants are placed in a corneal pocket approximately 1\u20132 mm from the limbus ., New vessel growth is measured with an ocular micrometer at 10\u00d7 , which has a measurement error of \u00b10 .
1 mm or 100 \u00b5m ., Initially , growth is linear and sprout extension speeds are estimated at a rate of 0 . 5 mm\/day , or 20 . 8\u00b14 . 2 \u00b5m\/hr ., Sprouts then progress at average speeds estimated to be between 0 . 25\u20130 . 50 mm\/day , or 10 . 4\u201320 . 8\u00b14 . 2 \u00b5m\/hr ., More recent measurements of sprout extension speeds during angiogenesis are reported in Kearney et al . 26 ., In this study , embryonic stem cells containing an enhanced green fluorescent protein are differentiated in vitro to form primitive vessels ., Day 8 cell cultures are imaged within an \u223c160 \u00b5m2 area at 1 minute intervals for 10 hours and show sprouting angiogenesis over this period ., The average extension speed for newly formed sprouts is 14 \u00b5m\/hr and ranges from 5 to 27 \u00b5m\/hr ., For cell survival , growth factor is present and is qualitatively characterized as providing a diffuse , or shallow , gradient ., No quantitative data pertaining to growth factor gradients or the effect of chemotaxis during vessel growth are reported 26 ., We use the above experimental models and reported extension speeds as a close approximation to our model of in vivo angiogenesis for quantitative comparison and validation ., We simulate new sprout formation originating from a parent vessel in the presence of a diffusible VEGF field , which creates a shallow VEGF gradient ., We measure average extension speeds over a 14 hour period in a domain 100 \u00b5m by 160 \u00b5m ., As was done in Kearney et al . 
26 , we calculate average sprout velocities as total sprout tip displacement in time and measure this displacement as the distance from the base of the new sprout to the sprout tip ., Figure 1A shows average sprout extension speed over time for our simulated sprouts ., Reported speeds are an average of at least 10 independent simulations using the same initial VEGF profile and parameter set as given in Table 1 ., Error bars represent the standard error from the mean ., The average extension speeds of our simulated sprouts are within the ranges of average sprout speeds measured by both Kearney et al . 26 and Gimbrone et al . 39 ., Table 2 summarizes various morphological measurements for the simulated sprouts ., It shows that the average velocity , thickness , and cell size of the simulated sprouts compare favorably to relevant experimental measurements ., Sprout velocity is given at 10 hours for direct comparison to 26 and averaged over 14 hours ., Sprout thicknesses and cell size are within normal physiological ranges ., There are many different cell shapes and sizes and vessel morphologies , however , that can be obtained in vivo and in vitro given different environmental factors ( VEGF profile , ECM topology and stiffness , inhibitory factors , other cell types , etc . 
) ., In this manuscript , we investigate several of these dependencies and , as we discuss below , specific model parameters can be tuned to reproduce different cellular interactions and environments ., Figure 1A indicates that average sprout extension speed changes as a function of time ., Within the first two hours , speeds average \u223c30 \u00b5m\/hr and the new sprout consists of only 1\u20132 endothelial cells ., At two hours , sprouts contain an average of 3 cells , and at 4 hours , there are a total of 5\u20136 cells ., Over time , as more cells are added to the developing sprout , cell-cell adhesion and cellular adhesion to the extracellular matrix slow the sprout extension speed ., The inset in Figure 1A shows the geometry of the computational domain and simulated sprout development at 7 . 8 hours ., As shown , simulated sprouts are approximately one cell diameter wide , which compares quantitatively well to reported VEGF-induced vessel diameters 41 , 42 ., Here and in all simulation snapshots , tip cells are identified with a \u2018T\u2019 ., In moving multicellular clusters , rear retraction is a collective process that involves many cells simultaneously 16 ., A natural result of the cell-based model is that cells exhibit rear retraction , which refers to the ability of a cell to release its trailing adhesive bonds with the extracellular matrix during migration ., Collective migration , another characteristic dynamic observed during sprout growth , is also evident during the simulations ( see videos ) ., The VEGF concentration profile in picograms ( pg ) at 7 . 8 hours is given in Figure 1B ., Higher concentrations of VEGF are encountered as the cells approach the tumor ., However , because cell uptake of VEGF is small compared to the amount of available VEGF , it is difficult to discern the heterogeneities in the VEGF profile from this figure ., Figure 1C is the VEGF gradient profile ( pg ) at 7 .
8 hours and is a better indicator of the changes in local VEGF concentration ., This image shows larger gradients in the proximity of the tip cell and along the leading edges of the new sprout ., On average , simulated sprouts migrate 160 \u00b5m and reach the domain boundary in approximately 15 . 6 hours , before any cells in the sprout complete their cell cycle and proliferate ., We do not expect to see proliferation in the new sprout because the simulation duration is less than the 18 hour cell cycle and the cell cycle clock is set to zero for newly recruited cells to simulate the very onset of angiogenesis ., In our simulations , sprout extension is facilitated by cell recruitment from the parent vessel ., Between 15 and 20 cells are typically recruited , which agrees with the number of cells we estimate would be available for recruitment based on parent vessel cell proliferation reported by Kearney et al . 26 ., In those experiments 26 , proliferation in the parent vessel was measured for day 8 sprouts , which likely has cells at various stages in their cell cycles ., Proliferation in the new sprout is another mechanism for sprout extension ., Thus , we consider the possibility that cells recruited from the parent vessel may be in different stages of their cell cycles by initializing the cell cycle clock of each recruited cell at randomly generated times ., We observe no differences in extension speeds , sprout morphology , or the number of cells recruited as a result of the assumption we make for cell cycle initialization ( or random ) ., This suggests that , in the model , stalk cell proliferation and cell recruitment from the parent vessel are complementary mechanisms for sprout extension ., By adjusting key model parameters , we are able to simulate various morphogenic phenomena ., For example , by increasing the chemotactic sensitivity of cells in the sprout stalk and decreasing the parameter controlling cellular adhesion to the matrix , , we are able to 
capture stalk cell migration and translocation along the side of a developing sprout ( Video S1 ) ., This phenomenon , where stalk cells weaken their adhesive bonds to the extracellular matrix and instead use cell-cell adhesion to facilitate rapid migration , frequently occurs in embryogenesis ( personal communication with C . Little ) and is described as preferential migration to stretched cells 43 ., Compare Video S1 with Figure 1, ( f ) in Szabo et al . 2007 43 ., Figure S1 shows the morphology for one particular set of parameter values corresponding to weaker cell-cell and cell-matrix adhesion and stronger chemotaxis ., In this simulation , cells elongate to approximately 40 \u00b5m in length , fewer cells are recruited from the parent vessel , and the average extension speed at 14 hours slows to 6 . 8 \u00b5m\/hr ., The length scale is consistent with experimental measurements of endothelial cell elongation 23 , 44 ., Figure 5 from Oakley et al . 1997 shows images from experiments using human fibroblasts stained for actin, ( e ) and tubulin, ( f ) on micro-machined grooved substratum 45 ., These experiments demonstrate that cells alter their shape , orientation , and polarity to align with the direction of the grooves ( double-headed arrow ) , exhibiting topographic , or contact , guidance ., Figure S2 is a simulation designed to mimic these experiments by isolating the cellular response to topographical guidance on similarly patterned substratum ., In this simulation , there is no chemotaxis and no cell-cell contact; cells respond only to topographical cues in the extracellular matrix ., Simulated cells alter their shape and orient in the direction of the matrix fibers ., Figure S2 bears a striking resemblance to the cell shapes observed in 45 ., We are also able to simulate interstitial invasion\/migration by a single cell by turning off proliferation and cell recruitment but leaving all other parameters unchanged ( Video S2 ) ., This simulation is especially 
relevant in the context of fibroblast recruitment during wound healing and tumor cell invasion ( e . g . , glioblastoma , the most malignant form of brain cancer 46 ) , where understanding cell-matrix interactions and directed motility are critical mechanisms for highly motile or invasive cell phenotypes ., We design a set of numerical experiments allowing us to observe the onset of angiogenesis in extravascular environments of varying matrix fiber density ., We consider matrix fiber densities given as a fraction of the total interstitial area , ., As a measure of matrix orientation equivalency , the total fiber orientation in both the x and the y direction is calculated as we increased the matrix density ., The total x and total y fiber orientation do not vary with changes in total matrix density ., Besides varying the matrix density , all other parameters are held fixed ., All simulations last the same duration corresponding to approximately 14 hours ., The average rate at which the sprout grows and migrates , or its average extension sp","headings":"Introduction, Methods, Results, Discussion","abstract":"The extracellular matrix plays a critical role in orchestrating the events necessary for wound healing , muscle repair , morphogenesis , new blood vessel growth , and cancer invasion ., In this study , we investigate the influence of extracellular matrix topography on the coordination of multi-cellular interactions in the context of angiogenesis ., To do this , we validate our spatio-temporal mathematical model of angiogenesis against empirical data , and within this framework , we vary the density of the matrix fibers to simulate different tissue environments and to explore the possibility of manipulating the extracellular matrix to achieve pro- and anti-angiogenic effects ., The model predicts specific ranges of matrix fiber densities that maximize sprout extension speed , induce branching , or interrupt normal angiogenesis , which are independently confirmed 
by experiment ., We then explore matrix fiber alignment as a key factor contributing to peak sprout velocities and in mediating cell shape and orientation ., We also quantify the effects of proteolytic matrix degradation by the tip cell on sprout velocity and demonstrate that degradation promotes sprout growth at high matrix densities , but has an inhibitory effect at lower densities ., Our results are discussed in the context of ECM targeted pro- and anti-angiogenic therapies that can be tested empirically .","summary":"A cell migrating in the extracellular matrix environment has to pull on the matrix fibers to move ., When the matrix is too dense , the cell secretes enzymes to degrade the matrix proteins in order to get through ., And when the matrix is too sparse , the cell produces matrix proteins to locally increase the \u201cfoothold\u201d ., How cells interact with the extracellular matrix is important in many processes from wound healing to cancer invasion ., We use a computational model to investigate the topography of the matrix on cell migration and coordination in the context of tumor induced new blood vessel growth ., The model shows that the density of the matrix fibers can have a strong effect on the extension speed and the morphology of a new blood vessel ., Further results show that matrix degradation by the cells can enhance vessel sprout extension at high matrix density , but impede sprout extension at low matrix density ., These results can potentially point to new targets for pro- and anti-angiogenesis therapies .","keywords":"cardiovascular disorders\/vascular biology, developmental biology\/morphogenesis and cell biology, cell biology\/morphogenesis and cell biology, cell biology\/cell growth and division, mathematics, cell biology\/cell adhesion, computational biology\/systems biology","toc":null} +{"Unnamed: 0":674,"id":"journal.pcbi.1004970","year":2016,"title":"Accurate Automatic Detection of Densely Distributed Cell Nuclei in 3D 
Space","sections":"The animal brain is the most complex information processing system in living organisms ., To elucidate how real nervous systems perform computations is one of the fundamental goals of neuroscience and systems biology ., The wiring information for neural circuits and visualization of their activity at cellular resolution are required for achieving this goal ., Advances in microscopy techniques in recent years have enabled whole-brain activity imaging of small animals at cellular resolution 1\u20134 ., The wiring information of all the neurons in the mouse brain can be obtained using recently developed brain-transparentization techniques 5\u20139 ., Detection of neurons from microscopy images is necessary for optical measurements of neuronal activity or for obtaining wiring information ., Because there are many neurons in the images , methods of automatic neuron detection , rather than manual selection of ROIs ( regions of interest ) , are required and several such methods have been proposed 10 , 11 ., Detection of cells that are distributed in three-dimensional ( 3D ) space is also important in other fields of biology such as embryonic development studies 12\u201317 ., In these methods , cell nuclei are often labeled by fluorescent probes and used as a marker of a cell ., To identify nuclei in such images , the basic method is blob detection , which for example consists of local peak detection followed by watershed segmentation ., If the cells are sparsely distributed , blob detection methods are powerful techniques for nucleus detection ., However , if two or more cells are close to each other , the blobs are fused , and some cells will be overlooked ., These false negatives may be trivial for the statistics of the cells but may strongly affect individual measurements such as those of neuronal activity ., Overlooking some nuclei should be avoided when subsequent analyses assume that all the cells were detected , for example , when making wiring 
diagram of neurons or establishing a cell lineage in embryonic development ., Therefore , correct detection of all nuclei from images without false negatives is a fundamental problem in the field of bio-image informatics ., Although many efforts have been made to develop methods that avoid such false negatives , these methods seem insufficient to overcome the problem ., In the head region of Caenorhabditis elegans , for example , the neuronal nuclei are densely packed and existing methods produce many false negatives , as shown below ., Actually , in the studies of whole-brain activity imaging of C . elegans reported so far , the local peak detection method that can overlook many nuclei was employed 3 , 18 , or the nuclei were manually detected 19 , 20 ., Highly accurate automatic nucleus detection methods should be developed in order to improve the efficiency and accuracy of such image analysis ., Here we propose a highly accurate automatic nucleus detection method for densely distributed cell nuclei in 3D space ., The proposed method is based on a newly developed clump splitting method suitable for 3D images and improves the detection of all nuclei in 3D images of neurons of nematodes ., A combination of this approach with a Gaussian mixture fitting algorithm yields highly accurate locations of densely packed nuclei and enables automatic tracking and measuring of these nuclei ., The performance of the proposed method is demonstrated by using various images of densely-packed head neurons of nematodes , which were obtained by various types of microscopes ., In this study , we focused on the head neurons of the soil nematode C . 
elegans , which constitute the major neuronal ensemble of this animal 21 ., All the neuronal nuclei in a worm of strain JN2100 were visualized by the red fluorescent protein mCherry ., The head region of the worm was imaged by a confocal microscope , and we obtained 3D images of 12 animals ( Data 1 , Fig 1A ) ., The shape of the nuclei was roughly ellipsoidal ( Fig 1B ) ., The fluorescence intensity increased toward the centers of the nuclei ( Fig 1D ) ., The typical half-radius of the nuclei was about 1 . 10 \u03bcm ( S1 Fig ) ., The distance to the nearest neighboring nucleus was 4 . 30 \u00b1 2 . 13 \u03bcm ( mean and standard deviation , S1 Fig ) , suggesting that the neurons are densely distributed in 3D space ., The mean fluorescence intensities differed among neurons by one order of magnitude ( S1 Fig ) , making it difficult to detect a darker nucleus near a bright nucleus ., We first applied conventional blob detection techniques to the 3D image ( Fig 1C\u20131E ) ., Salt-and-pepper noise and background intensities were removed from the image ., The image was smoothed to avoid over-segmentation ( Fig 1C and 1D ) ., Local intensity peaks in the preprocessed image were detected and used as seeds for 3D seeded grayscale watershed segmentation ., Each segmented region was regarded as a nucleus ( Fig 1E ) ., We found that dark nuclei in high-density regions often escaped detection ., If the dark nucleus was adjacent to a bright nucleus , the fluorescence of the bright nucleus overlapped that of the dark one , and the local intensity peak in the dark nucleus was masked ( Fig 1D ) ., As a result , the seed for the dark nucleus was lost , and the dark nucleus fused with the bright nucleus ( Fig 1E ) ., The rate of false-negative nuclei was 18 . 
9% ., In contrast , our proposed method successfully detected and segmented the dark nuclei ( Fig 1F ) ., The shapes of the nuclei are roughly ellipsoidal , and the fluorescence intensity increased toward the centers of the nuclei , suggesting that the intensity of nuclei can be approximated by a mixture of trivariate Gaussian distributions ., The intensity f_k of the k-th Gaussian distribution g_k at voxel position, x \u2208 R^3, can be written as, f_k ( x ) = \u03c0_k g_k ( x|\u03bc_k , \u03a3_k ) = \u03c0_k exp ( \u2212 ( 1\/2 ) ( x\u2212\u03bc_k ) ^T \u03a3_k^\u22121 ( x\u2212\u03bc_k ) ) ,, where \u03bc_k and \u03a3_k are the mean vector and covariance matrix of g_k , respectively , and \u03c0_k is an intensity scaling factor ., To explain the effect on the curvature , typical bright and dark nuclei were approximated by the Gaussian distribution and are shown in Fig 2 as iso-intensity contour lines ( Fig 2A , 2C and 2E ) and plots of the intensity along the cross section ( Fig 2B , 2D and 2F ) ., When a bright nucleus was near a dark nucleus , the peak intensity of the dark nucleus merged with the tail of the fluorescence intensity distribution of the bright nucleus and no longer formed a peak ., These false negatives can be avoided by using methods for dividing a close pair of objects , or clump splitting ., Such methods have been developed for correct detection of objects in two-dimensional ( 2D ) images 22\u201326 ., These methods focus on the concavity of the outline of a blob ., The concavity was calculated based on one of , or a combination of , various measurements such as angle 25 , area 27 , curvature 26 , and distance measurements 24 of the outline ., In these methods , after binarization of the image , concavity was obtained for each point on the outline ., Then the concave points were determined as the local peaks of the concavity ., After determination of concave points , a line connecting a pair of concave points is regarded as the boundary between the objects ., When we 
regard the outermost contour line in Fig 2E as the outline of the fused blob ( Fig 2G ) , the conventional 2D clump splitting method can be easily applied and two concave points are detected from the fused blob ( Fig 2G , red circles ) ., The blob was divided into two parts by a border line connecting the two points , and the dark nucleus was detected ., In the ideal case in Fig 2E , we obtained necessary and sufficient number of concave points ., In real images , however , we might obtain too many concave points because outlines often contain noise and are not smooth ., However , the number of concave points to choose is unknown because it is hard to know how many nuclei are included in a blob in a real image ., Further , it is not obvious how to find the correct combinations of concave points to be linked if a blob contains three or more objects ., In addition , for 3D images , the concepts of border lines that connect two concave points cannot be naturally expanded to three dimensions , because now we need some extra processes such as connecting groups of concave points in order to form border surfaces ., Even if we regard a 3D image as a stack of 2D images , it is hard to split objects fused in the z direction ( direction of the stacks ) 11 , 27 ., Here we introduce a concept of areas of concavity instead of concave points ( i . e . local peak of concavity ) ., Hereafter we use curvature as a measure of concavity and focus on areas of negative curvature for simplicity and clarity , but other measures such as angle , area , and distance from convex hull may be applicable ., Furthermore we used the iso-intensity contour lines inside the object in addition to the outline of the object ., Near the concave points in Fig 2E , the iso-intensity contour lines have negative curvature; i . e . 
, they curve in the direction of low intensities ., Negative curvature may be a landmark of the border line because a single Gaussian distribution has positive curvature everywhere ., Actually , the voxels at which an iso-intensity contour line has negative curvature were between two Gaussian distributions ( Fig 2E , area between the broken lines ) ., Once these voxels are removed from the blob , detection of two nuclei should be straightforward ., This approach is different from the classic clump splitting methods in two respects; focusing on area rather than local peak of concavity ( concave points ) , and using iso-intensity contour lines in addition to the outline ., These differences eliminate the need for determining how many concave points should be chosen and for obtaining correct combinations of the concave points because the area of negative curvature will cover the border lines ., Therefore we can use the approach even if a blob contains three or more objects ., In addition , this approach is robust to noise because it does not depend on a single contour line ., Furthermore , this approach can be expanded to 3D images naturally because the 3D area ( i . e . 
voxels ) of negative curvatures will cover the border surfaces of the 3D objects ., Iso-intensity contour lines in 2D images are parts of iso-intensity contour surfaces in three dimensions ., A point on an iso-intensity surface has two principal curvatures , which can be calculated from the intensities of surrounding voxels ( S2 Text ) 28 ., The smaller of the two principal curvatures is positive at any point in a single Gaussian distribution but is negative around the border of two Gaussian distributions ., Therefore , once voxels that have negative curvature are removed from the blob , two or more nuclei should be detected easily in 3D images ., Thus our approach solves the above problems of the classic clump splitting methods ., We applied the above approach to real 3D images ( Fig 3 ) ., The original images were processed by denoising , background removal , and smoothing to obtain the preprocessed images ., The peak detection algorithm could find only a peak from the bright nucleus , and the blob obtained by watershed segmentation contained both nuclei ., The principal curvatures of the iso-intensity surface were calculated from the preprocessed image ., There were voxels of negative curvature in the area between two nuclei , but the area did not divide the two nuclei completely ., The voxels of negative curvature were removed from the blob , and the blob was distance-transformed; these procedures were followed by 3D watershed segmentation ., Thus , the two nuclei were separated , and the dark nucleus was successfully detected ., After voxels of negative curvature were removed from the blobs , the size of blobs obtained by the second watershed segmentation tended to be smaller than real nuclei , and the distances between the blobs tended to be larger ., To obtain the precise positions and sizes of the nuclei , least squares fitting with a Gaussian mixture was applied to the entire 3D image using a newly developed method ( see Methods ) ., The number of Gaussian 
distributions and the initial values of the centers of the distributions were derived from the above results ., Repeated application of watershed segmentation may increase over-segmentation ., If the distance between two fitted Gaussian distributions is too small , the two distributions may represent the same nuclei ., In this case , one of the two distributions was removed to avoid over-segmentation , and the fitting procedure was repeated with a single Gaussian distribution ., The proposed method detected 194 out of 198 nuclei in the 3D image ( Fig 4 ) ., Among the four overlooked nuclei , the intensities of two of them were too low to be detected ., The other two had moderate intensities but were adjacent to brighter nuclei ., In these cases , curvature-based clump splitting successfully split the two nuclei ., However , deviations of the brighter nuclei from Gaussian distributions disrupted the fitting of the Gaussian distributions and resulted in misplacement of the Gaussian distributions for the darker nuclei , which were instead fitted to the brighter nuclei ., On the other hand , the proposed method returned 11 false positives ., Two of them resulted from the misplacement of the Gaussian distribution for the darker nuclei described above ., Four of them were not neuronal nuclei but were fluorescence foci intrinsic to the gut ., Three of them were the result of over-segmentation of complex-shaped non-neuronal nuclei ., One of them was mislocalized fluorescence in the cytosol ., The last one was the result of over-segmentation of a large nucleus that was fitted with two Gaussian distributions separated by a distance larger than the cutoff distance ., We compared the performance of the proposed method with five previously published methods for nucleus segmentation ( Fig 5 and Table 1 ) ., Ilastik 29 is based on machine learning techniques and uses image features such as Laplacian of Gaussian ., FARSight 30 is based on graph cut techniques ., RPHC 1 was 
designed for multi-object tracking problems such as whole-brain activity imaging of C . elegans and uses a numerical optimization-based peak detection technique for object detection ., The 3D watershed plugin in ImageJ 31 consists of local peak detection and seeded watershed ., This method is almost the same as the conventional blob detection method used in our proposed method ., CellSegmentation3D 32 uses gradient flow tracking techniques and was developed for clump splitting ., This method has been used in the study of automated nucleus detection and annotation in 3D images of adult C . elegans 33 ., We applied these six methods to 12 animals in Data 1 ( Fig 5 ) and obtained the performance indices ( Table 1 , see Methods ) ., The parameters of each method were optimized for the dataset ., The 3D images in the dataset contain 190 . 92 nuclei on average , based on manual counting ., The proposed method found 96 . 9% of the nuclei and the false negative rate was 3 . 1% , whereas the false negative rates of the other methods were 11 . 2% or more ., The false positive rate of the proposed method was 4 . 9% and that of the other methods ranged from 2 . 1% to 21 . 
2% ., The proposed method shows the best performance with both of the well-established indices , F-measure 12 and Accuracy 34 , because of the very low false negative rate and modest false positive rate ., It should be noted that all of the compared methods overlooked more than 10% of nuclei in our dataset ., The reason for this was suggested by the segmentation results , in which almost all of these methods failed to detect the dark nuclei near the bright nuclei and fused them ( Fig 5 , right column ) ., These results suggest that all the compared methods have difficulty in handling 3D images with either large variance of object intensity or dense packing of objects , or both ( S1 Fig ) ., These results clearly indicate that our proposed method detects densely distributed cell nuclei in 3D space with the highest accuracy ., The very low false negative rate is the most significant improvement of the proposed method over the other methods , suggesting that it will drastically improve the efficiency and accuracy of image analysis steps ., Because none of the computational image analysis methods is perfect , experimenters should be able to correct any errors they find ., Therefore , a user-friendly graphical user interface ( GUI ) for visualization and correction of the results is required ., We developed a GUI called RoiEdit3D for visualizing the result of the proposed method and correcting it manually ( S2 Fig ) ., Because RoiEdit3D is based on ImageJ\/Fiji 35 , 36 in MATLAB through Miji 37 , experimenters can use the familiar interface and tools of ImageJ directly ., Developers can extend the functionality using a framework of their choice , such as ImageJ macros , Java , MATLAB scripts , or C++ ., Interfacing with downstream analyses should be straightforward because the corrected results are saved in the standard MATLAB data format and can be exported to Microsoft Excel ., Three-dimensional images are shown as trihedral figures 
using the customized Orthogonal View plugin in ImageJ ( S2 Fig ) ., Fitted Gaussian distributions are shown as ellipsoidal regions of interest ( ROIs ) in each view ., The parameters of the Gaussian distributions are shown in the Customized ROI Manager window in tabular form ., The Customized ROI Manager and trihedral figures are linked , and selected ROIs are highlighted in both windows ., When the parameters of the distributions or the names of nuclei are changed in the Customized ROI Manager window , the corresponding ROIs in the trihedral figures are updated immediately ., Least squares fitting with a Gaussian mixture can be applied after ROIs are manually removed or added ., RoiEdit3D can be used for multi-object tracking ., The fitted Gaussian mixture at a time point is used as an initial value for the mixture at the next time point , and a fitting procedure is executed ( Fig 6A ) ., Additionally , the intensities of nuclei can be obtained as parameters of the fitted Gaussian distributions ., We tried to track and measure the fluorescence intensity of nuclei in real time-lapse 3D images ( Data 2 ) ., The animal in the image expressed a calcium indicator , so neural activity during stimulation with the sensory stimulus , sodium chloride , could be measured as changes in fluorescence intensity ., The proposed nucleus detection method was applied to the first time point in the image and found 194 out of 198 nuclei ., Seventeen false positives and four false negatives were corrected manually using RoiEdit3D ., Then the nuclei in the time-lapse 3D image were tracked by the proposed method ., Most of the nuclei were successfully tracked ., One or more tracking errors occurred in 27 nuclei during 591 frames , and the success rate was 86 . 4% , which is comparable to that in the previous work 1 ., The tracking process took 19 . 83 sec per frame ( total 3 . 
25 hr ) ., The ASER gustatory neuron was successfully identified and tracked in the time-lapse 3D image by the proposed method ( Fig 6B ) ., The ASER neuron reportedly responds to changes in the sodium chloride concentration 38 , 39 ., We identified a similar response of the ASER neuron using the proposed method ( Fig 6C ) ., This result indicates that the proposed method can be used for multi-object tracking and measuring , which is an essential function for whole-brain activity imaging ., Furthermore , the proposed method was utilized to measure the fluorescence intensity of nuclei in time-lapse 2D images ( Data 3 ) ., The proposed nucleus detection method was applied to the image for the first time point ( S3 Fig ) ., Data 3 does not contain images of a highly-localized nuclear marker , and therefore images of a calcium indicator that was weakly localized to the nuclei were used instead ., The proposed method found 7 out of 9 nuclei ., Six false positives and two false negatives were corrected manually using RoiEdit3D ., Then the nuclei were tracked by the proposed method ., All of the nuclei were successfully tracked during 241 time frames ., The ASER neuron was successfully identified and tracked in the 2D images ., The response of the ASER neuron in the 2D images ( S3 Fig ) is similar to that in the 3D images ., This result indicates that the proposed method can be used for multi-object tracking and measuring in 2D as well as 3D images ., In this article , we proposed a method that accurately detects neuronal nuclei densely distributed in 3D space ., Our GUI enables visualization and manual correction of the results of automatic detection of nuclei from 3D images as well as 2D images ., Additionally , our GUI successfully tracked and measured multiple objects in time-lapse 2D and 3D images ., Thus , the proposed method can be used as a comprehensive tool for analysis of neuronal activity , including whole-brain activity imaging ., Although the 
microscopy methods for whole-brain activity imaging of C . elegans have been intensively developed in recent years 3 , 18\u201320 , computational image analysis methods have remained underdeveloped ., In these works , the neuronal nuclei in the whole-brain activity imaging data were detected either manually or automatically by peak detection ., Manual detection is most reliable but time- and labor-consuming , whereas the accuracy of automatic peak detection is relatively low because it tends to overlook dark nuclei near bright nuclei ., Our proposed method will reduce the difficulty and improve the accuracy ., Furthermore , the numbers of neuronal nuclei found or tracked in these four works were smaller than the real number of neuronal nuclei 3 , 18\u201321 ., The scarcity may be due not only to experimental limitations such as fluctuation of fluorescent protein expression or low image resolution , but also to the limitations of image analysis methods that may overlook nuclei ., The proposed method can detect almost all the nuclei in our whole-brain activity imaging data ( Fig 6 ) , suggesting that it can avoid errors caused by overlooking nuclei , such as erroneous measurements of neural activities and misidentifications of neuron classes ., Thus , our method will be highly useful for this purpose ., Peng and colleagues have intensively developed computational methods for automatic annotation of cell nuclei in C . 
elegans 33 , 40 , 41 ., Although their methods successfully annotate cells in many tissues such as body wall muscles and intestine , the methods seem not to be applicable to annotation of head neurons in adult worms , which is highly desired in the field of whole-brain activity imaging 20 ., They pointed out that the positions of neuronal nuclei in adult worms are highly variable 33 , and this may be one of the reasons for the difficulty ., The accuracy of detection and segmentation of neuronal nuclei may be another reason , because CellSegmentation3D , which was incorporated in their latest annotation framework 33 , shows compromised performance on our dataset ( Table 1 , Fig 5 ) ., Our proposed method improves the accuracy of neuronal nucleus detection and will promote the development of automatic annotation methods for the neurons ., It is noteworthy that the method of simultaneous detection and annotation of cells 41 is unique and useful in studies of C . elegans ., Because the method assigns the reference positions to the sample image directly and avoids the detection step , it finds cells without overlooking them under some conditions , but it would not work correctly under large variation of the numbers or the relative positions of the nuclei , both of which are observed in our dataset ., The optimal method for accurate detection of nuclei will vary depending on the characteristics of the nuclei ., Many conditions such as the visualization method , shape , and distribution of nuclei will affect these characteristics ., In our case , the distributions of the fluorescence intensity of nuclei were similar to Gaussian distributions; thus , we developed an optimal method for such cases ., Even if an original image does not have these characteristics , some preprocessing steps such as applying a Gaussian smoothing filter may enable application of our method to the image ., Although choosing the optimal method and tuning its parameters might be more work than manual 
identification , the automatic detection method would improve objectivity and efficiency ., In the field of biology , it is often the case that hundreds or thousands of animals should be analyzed equally well ., In such cases , manual detection would be time-consuming and the automatic detection method would be required ., For tracking the nuclei in time-lapse images , we can apply the detection method to each time frame separately and then link the detected nuclei between frames ., In this case , some false negatives and false positives would be produced separately for each frame , and they might disrupt the linking step , resulting in an increase in tracking errors ., On the other hand , in the proposed method , the result of the automatic detection can be corrected manually , resulting in a decrease in tracking errors ., The proposed tracking method is a simple approach ., Combination with excellent existing tracking methods will likely improve the tracking performance of the proposed method ., Cell division and cell death did not occur in our data , but they are fundamental problems in the analysis of embryonic development ., It may be important to improve our method so that it handles such phenomena appropriately if it is to be applied to these problems ., C . elegans strains JN2100 and JN2101 were used in this study ., Animals were raised on nematode growth medium at 20\u00b0C ., E . coli strain OP50 was used as a food source ., We used three datasets in this study ., Data 1 and 2 contain ~200 neuronal nuclei , and Data 3 contains 9 nuclei ., The positions of the centers of the nuclei were manually corrected by experimental specialists using the proposed GUI ., The blobs of the nuclei were detected by the conventional method ( Steps 1 & 2 ) ., Under-segmented blobs were detected and split in Step 3 ., The precise positions and sizes of the nuclei were obtained in Step 4 . 
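The curvature test at the heart of Step 3 can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: it computes the two principal curvatures of the iso-intensity surface through each voxel from the gradient and Hessian of the smoothed intensity (the standard level-set second fundamental form; the paper's exact formulas are in S2 Text), flags voxels whose smaller curvature is negative, and removes them from a thresholded blob. The function names and all parameter values are ours.

```python
import numpy as np
from scipy import ndimage as ndi

def smaller_principal_curvature(img, sigma=1.0, mask=None, grad_eps=1e-6):
    """Smaller principal curvature of the iso-intensity surface through each
    voxel, from the gradient and Hessian of the smoothed intensity."""
    sm = ndi.gaussian_filter(np.asarray(img, float), sigma)
    grads = np.gradient(sm)                         # derivatives along z, y, x
    hess = np.empty(sm.shape + (3, 3))
    for i, gi in enumerate(grads):                  # Hessian by repeated
        d = np.gradient(gi)                         # finite differences
        for j in range(3):
            hess[..., i, j] = d[j]
    g = np.stack(grads, axis=-1)
    gn = np.linalg.norm(g, axis=-1)
    if mask is None:
        mask = np.ones(sm.shape, bool)
    k_small = np.full(sm.shape, np.inf)             # inf = "not evaluated"
    for z, y, x in np.argwhere(mask & (gn > grad_eps)):
        n = g[z, y, x] / gn[z, y, x]                # normal, toward brighter side
        a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        t1 = np.cross(n, a); t1 /= np.linalg.norm(t1)
        t2 = np.cross(n, t1)                        # orthonormal tangent basis
        T = np.stack((t1, t2), axis=1)              # 3x2
        M = -T.T @ hess[z, y, x] @ T / gn[z, y, x]  # 2x2 second fundamental form
        k_small[z, y, x] = np.linalg.eigvalsh(M)[0]
    return k_small

def notch_blob(img, thresh, sigma=1.0):
    """Remove negative-curvature voxels from a thresholded blob (Step 3).
    The paper then distance-transforms the notched blob and re-runs a
    seeded 3D watershed to finish the split."""
    blob = img > thresh
    k = smaller_principal_curvature(img, sigma, mask=blob)
    return blob & ~(k < 0)
```

On a synthetic pair of overlapping Gaussian blobs, the negative-curvature voxels form a ring around the neck between the two "nuclei"; as the text above notes, removing them does not always disconnect the blob on its own, which is why the notched blob is then distance-transformed and re-segmented by seeded 3D watershed.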
The names and parameter values of the filters used in the proposed method are shown in S1 Table ., The performance of the proposed method for cell detection was compared with five state-of-the-art methods: Ilastik , FARSight , RPHC , the 3D watershed plugin in ImageJ , and CellSegmentation3D ., Ilastik is a machine learning-based method and requires training data , which were created manually ., The parameters of RPHC were the same as in the literature 1 ., The parameters of the other methods were optimized based on F-measure and accuracy ., The parallel displacements of the raw 3D images of 12 animals in Data 1 were corrected , and the methods were applied to the images ., Because FARSight crashed during processing , its command line version ( segment_nuclei . exe ) was used 50 ., The input images for FARSight and CellSegmentation3D were converted to 8-bit images because these methods could not operate with 32-bit grayscale images ., Because CellSegmentation3D could not operate with our whole 3D image , its input images were divided and processed separately ., The comparison was performed and the processing time was measured on the same PC as that used for the proposed method ., All the methods other than CellSegmentation3D might be able to utilize multi-threading ., The centroids of the segmented regions obtained by each program were used as the representative points of the objects ., For the proposed method , the means of the fitted Gaussian distributions ( \u03bck ) were used as the representative points ., The Euclidean distances between the representative points and the manually annotated Ground Truth points were obtained ., If a representative point was the nearest neighbor of a Ground Truth point and vice versa , the object was regarded as a True Positive ., If only the former condition was met , the Ground Truth point was regarded as a False Negative ., If only the latter condition was met , the object was regarded as a False Positive ., We obtained the indices of the performance 12 , 34 , 50 as 
follows: True positive rate = TP \/ GT , False positive rate = FP \/ GT , False negative rate = FN \/ GT , F\u2212measure = 2\u00d7TP \/ ( 2\u00d7TP + FN + FP ) , Accuracy = TP \/ ( TP + FN + FP ) , where GT = TP + FN ., GT , TP , FP and FN mean Ground Truth , True Positive , False Positive and False Negative , respectively .","headings":"Introduction, Results, Discussion, Methods","abstract":"To measure the activity of neurons using whole-brain activity imaging , precise detection of each neuron or its nucleus is required ., In the head region of the nematode C . elegans , the neuronal cell bodies are distributed densely in three-dimensional ( 3D ) space ., However , no existing computational methods of image analysis can separate them with sufficient accuracy ., Here we propose a highly accurate segmentation method based on the curvatures of the iso-intensity surfaces ., To obtain accurate positions of nuclei , we also developed a new procedure for least squares fitting with a Gaussian mixture model ., Combining these methods enables accurate detection of densely distributed cell nuclei in 3D space ., The proposed method was implemented as a graphical user interface program that allows visualization and correction of the results of automatic detection ., Additionally , the proposed method was applied to time-lapse 3D calcium imaging data , and most of the nuclei in the images were successfully tracked and measured .","summary":"To reach the ultimate goal of neuroscience , understanding how each neuron functions in the brain , whole-brain activity imaging techniques with single-cell resolution have been intensively developed ., There are many neurons in the whole-brain images and manual detection of the neurons is very time-consuming ., However , the neurons are often packed densely in the 3D space and existing automatic methods fail to correctly split the clumps ., In fact , in previous reports of whole-brain activity imaging of C . 
elegans , the number of detected neurons was smaller than expected ., Such scarcity may be a cause of measurement errors and misidentification of neuron classes ., Here we developed a highly accurate automatic cell detection method for densely-packed cells ., The proposed method successfully detected almost all neurons in whole-brain images of the nematode ., Our method can be used to track multiple objects and enables automatic measurements of the neuronal activities from whole-brain activity imaging data ., We also developed a visualization and correction tool that is helpful for experimenters ., Additionally , the proposed method can be a fundamental technique for other applications such as making wiring diagrams of neurons or establishing a cell lineage in embryonic development ., Thus , our framework supports effective and accurate bio-image analyses .","keywords":"invertebrates, fluorescence imaging, engineering and technology, caenorhabditis, neuroscience, animals, animal models, caenorhabditis elegans, human factors engineering, model organisms, neuroimaging, research and analysis methods, computer and information sciences, imaging techniques, animal cells, man-computer interface, calcium imaging, graphical user interface, cellular neuroscience, cell biology, computer architecture, neurons, nematoda, biology and life sciences, cellular types, image analysis, organisms, user interfaces","toc":null} +{"Unnamed: 0":1750,"id":"journal.ppat.0030137","year":2007,"title":"A Virtual Look at Epstein\u2013Barr Virus Infection: Biological Interpretations","sections":"Computer simulation and mathematical modeling are receiving increased attention as alternative approaches for providing insight into biological and other complex systems 1 ., An important potential area of application is microbial pathogenesis , particularly in cases of human diseases for which applicable animal models are lacking ., To date , most simulations of viral pathogenesis have tended to focus on HIV 
2\u20137 , and employ mathematical models based on differential equations ., None have addressed the issue of acute infection by the pathogenic human herpes virus Epstein\u2013Barr virus ( EBV ) and its resolution into lifetime persistence ., With the ever-increasing power of computers to simulate larger and more complex systems , the possibility arises of creating an in silico virtual environment in which to study infection ., We have used EBV to investigate the utility of this approach ., EBV is a human pathogen , associated with neoplastic disease , that is a paradigm for understanding persistent infection in vivo and for which a readily applicable animal model is lacking ( reviewed in 8 , 9 ) ., Equally important is that EBV infection occurs in the lymphoid system , which makes it relatively tractable for experimental analysis and has allowed the construction of a biological model of viral persistence that accounts for most of the unique and peculiar properties of the virus 10 , 11 ., We are therefore in a position to map this biological model onto a computer simulation and then ask how accurately it represents EBV infection ( i . e . 
, use our knowledge of EBV to test the validity of the simulation ) and whether the matching of biological observation and simulation output provides novel insights into the mechanism of EBV infection ., Specifically , we can ask if it is possible to identify critical switch points in the course of the disease where small changes in behavior have dramatic effects on the outcome ., Examples of this would be the switch from clinically silent to clinically apparent infection and from benign persistence to fatal infection ( as occurs in fatal acute EBV infection and the disease X-linked lymphoproliferative syndrome 12 , for example ) , or to clearance of the virus ., Indeed , is clearance ever possible , or do all infections lead inevitably to either persistence or death ?, Such an analysis would be invaluable ., Not only would it provide insight into the host\u2013virus balance that allows persistent infection , but it would also reveal the feasibility and best approaches for developing therapeutic interventions to diminish the clinical symptoms of acute infection , prevent fatal infection , and\/or clear the virus ., A diagrammatic version of the biological model is presented in Figure 1 ., EBV enters through the mucosal surface of the Waldeyer ring , which consists of the nasopharyngeal tonsil ( adenoids ) , the paired tubal tonsils , the paired palatine tonsils , and the lingual tonsil arranged in a circular orientation around the walls of the throat ., Here EBV infects and is amplified in the epithelium ., It then infects na\u00efve B cells in the underlying lymphoid tissue ., The components of the ring are all equally infected by the virus 13 ., EBV uses a series of distinct latent gene transcription programs , which mimic a normal B cell response to antigen , to drive the differentiation of the newly infected B cells ., During this stage , the infected cells are vulnerable to attack by cytotoxic T cells ( CTLs ) 14 ., Eventually , the latently infected B cells enter the 
peripheral circulation , the site of viral persistence , as resting memory cells that express no viral proteins 15 and so are invisible to the immune response ., The latently infected memory cells circulate between the periphery and the lymphoid tissue 13 ., When they return to the Waldeyer ring , they are occasionally triggered to terminally differentiate into plasma cells ., This is the signal for the virus to begin replication 16 , making the cells vulnerable to CTL attack again 14 ., Newly released virions may infect new B cells or be shed into saliva to infect new hosts , but they are also the target of neutralizing antibody ., Primary EBV infection in adults and adolescents is usually symptomatic and referred to as acute infectious mononucleosis ( AIM ) ., It is associated with an initial acute phase in which a large fraction ( up to 50% ) of circulating memory B cells may be latently infected 17 ., This induces the broad T lymphocyte immune response characteristic of acute EBV infection ., Curiously , primary infection early in life is usually asymptomatic ., In immunocompetent hosts , infection resolves over a period of months into a lifelong persistent phase in which \u223c1 in 10^5 B cells carry the virus 18 ., Exactly how persistent infection is sustained is unclear ., For example , once persistence is established , it is unknown if the pool of latently infected memory B cells is self-perpetuating or if a low level of new infection is necessary to maintain it ., Indeed , we do not know for sure that the pool of latently infected B cells in the peripheral memory compartment is essential for lifetime persistence ., It is even unclear whether the virus actually establishes a steady state during persistence or continues to decay , albeit at an ever slower rate 17 ., In the current study we describe the creation and testing of a computer simulation ( PathSim ) that recapitulates essential features of EBV infection ., The simulation has predictive power and has utility for 
experiment design and understanding EBV infection ., One practical limitation of available simulation and modeling approaches has been their inaccessibility to the working biologist ., This is often due to the use of relatively unfamiliar computer interfaces and output formats ., To address these issues , we have presented the simulation via a user-friendly visual interface on a standard computer monitor ., This allows the simulation to be launched and output to be accessed and analyzed in a visual way that is simple and easily comprehensible to the non-specialist ., The computer model ( PathSim ) is a representation of the biological model described in the Introduction ., A schematic version of both is shown in Figure 1 ., To simulate EBV infection , we created a virtual environment consisting of a grid that describes a biologically meaningful topography , in this case the Waldeyer ring ( five tonsils and adenoids ) and the peripheral circulation , which are the main sites of EBV infection and persistence ., The tonsils and adenoids were composed of solid hexagonal base units representing surface epithelium , lymphoid tissue , and a single germinal center\/follicle ( Figure 2A\u20132C; Video S1 ) ., Each hexagonal unit had one high endothelial venule ( HEV ) entry point from the peripheral blood and one exit point into the lymphatic system ( Figure 2A ) ., Discrete agents ( cells or viruses ) reside at the nodes ( red boxes ) of the 3-D grid ( white lines ) ., There they can interact with other agents and move to neighboring nodes ., Agents are assessed at regular , specified time intervals as they move and interact upon the grid ., Virtual cells were allowed to leave the Waldeyer ring via draining lymphatics and return via the peripheral blood and HEVs ( Figure 2A and 2B; Video S1 ) as in normal mucosal lymphoid tissue 19 ., A brief summary list of the agents employed in our simulation , and their properties and interactions , is given in Table 1 ., In this 
report we refer to actual B cells as , for example , \u201cB cells\u201d , \u201clatently infected B cells\u201d , or \u201clytically infected B cells\u201d , and their virtual representations as \u201cvirtual B cells\u201d , \u201cBLats\u201d , or \u201cBLyts\u201d ., Similarly , we refer to actual virus as virions and their virtual counterparts as virtual virus or Vir ., A full description of the simulation , including a complete list of agents , rules , the default parameters that produce the output described below , and a preliminary survey of the extended parameter space is presented in M . Shapiro , K . Duca , E . Delgado-Eckert , V . Hadinoto , A . Jarrah , et al . ( 2007 ) A virtual look at Epstein-Barr virus infection: simulation mechanism ( unpublished data ) ., Here , we will first present a description of how the virtual environment was visualized and then focus on a comparison of simulation output with the known biological behavior of the virus ., Simulation runs were accessed through an information-rich virtual environment ( IRVE ) ( Figures 2 and 3; Videos S1 and S2 ) , which was invoked through a Web interface ., This provided a visually familiar , straightforward context for immediate comprehension of the spatial behavior of the system 20 ., It also allowed specification of parameters , run management , and ready access to data output and analysis ., Figure 3 demonstrates how the time course of infection may be visualized ., Usually the simulation was initialized by a uniform distribution of Vir over the entire surface of the Waldeyer ring , thereby seeding infection uniformly ., However , in the simulation shown in Figure 3A , virtual EBV was uniformly deposited only on the lingual tonsil ., Figure 3B\u20133D shows the gradual spread of virtual infection ( intensity of red color indicating the level of free Vir ) to the adjacent tonsils ., It can be seen in this case that the infection spreads uniformly to all the tonsils at once , implying that 
it was spreading via BLats returning from the blood compartment and reactivating to become BLyts , rather than spreading within the ring ., Examples of infectious spread between and within the tonsils can also be seen in Video S2 ., In this paper we present a comprehensive model of EBV infection that effectively simulates the overall dynamics of acute and persistent infection ., The fact that this simulation can be tuned to produce the course of EBV infection suggests that it models the basic processes of this disease ., To achieve this , we have created a readily accessible , virtual environment that appears to capture most of the salient features of the lymphoid system necessary to model EBV infection ., Achieving infection dynamics that reflect an acute infection followed by recovery to long-term low-level persistent infection seems to require access of the virus to a blood compartment where it is shielded from immunosurveillance ., Because we cannot perform a comprehensive parameter search ( due to the very large parameter space involved ) , we cannot unequivocally state that the blood compartment is essential ., What is clear though , is that persistence is a very robust feature in the presence of a blood compartment , and that we could not achieve an infection process that even remotely resembles typical persistent EBV infection in its absence ., The areas in which the simulation most closely follows known biology are summarized in Table 2 and include the peak time of infection , 33\u201338 d , compared to the incubation time for AIM of 35\u201350 d 21 ., This predicts that patients become sick and enter the clinic at or shortly after peak infection in the peripheral blood , a prediction confirmed by our patient studies , where the numbers of infected B cells in the periphery always decline after the first visit 17 ., An important feature of a simulation is its predictive power ., Our analysis predicted that access to the peripheral memory compartment is 
essential for long-term persistence ., This is consistent with recent studies on patients with hyper-IgM syndrome 31 ., Although these individuals lack classical memory cells , they can be infected by EBV; however , they cannot sustain persistent infection and the virus becomes undetectable ., Unfortunately , those studies did not include a sufficiently detailed time course to see if the time to virus loss coincided with the simulation prediction of 1\u20132 mo ., Another area where the simulation demonstrated its predictive power was in the dynamics of viral replication ., In the simulation , it was unexpectedly observed that the level of Vir production plateaued long before that of BLats , predicting that the levels of virus shedding , unlike latently infected cells , will have leveled off by the time AIM patients arrive in the clinic ., This prediction , which contradicted the common wisdom that virus shedding should be high and decline rapidly in AIM patients , was subsequently confirmed experimentally ( V . Hadinoto , M . Shapiro , T . Greenough , J . Sullivan , K . Luzuriaga , D . 
Thorley-Lawson ( 2007 ) On the dynamics of acute EBV infection and the pathogenesis of infectious mononucleosis ( unpublished data ) ; see also 22 , 23 ) ., The simulation also quite accurately reproduces the relatively large variation in virus production over time , compared to the stability of B latent ., This difference is likely a consequence of stochasticity ( random variation ) having a relatively larger impact on virus production ., This is because the number of B cells replicating the virus at any given time is very small , both in reality and the simulation , compared to the number of infected B cells , but the number of virions they release when they do burst is very large ., This difference may reflect the biological requirements for persistence of the virus , since a transient loss in virus production due to stochasticity can readily be overcome through recruitment from the pool of B latents ., However , a transient loss of B latents would mean clearance of the virus ., Hence , close regulation of B latent but not virion levels is necessary to ensure persistent infection ., Although there is now a growing consensus that EBV infects normal epithelial cells in vivo 27\u201329 , the biological significance of this infection remains unclear ., The available evidence suggests that epithelial cell infection may not be required for long-term persistence 25 , 26 , and this is also seen in the simulation ., The alternate proposal is that epithelial infection might play an important role in amplifying the virus , during ingress and\/or egress , as an intermediary step between B cells and saliva ., This is based on the observation that the virus can replicate aggressively in primary epithelial cells in vivo 30 ., In the simulation , epithelial amplification had no significant effect on the ability of Vir to establish persistence ., This predicts that epithelial amplification does not play a critical role in entry of the virus , but leaves open the possibility 
that it may be important for increasing the infectious dose present in saliva for more efficient infection of new hosts ., The simulation is less accurate in the precise quantitation of the dynamics ., Virtual acute infection resolves significantly more slowly and persistence is at a higher level than in a real infection ., In addition , virtual persistent infection demonstrates clear evidence of oscillations in the levels of infected cells that have not been detected in a real infection ., The most likely explanation for these discrepancies is that we have not yet implemented T cell memory ., Thus , as the levels of virtual infected cells drop , the immune response weakens , allowing Vir to rebound while a new supply of virtual CTLs is generated ., Immunological memory would allow a more sustained T cell response that would produce a more rapid decline of infected cells , lower levels of sustained persistence , and tend to flatten out oscillatory behavior , thus making the simulation more quantitatively accurate ., This is one of the features that will be incorporated into the next version of our simulation ., It remains to be determined what additional features need to be implemented to sharpen the model and also whether and to what extent the level of representation we have chosen is necessary for faithful representation of EBV infection ., Our simulation of the Waldeyer ring and the peripheral circulation was constructed with the intent of modeling EBV infection ., Conversely , our analysis can be thought of as the use of EBV to validate the accuracy of our Waldeyer ring\/peripheral circulation simulation and to evaluate whether it can be applied to other pathogens ., Of particular interest is the mouse gamma herpesvirus MHV68 32 , 33 ., The applicability of MHV68 as a model for EBV is controversial ., Although it also persists in memory B cells 34 , it appears to lack the sophisticated and complicated latency states that EBV uses to access this compartment ., 
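The missing-memory argument above ( as infected-cell levels fall the response weakens , Vir rebounds , and levels oscillate ) can be made concrete with a deliberately simple toy model ., This is our illustration with made-up parameters , not the PathSim rule set: infected cells B grow at rate r and are killed at rate k*T , while the CTL level T expands in proportion to the current antigen load and decays; a "memory floor" stands in for a sustained memory T cell response .

```python
# Toy sketch (illustrative parameters, not the PathSim rule set).
# Without memory, T tracks B, so when B falls the response weakens and
# B rebounds, producing oscillations; a maintained memory floor keeps
# T elevated and B declines steadily instead.

def simulate(memory_floor=None, b0=800.0, t0=100.0, steps=2000, dt=0.1,
             r=0.5, k=0.005, a=0.01, d=0.1):
    """Return the trajectory of infected-cell levels B over time."""
    B, T, trace = b0, t0, []
    for _ in range(steps):
        trace.append(B)
        # Both updates use the values from the current step.
        B, T = B + dt * B * (r - k * T), T + dt * (a * B - d * T)
        if memory_floor is not None:
            T = max(T, memory_floor)  # immunological memory sustains T
        B = max(B, 0.0)
    return trace

def count_peaks(x):
    """Count local maxima, a crude measure of oscillation."""
    return sum(1 for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1])

no_memory = simulate()                      # response tracks B: oscillates
with_memory = simulate(memory_floor=120.0)  # sustained response: steady decline
```

In this sketch the memory floor plays the role of the more sustained T cell response discussed above: it removes the rebound and flattens the oscillatory behavior , mirroring the correction expected once T cell memory is implemented .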
However , one of the simplifications in our simulation is that the details of these different latency states and their transitions are all encompassed within a single concept , the BLat ., We have also assumed a time line whereby a newly infected BLat becomes activated and CTL sensitive , migrates to the follicle , and exits into the circulation , where it is no longer seen by our virtual CTLs ., In essence , we have generalized the process by which the virus proceeds from free virion to the site of persistence in such a way that it may be applicable to both EBV and MHV68 ., Thus , we might expect that the overall dynamics of infection may be similar even though detailed biology may vary ., As a first step to test if this concept had value , we performed an analysis based on studies with MHV68 where it was observed that the levels of infected B cells at persistence were unaffected by the absolute amount of input virus at the time of infection 35 ., When this parameter was varied in the simulation , we saw the same outcome ., This preliminary attempt raises the possibility that the mouse virus may be useful for examining quantitative aspects of EBV infection dynamics ., The last area we wished to investigate was whether we could identify biologically meaningful \u201cswitch\u201d points , i . e . 
, places in time and space where relatively small changes in critical parameters dramatically affect outcome , for example , switching from persistence to clearance to death ., We have observed one such switch point\u2014reactivation of BLats upon return to the Waldeyer ring\u2014that rapidly switches the infection process from persistence to death ., How this might relate to fatal EBV infection , X-linked lymphoproliferative disease , is uncertain ., However , viral production is a function both of how many B cells initiate reactivation and how efficiently they complete the process ., We believe that most such cells are killed by the immune response before they release virus 16 , so defects in the immune response could allow more cells to complete the viral replication process and give the same fatal outcome ., The ability to find such conditions for switch points could be very useful in the long term for identifying places in the infection process where the virus might be optimally vulnerable to drug intervention ., The easiest place to target EBV is during viral replication; however , it is currently unclear whether viral replication and infection are required for persistence ., It may be that simply turning off viral replication after persistence is established fails to eliminate the virus because the absence of new cells entering the pool through infection is counterbalanced by the failure of infected cells to disappear through reactivation of the virus ., If , however , a drug allowed abortive reactivation , then cells would die without producing infectious virus and new infection would be prevented ., This models the situation that would arise with a highly effective drug or viral mutant that blocked a critical stage in virion production ( e . g . 
, viral DNA synthesis or packaging ) , so that reactivation caused cell death without release of infectious virus ., A similar effect could be expected with a drug or vaccine that effectively blocked all new infection ., This is another case in which studies with the mouse virus , where non-replicative mutants can be produced and tested , may be informative as to whether and to what extent infection is required to sustain the pool of latently infected B cells and persistence ., The simulation could then be used to predict how effective an anti-viral that blocked replication , or a vaccine that induced neutralizing antibodies , would need to be at reducing new infection in order to cause EBV to be lost from the memory pool ( for a more detailed discussion of this issue see M . Shapiro , K . Duca , E . Delgado-Eckert , V . Hadinoto , A . Jarrah , et al . ( 2007 ) A virtual look at Epstein-Barr virus infection: simulation mechanism ( unpublished data ) ) ., Most modeling of virus infection to date has tended to focus on HIV and use differential equations 2\u20137 ., One such study involved EBV infection 36 , but to our knowledge none outside of our group has addressed the issue studied here of acute EBV infection and how it resolves into lifetime persistence ., In preliminary studies of our own , modeling EBV infection with differential equations that incorporate features common to the HIV models , with parameters physiologically reasonable for EBV did not produce credible dynamics of infection ( K . 
Duca , unpublished observations ) ., Although we do not exclude the possibility that such models may be useful for simulating EBV , we took an agent-based approach because it is intuitively more attractive to biologists ., Such models are increasingly being recognized as an effective alternative way to simulate biological processes 37\u201339 and have several advantages ., The main advantage is that the \u201cagent\u201d paradigm complies by definition with the discrete and finite character of biological structures and entities such as organs , cells , and pathogens ., This makes it more accurate , from the point of view of scientific modeling ., It is also less abstract since the simulated objects , processes , and interactions usually have a straightforward biological interpretation and the spatial structure of the anatomy can be modeled meticulously ., The stochasticity inherent to chemical and biological processes can be incorporated in a natural way ., Lastly , it is generally much easier to incorporate qualitative or semi-quantitative information into rule sets for discrete models than it is for such data to be converted to accurate rate equations ., The major drawback to agent-based models is that there is currently no mathematical theory that allows for rigorous analysis of their dynamics ., Currently , one simply runs such simulations many times and performs statistical analyses to assess their likely behaviors ., Developing such a mathematical theory remains an important goal in the field ., In summary , we have described a new computer simulation of EBV infection that captures many of the salient features of acute and persistent infection ., We believe that this approach , combined with mouse modeling ( MHV68 ) and EBV studies in patients and healthy carriers , will allow us to develop a more profound understanding of the mechanism of viral persistence and how such infections might be treated and ultimately cleared ., Details of the AIM patient 
populations tested have been published previously 17 ., Adolescents ( ages 17\u201324 ) presenting to the clinic at the University of Massachusetts at Amherst Student Health Service ( Amherst , Massachusetts , United States ) with clinical symptoms consistent with acute infectious mononucleosis were recruited for this study ., Following informed consent , blood and saliva samples were collected at presentation and periodically thereafter ., Diagnosis at the time of presentation to the clinic required a positive monospot test and the presence of atypical lymphocytes 21 ., Confirmation of primary Epstein\u2013Barr infection required the detection of IgM antibodies to the EBV viral capsid antigen in patient sera 40 ., These studies were approved by the Human Studies Committee at the University of Massachusetts Medical School ( Worcester , Massachusetts , United States ) and by the Tufts New England Medical Center and Tufts University Health Sciences Institutional Review Board ., All blood samples were diluted 1:1 in 1x PBS ., The technique for estimating the absolute number of latently infected B cells in the peripheral blood of patients and healthy carriers of the virus is a real-time PCR\u2013based variation of our previously published technique 17 , the details of which will be published elsewhere ( V ., Hadinoto , M . Shapiro , T . Greenough , J . Sullivan , K . Luzuriaga , et al . ( 2007 ) On the dynamics of acute EBV infection and the origins of infectious mononucleosis ( unpublished data ) ) ., To measure the absolute levels of virus shedding in saliva , individuals were asked to rinse and gargle for a few minutes with 5 ml of water and the resultant wash processed for EBV-specific DNA PCR using the same real-time\u2013based PCR technique ., We have performed extensive studies to standardize this procedure that will be detailed elsewhere ( V ., Hadinoto , M . Shapiro , T . Greenough , J . Sullivan , K . Luzuriaga , et al . 
( 2007 ) On the dynamics of acute EBV infection and the origins of infectious mononucleosis ( unpublished data ) ) ., In the simulation , B cells are either uninfected ( BNa\u00efve ) , latently infected ( BLat ) , or replicating virtual virus ( BLyt ) ; we do not distinguish blast and memory B cells ., In the biological model , newly infected B cells in the lymphoepithelium of the Waldeyer ring pass through different latency states , which are vulnerable to attack by cytotoxic T cells ( CTL latent ) ., Subsequently , they become memory B cells that enter the peripheral circulation and become invisible to the immune response by turning off viral protein expression ., In the simulation , all these latency states are captured in the form of a single entity , the BLat ., In addition , the blood circulation and lymphatic system are both represented as abstract entities that only allow for transport of BNa\u00efves and BLats around the body ., Virtual T cells are restricted to the Waldeyer ring ., This simplification is based on the assumption that , in the biological model , EBV-infected cells entering the peripheral circulation are normal and invisible to CTLs , because the virus is inactive , and therefore the peripheral circulation simply acts as an independent pool of and a conduit for B latent ., Operationally , therefore , BLats escape TLats in the simulation simply by entering the peripheral circulation ., Consequently , unlike the biological model , BLats are vulnerable to TLats whenever they reenter the lymph node ., Each agent ( e . g . 
, Vir or a BNa\u00efve ) has a defined life span , instructions for movement , and functions that depend on which other agents they encounter ( for example , if a Vir encounters a BNa\u00efve , it infects it with some defined probability ) ., The agents , rules , and parameters used are based on known biology wherever possible with simplifications ( see above ) where deemed appropriate ., A brief description and discussion of the agents and their rules is given in Table 1 ., A detailed listing is provided in M . Shapiro , K . Duca , E . Delgado-Eckert , V . Hadinoto , A . Jarrah , et al . ( 2007 ) A virtual look at Epstein-Barr virus infection: simulation mechanism ( unpublished data ) ., At each time point ( 6 min of real time ) , every agent is evaluated and appropriate actions are initiated ., The simulation is invoked through a Web interface ( IRVE; see movies linked to Figure 2 , and 20 ) that allows a straightforward visual , familiar , and scalable context for access to parameter specification , run management , data output , and analysis ., This has the additional advantage that it readily allows comprehension of the spatial behavior of the system ( e . g . , \u201chow does the infection spread ? 
\u201d ) ., The simulation may also be invoked from the command line ., Through the Web , users can process simulation data for output and analysis by a number of common applications such as Microsoft's Excel , University of Maryland's TimeSearcher 41 , and MATLAB ., We have developed display components that encapsulate multiple-view capabilities and improved multi-scale interface mappings ., The IRVE is realized in the international standard VRML97 language ., The simulation can be rerun and reanalyzed using a normal VCR-type control tool , which allows the operator , for example , to fast forward , pause , rewind , or drag to a different time point , and to play back runs or analyze simulation output dynamically ., In the IRVE , any spatial object ( including the global system ) can be annotated with absolute population numbers ( as a time plot and\/or numeric table ) or proportional population numbers ( as a bar graph ) for any or all of the agents ., Spatial objects themselves can be animated by heat-map color scales ., The intensity of the color associated with each agent is a measure of the absolute level of the agent; so , for example , as the level of free Vir increases , so will the level of intensity of the associated color ( in this case red ) both within the single units and in the entire organ ., In our simulation we manage multiple views of the dynamic population values through a higher order annotation called a PopView ( population view ) ., A PopView is an interactive annotation that provides three complementary representations of the agent population ., The representations can be switched through in series by simple selection ., The default view is a color-coded bar graph where users can get a quick , qualitative understanding of the agent populations in a certain location at that time step ., The second is a field-value pair text panel , which provides numeric readouts of population levels at that time step ., The third is a line graph where the 
population values for that region are plotted over time ., Because of the large number of time points and the large number of grid locations , the IRVE manages an integrated information environment across two orders of magnitude: \u201cMacro\u201d and \u201cMicro\u201d scales ., Through the standard VRML application the user has a number of options , including free-navigational modes such as fly , pan , turn , and examine ., This allows users to explore the system , zooming in and out of anatomical structures as desired ., In addition , the resulting visualization space is navigable by predefined viewpoints , which can be visited sequentially or randomly through menu activation ., This guarantees that all content is accessible and that users can recover from any disorientation ., The Visualizer manages Macro and Micro scale result visualizations using proximity-based filtering and scripting of scene logic ., As users approach a given anatomical structure , the micro-scale meshes and results are loaded and synchronized to the time on the user's VCR controller .","headings":"Introduction, Results, Discussion, Methods","abstract":"The possibility of using computer simulation and mathematical modeling to gain insight into biological and other complex systems is receiving increased attention ., However , it is as yet unclear to what extent these techniques will provide useful biological insights or even what the best approach is ., Epstein\u2013Barr virus ( EBV ) provides a good candidate to address these issues ., It persistently infects most humans and is associated with several important diseases ., In addition , a detailed biological model has been developed that provides an intricate understanding of EBV infection in the naturally infected human host and accounts for most of the virus's diverse and peculiar properties ., We have developed an agent-based computer model\/simulation ( PathSim , Pathogen Simulation ) of this biological model ., The simulation is performed 
on a virtual grid that represents the anatomy of the tonsils of the nasopharyngeal cavity ( Waldeyer ring ) and the peripheral circulation\u2014the sites of EBV infection and persistence ., The simulation is presented via a user friendly visual interface and reproduces quantitative and qualitative aspects of acute and persistent EBV infection ., The simulation also had predictive power in validation experiments involving certain aspects of viral infection dynamics ., Moreover , it allows us to identify switch points in the infection process that direct the disease course towards the end points of persistence , clearance , or death ., Lastly , we were able to identify parameter sets that reproduced aspects of EBV-associated diseases ., These investigations indicate that such simulations , combined with laboratory and clinical studies and animal models , will provide a powerful approach to investigating and controlling EBV infection , including the design of targeted anti-viral therapies .","summary":"The possibility of using computer simulation and mathematical modeling to gain insight into biological systems is receiving increased attention ., However , it is as yet unclear to what extent these techniques will provide useful biological insights or even what the best approach is ., Epstein\u2013Barr virus ( EBV ) provides a good candidate to address these issues ., It persistently infects most humans and is associated with several important diseases , including cancer ., We have developed an agent-based computer model\/simulation ( PathSim , Pathogen Simulation ) of EBV infection ., The simulation is performed on a virtual grid that represents the anatomy where EBV infects and persists ., The simulation is presented on a computer screen in a form that resembles a computer game ., This makes it readily accessible to investigators who are not well versed in computer technology ., The simulation allows us to identify switch points in the infection process that direct 
the disease course towards the end points of persistence , clearance , or death , and identify conditions that reproduce aspects of EBV-associated diseases ., Such simulations , combined with laboratory and clinical studies and animal models , provide a powerful approach to investigating and controlling EBV infection , including the design of targeted anti-viral therapies .","keywords":"agent based model, infectious diseases, epstein-barr virus, computer simulation, pathology, virology, immunology, dynamics of infection, computational biology","toc":null} +{"Unnamed: 0":2310,"id":"journal.pcbi.1006716","year":2019,"title":"State-aware detection of sensory stimuli in the cortex of the awake mouse","sections":"The large majority of what we know about sensory cortex has been learned by averaging the response of individual neurons or groups of neurons across repeated presentations of sensory stimuli ., However , multiple studies in the last three decades have clearly demonstrated that sensory-evoked activity in primary cortical areas varies across repeated presentations of a stimulus , particularly when the sensory stimulus is weak or near the threshold for sensory perception 1\u20133 , and have suggested that this is an equally important aspect of sensory coding as the average response 4\u20136 ., Variability is thought to arise from a complex network-level interaction between sensory-driven synaptic inputs and ongoing cortical activity , and single-trial response variability is partially predictable from the ongoing activity at the time of stimulation ., A large body of work has focused on characterizing this relationship between notions of cortical \u201cstate\u201d and sensory-evoked responses 7\u201313 , establishing some simple models of local cortical dynamics 14 ., Less is known about the impact of this relationship for downstream circuits ( though see 15 , 16 ) ., As an example , consider the detection of a sensory stimulus , which has been foundational in the 
human 17\u201322 and non-human primate psychophysical literature 23 , 24 and serves as one of the most widely utilized behavioral paradigms in rodent literature 25\u201327 ., In an attempt to link the underlying neural variability to behavior , the principal framework for describing sensory perception of stimuli near the physical limits of detectability is signal detection theory 28 ., A key prediction of signal detection theory is that , on single trials , detection of the stimulus is determined by whether the neural response to the stimulus crosses a threshold ., Particularly large responses would be detected but smaller responses would not , so variability in neural responses would lead to , and perhaps predict , variability in the behavioral response ., From the perspective of an ideal observer , if variability in the sensory-evoked response can be forecasted using knowledge of cortical state , the observer could potentially make better inferences , but in traditional ( state-blind ) observer analysis , the readout of the ideal observer is not tied to the ongoing cortical state ., In this work , using network activity recordings from the whisker sensitive region of the primary somatosensory cortex in the awake mouse , we develop a data-driven framework that predicts the trial-by-trial variability in sensory-evoked responses in cortex by classifying ongoing activity into discrete states that are associated with particular patterns of response ., The classifier takes as inputs features of network activity that are known to be predictive of single-trial response from previous studies 9 , 14 , as well as more complex spatial combinations of such features across cortical layers , to generate ongoing discrete classifications of cortical state ., We optimize the performance of this state classifier by systematically varying the selection of predictors ., Finally , embedding this classification of state in a state-aware ideal observer analysis of the detectability of 
the sensory-evoked responses , we analyze a downstream readout that changes its detection criterion as a function of the current state ., We find that state-aware observers outperform state-blind observers and , further , that they equalize the detection accuracy across states ., Downstream networks in the brain could use such an adaptive strategy to support robust sensory detection despite ongoing fluctuations in sensory responsiveness during changes in brain state ., The foundation upon which the state-aware observer is constructed is a prediction of the sensory-evoked cortical response ., This prediction is based on classifying elements of the ongoing , pre-stimulus activity into discrete \u201cstates , \u201d and the goal is to find the features of ongoing activity and the classification rules that generate the best prediction of sensory-evoked responses ., Treating this as a discrete problem was a methodological choice motivated by the rationale that such an approach could find rules that are not linear in the features of ongoing activity and could lend more flexibility in the rules relating features of ongoing activity to variability in the response ., The features of ongoing activity include the power spectrum of pre-stimulus LFP and the instantaneous \u201cLFP activation\u201d ( Fig 2A ) ., To describe sensory-evoked responses , we define a parameterization of the LFP response using principal components analysis ( Fig 2B ) ., The state classifier is a function that takes as inputs features of pre-stimulus LFP and produces an estimate of the principal component ( PC ) weights and thus of the single-trial evoked response ( Fig 2C ) ., In the following sections , we describe this process in detail ., Next , within the general class of pre-stimulus features considered\u2013power ratio and LFP activation\u2013we optimized several choices: the range of frequencies used to compute the power ratio; the cortical depth from which the ongoing LFP signal is taken; and 
possible combinations of LFP signals across the cortical depth ., Changes in pre-stimulus features resulted in changes in the boundaries between states , and ultimately in changes in prediction performance ., First , we varied the bounds of the low-frequency range ( \u201cL range\u201d , Fig 3A ) ., The increase in fVE was on average 0 . 09 \u00b1 0 . 05 ( N = 11 recordings ) ( Fig 3B; classifier boundaries shown in S2 Fig ) , with a significant increase in 10 of 11 recordings ( Fig 3C , asterisks ) ., We found that the optimal L range could extend to frequencies up to 40 Hz ( Fig 3C ) , with the median bounds of the optimal L being from 1 to 27 Hz ., Using for each recording the power ratio based on the optimized range of low-frequency power ( Fig 3 ) , we next determined where along the cortical depth the most predictive activity was and whether taking spatial combinations of LFP activity could improve the prediction ., Note that in this analysis , the channel for the stimulus-evoked response was held fixed ( L4 ) and thus the parameterization of the evoked response using principal components did not change , but the pre-stimulus channel was varied ., For each recording , we thus built a series of classifiers , using single- and multi-channel LFP activity from across the array ( Fig 4A , S3 Fig ) , which again were optimized for prediction of the single-trial L4 sensory-evoked response ., Classifiers built from a single channel of LFP performed best when the channel was near L4 ( Fig 4B , single example; Fig 4C , average profile ) ., Because the LFP represents a volume-conducted signal , we also examined the current source density ( CSD ) 34\u201336 , estimated on single trials using the kernel method 37 ., There was no improvement in fVE using CSD to build classifiers ( fVE difference , CSD minus LFP: -0 . 07; range: ( -0 . 12 , -0 . 
01 ) ) ., For each recording , we defined an optimal classifier channel based on the spatial profile of fVE for single-channel predictors ( Fig 4B; S3 Fig ) ., In the \u201cpair\u201d combination , we paired the optimal classifier channel with each of the other possible 31 channels ( Fig 4B; green dashed line ) ., We optimized the classifier in the 3-dimensional space defined by power ratio ( on the optimal channel only ) and LFP activation from each of the two channels and compared the fVE to that obtained using the optimal classifier channel only ( Fig 4D ) ., We found no improvement in the prediction using the pair combination compared to using the optimal channel alone ( Fig 4D , mean fVE difference: 0 . 00 \u00b1 0 . 01; 0\/11 recordings with significant change , pair vs . single ) or using more complex combinations of channels ( S3 Fig ) ., To summarize , we optimized classifiers based on pre-stimulus features to predict single-trial sensory-evoked LFP responses in S1 cortex of awake mice ., We found that the classifier performance was improved by changing the definition of the power ratio ( L\/W ) such that the low-frequency range ( L ) extended from 1 Hz to 27 Hz , depending on the recording , which differed from the range typically used from anesthetized recordings in S1 ( 1\u20135 Hz ) 8 , 9 ., We also found that the most predictive pre-stimulus LFP activation was near layer 4 ., After establishing a clear enhanced prediction of the single-trial stimulus-evoked response within the LFP by considering the pre-stimulus activity , we investigated the impact of this relationship on the detection of sensory stimuli from cortical LFP activity using a state-aware ideal observer analysis ., We first considered a simple matched-filter detection scheme 38 in which the ideal observer operated by comparing single-trial evoked responses to the typical shape of the sensory evoked response ( Methods , Detection ) ., The matched filter was defined by the trial-average 
evoked LFP response , and this filtered the raw LFP ( Fig 5A ) to generate the LFP score ( Fig 5B ) ., For the state-blind observer , a detected event was defined as a peak in the LFP score that exceeded a fixed threshold ( Fig 5B , stars ) ., The LFP score distributions from time periods occurring during known stimulus-evoked responses and from the full spontaneous trace were clearly distinct but overlapping ( Fig 5C ) , and detected events ( Fig 5B , stars ) included both \u201chits\u201d ( detection of a true sensory input ) and \u201cfalse alarms\u201d ( detection of a spontaneous fluctuation as a sensory input ) ., Next , using the state classifier constructed in the first half of the paper , we analyzed the performance of a state-aware observer on a reserved set of trials , separate from those used for fitting and optimizing the state classifiers ( Methods ) ., Specifically , using the optimized state classifier ( Figs 3 and 4 ) , we continuously classified \u201cstate\u201d at each time point in the recording ( Fig 5D ) ., The state-aware observer detects events exceeding a threshold , which changed as a function of the current state ( Fig 5E ) ., Instead of a single LFP score distribution , we now have one for each predicted state ( Fig 5F ) , leading to many possible strategies for setting the thresholds for detecting events across states ., In general , the overall hit rate and false alarm rate will depend on hits and false alarms in each individual state ( Fig 6A and 6B for single example; S4 and S5 Figs show all recordings ) , as well as the overall fraction of time spent in each state ( Fig 6A , inset ) ., We walk through the analysis for a single example , selected as one of the clearest examples of how state-aware detection worked ., While this example recording shows a relatively large improvement , it is not the recording with the largest improvement , and , moreover , the corresponding plots for all recordings are shown in S4 and S5 Figs ., To 
compare between traditional ( state-blind ) and state-aware observers , we compared hit rates at a single false alarm rate , determined for each recording as the false alarm rate at which 80%-90% detection was achieved by a state-blind ideal observer ., To select thresholds for the state-aware observer , we systematically varied the thresholds in state 1 and state 3 , while adjusting the state-2 threshold such that average false alarm rate was held constant ., For each combination of thresholds , we computed the overall hit rate ( Fig 6C ) ., For the example recording highlighted in Fig 6 , the state-aware observer ( hit rate: 96% ) outperformed the traditional one ( hit rate: 90% ) ., This worked because the threshold in state 3 could be increased with very little decrease in the hit rate ( Fig 6B ) , and this substantially decreased the false alarm rate in state 3 ( Fig 6A ) ., Because the overall false alarm rate is fixed , this meant more false alarms could be tolerated in states 1 and 2 ., Consequently , thresholds in states 1 and 2 could be decreased , which increased their hit rates ., Across recordings , we found that the state-aware observer outperformed the state-blind observer in 9 of 11 recordings ( Fig 6D; S4 and S5 Figs ) ., Hit rates slightly but significantly increased from a baseline of 81% for the state-blind observer to 84% for state-aware detection , or an average change of +3 percentage points ( SE: 3%; signed-rank test , p < 0 . 01 , N = 11 ) ., The overall change in hit rate reflects both the fraction of time spent in each state ( some fixed feature of an individual mouse ) and the changes in state-dependent hit rates ., To separate these factors , we analyzed the hit rate of the state-blind and state-aware observers by computing , for each observer , the hit rate conditioned on each pre-stimulus state ( Fig 6E ) ., For this recording , the state-blind observer had a very low hit rate in state 1 and high hit rates in states 2 and 3 ., 
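The threshold-selection procedure described above ( sweep the state-1 and state-3 thresholds , solving for the state-2 threshold that holds the overall false alarm rate fixed , then keep the combination with the best overall hit rate ) can be sketched as follows ., All numbers here are illustrative stand-ins , not fitted to the recordings , and the three \u201cstates\u201d are a toy surrogate for the classifier output .

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-state LFP-score distributions (illustrative, not the
# recorded data): evoked scores are smallest in state 1 and largest in
# state 3, spontaneous scores look alike across states; "frac" is the
# fraction of time spent in each state.
states = {
    1: dict(noise=rng.normal(0, 1, 5000), signal=rng.normal(1.5, 1, 1000), frac=0.3),
    2: dict(noise=rng.normal(0, 1, 5000), signal=rng.normal(3.0, 1, 1000), frac=0.5),
    3: dict(noise=rng.normal(0, 1, 5000), signal=rng.normal(4.5, 1, 1000), frac=0.2),
}

grid = np.linspace(0.0, 6.0, 121)  # candidate thresholds
# Per-state hit and false-alarm rates at every candidate threshold.
hit = {k: np.array([(s["signal"] > t).mean() for t in grid]) for k, s in states.items()}
fa = {k: np.array([(s["noise"] > t).mean() for t in grid]) for k, s in states.items()}

def overall(i1, i2, i3):
    """Overall (hit, false-alarm) rates for per-state threshold indices."""
    idx = {1: i1, 2: i2, 3: i3}
    h = sum(states[k]["frac"] * hit[k][idx[k]] for k in states)
    f = sum(states[k]["frac"] * fa[k][idx[k]] for k in states)
    return h, f

# State-blind observer: one threshold for all states, chosen to land on
# the target false alarm rate.
target_fa = 0.02
i_blind = min(range(len(grid)), key=lambda i: abs(overall(i, i, i)[1] - target_fa))
blind_hit = overall(i_blind, i_blind, i_blind)[0]

# State-aware observer: sweep the state-1 and state-3 thresholds and,
# for each pair, solve for the state-2 threshold that restores the
# target false alarm rate; keep the combination with the best hit rate.
aware_hit = 0.0
for i1 in range(len(grid)):
    for i3 in range(len(grid)):
        resid = target_fa - states[1]["frac"] * fa[1][i1] - states[3]["frac"] * fa[3][i3]
        i2 = int(np.argmin(np.abs(states[2]["frac"] * fa[2] - resid)))
        h, f = overall(i1, i2, i3)
        if abs(f - target_fa) < 0.005:
            aware_hit = max(aware_hit, h)
```

As in the recordings , the gain comes from raising the threshold in the high-response state ( little hit-rate cost , large false-alarm saving ) and spending the recovered false-alarm budget on a lower threshold in the low-response state .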
In comparison , hit rates were similar across the three states for the state-aware observer ( Fig 6D ) ., Thus , in state 1 ( smallest responses , blue ) , we observed a large increase in the hit rate depending on whether the observer used state-blind or state-aware thresholds ., Averaged across all recordings , the state-1 hit rates increased from 60% to 76% , which is a relative increase of 26% ( SE 11% ) ., Because this is weighted by the fraction of time spent in state 1 , the overall impact on the hit rate is smaller ., Hit rates increased slightly on average in state 2 ( + 2% , SE 4% ) and decreased slightly in state 3 ( -7% , SE 9% ) ., The net impact of this is that across the majority of recordings , the cross-state range of hit rates for the state-blind ideal observer was much larger than that for the state-aware ideal observer ( Fig 6D and 6F; 19% , average state-blind minus state-aware hit rate range in percentage points ( SE: 5% ) ; p < 0 . 01 , signed-rank test , N = 11 ) ., Thus , while the overall differences between state-aware and state-blind hit rates are modest , the state-aware observer has more consistent performance across all pre-stimulus states than a state-blind observer ., Due to the rapid development of tools that enable increasingly precise electrophysiology in the awake animal , there is a growing appreciation that the \u201cawake brain state\u201d encompasses a wide range of different states of ongoing cortical activity , and that this has a large potential impact on sensory representations during behavior 39\u201344 ., Here , we constructed a framework for the prediction of highly variable , single-trial sensory-evoked responses in the awake mouse based on a data-driven classification of state from ongoing cortical activity ., In related work , past studies have used some combination of LFP\/MUA features to predict future evoked MUA response 9 , 14 ., We used a similar approach for state classification and response prediction in 
cortical recordings in the awake animal , extending this to allow complex combinations of ongoing activity in space and different features of the pre-stimulus power spectrum as predictors ., We found that simple features of pre-stimulus activity sufficed to enable state classification that yielded single-trial prediction of sensory evoked responses ., These predictive features were analogous to the synchronization and phase variables found in previous studies 8 , 9 , 14 , though we found a revised definition of synchronization was more predictive ., In particular , we found that the very low-frequency band of the LFP power spectrum ( 1\u20135 Hz ) was less predictive of single-trial evoked responses in our recordings than a wider band ( e . g . 1 to 27 Hz ) ., This is consistent with findings from a recent study 40 that surveyed the power spectrum of LFP across different behavioral states in the awake animal and demonstrated differences in the power spectrum between quiet and active wakefulness up to 20 Hz ., While we have focused on the problem of state classification and prediction from the perspective of an internal observer utilizing neural activity alone , future work could investigate whether the state classifier is also tracking external markers of changes in state , such as those indicated by changes in pupil diameter 42 , 45 , whisking 40 , or other behavioral markers in the awake animal ., We fit classifiers for each individual recording rather than pooling responses across animals and recording sessions ., The structure of the classification rules was similar across recordings , showing that the relationship between pre-stimulus features and evoked responses is robust ., This suggests that a single classifier could be fit , once inputs and outputs are normalized to standard values ., This normalization could be accomplished by determining the typical magnitude of LFP sensory responses and rescaling accordingly ., Moreover , the ordered structure of the 
classification rules suggests that a continuous model of state , rather than a discrete model , would have worked as well ., To implement as a continuous model , one would fit a regression of the evoked response coefficients using as independent variables LFP activation and power ratio ., Judging by the classification boundaries shown in S1 and S2 Figs , keeping only linear terms in activation and power ratio would give a good prediction ., In its current formulation , this framework utilizes only the features of ongoing cortical activity that are reflected in the LFP in order to classify state and predict the evoked LFP response ., Both as features underlying the state classifier and as the sensory-evoked response being predicted , LFP must be interpreted carefully , as the details of how underlying sinks and sources combine depend on the local anatomy and population spiking responses 46 ., In barrel cortex , the early whisker-evoked LFP response ( 0 to 25 ms ) is characterized by a current sink in L4 initially driven by thalamic inputs to cortex , but also reflecting cortical sources of activity: the evoked LFP is highly correlated with the layer-4 multi-unit activity response 47 , 48 ., We restricted our predictive framework to the high degree of variability in this initial response ., It remains to determine how LFP response variability is reflected in the sensory-evoked single-unit cortical spiking activity patterns ., Further , regarding LFP as a predictor used by the state classifier , LFP is a mesoscopic marker of cortical state that neglects finer details of cortical state organization ., In addition to establishing whether better predictions are made from more detailed representations of cortical state , it is an interesting question how microcircuit spiking dynamics are related to the mesoscopic markers of cortical state , or how much can be inferred about population spiking dynamics from the LFP ., Finally , thalamic and cortical activity are tightly 
linked , and the results presented here may also reflect variations in ongoing thalamic activity ., Disentangling thalamic and cortical sources of variability in the evoked response will require paired recordings and perturbative experimental approaches designed to address issues of causality ., In the second part of the paper , we used ideal observer analysis to show that state-aware observers , with oracle knowledge of the spontaneous , ongoing state fluctuations informative of the single-trial sensory-evoked response , can out-perform a state-blind ideal observer ., Our analysis relied on classification of the markers of ongoing state ., This is not to suggest that this specific estimation takes place in the brain , but instead could potentially be achieved dynamically by a downstream network through the biophysical properties of the circuitry ., Theoretically , the gain and threshold of such a readout neuron or network could be dynamically modified on the basis of the ongoing activity as a biophysical manifestation of the adaptive state-aware ideal observer , though the identification of specific mechanisms was beyond the scope of the current study ., We found that the state-aware observer had higher accuracy than the traditional , state-blind observer , but the absolute gain in hit rate ( at fixed false alarm rate ) averaged across all states was modest ., When pre-stimulus states were analyzed separately , however , we found that accuracy in the low-response state was substantially higher for the state-aware observer , where there was a relative increase of 25% in the hit rate for this state ., Because small sensory responses are predictable from the ongoing activity , transiently lowering the threshold for detection resulted in more \u201chits\u201d in the low-response state , while false alarms in high-response states could be avoided by raising the threshold when the state changed ., However , the cortical activity was classified to be in this particular 
state approximately 20% of the time , and thus had a relatively modest impact on the overall performance , averaged across all states ., What is not currently known is the overall statistics associated with the state transitions ( i . e . distribution of time spent in each state , rate of transitions , etc . ) during engagement within perceptual tasks , but in any case , what we observe here is a normalization of detectability across brain states ., For near-threshold sensory perception , the signal detection theory framework asserts that single-trial responses are predictive of perceptual report 28 ., While there are many previous studies that seem to support this 49\u201352 , several animal studies have called this into question , showing that primary sensory neural activity does not necessarily co-vary with perceptual report on simple detection tasks 23 , 25 , 27 ., It is possible that the conflicting findings in the literature are due to behavioral state effects , and that more consistent reports would emerge if the analysis of the neural activity incorporated elements of the state-classification approach developed here ., Our results show how single-trial response size can be decoupled from perception , if a downstream network predicts and then accounts for the variability in sensory responses ., Moreover , our analysis showed that some states of pre-stimulus activity should be associated with higher or lower performance on a near-threshold detection task , which has been observed in near-threshold detection studies in the rodent 26 and monkey 24 ., It should be noted that there is controversy regarding the relevance of primary sensory cortex in simple behavioral tasks 53 , 54 , but this is likely related to the task difficulty 55 , where a large body of literature has resolutely shown that processing in primary cortical areas is critical for difficult tasks that increase cognitive load , and we suspect that near threshold stimuli such as those shown here fall 
in that category ., Many studies have demonstrated a link between pre-stimulus cortical activity and perceptual report on near-threshold detection tasks in humans 17 , 18 , 56\u201359 ., Currently , it is not entirely clear how far the parallel in cortical dynamics between the mouse and human can be taken ., One challenge is that connecting invasive recordings in the mouse to non-invasive recordings in human studies is non-trivial ., Here , at the level of LFP , we observed similarities between species in the interaction between ongoing and evoked activity: the largest evoked responses tended to be preceded by positive deflection in the LFP , and the smallest evoked responses were preceded by negative deflection in the LFP ., This relationship , the negative interaction phenomenon , points to a non-additive interaction between ongoing and evoked activity and is also observed in both invasive and non-invasive recordings in humans 33 , 56 , 60 , 61 ., Establishing parallels between cortical dynamics on a well-defined task , such as sensory detection , between humans and animal models is an important direction for future studies ., In summary , we have developed a framework for the prediction of variable single-trial sensory-evoked responses and shown that this prediction , based on cortical state classification , can be used to enhance the readout of sensory inputs ., Utilizing state-dependent decoders for brain-machine interfaces has been shown to greatly improve the readout of motor commands from cortical activity 62 , 63 , at the very end-stage of cortical processing ., Others have raised the possibility of using state knowledge to \u2018cancel out\u2019 variability in sensory brain-machine interfaces , with the idea that this could generate a more reliable and well-controlled cortical response 64 , 65 , which would in theory transmit information more reliably ., This is intriguing , though our analysis suggests a slightly different interpretation: if downstream 
circuits also have some knowledge of state , canceling out encoding variability may not be the appropriate goal ., Instead , the challenge is to target the response regime for each state ., This could be particularly relevant if structures controlling state , including thalamus 66 , are upstream of the cortical area in which sensory BMI stimulation occurs ., The simple extension of signal detection theory we explored suggests a solution to the problem that the brain faces at each stage of processing: how to adaptively read out a signal from a dynamical system constantly generating its own internal activity ., All procedures were approved by the Institutional Animal Care and Use Committee at the Georgia Institute of Technology ( Protocol Number A16104 ) and were in agreement with guidelines established by the National Institutes of Health ., Six nine- to twenty-six-week-old male C57BL\/6J mice were used in this study ., Mice were maintained under 1\u20132% isoflurane anesthesia while being implanted with a custom-made head-holder and a recording chamber ., The location of the barrel column targeted for recording was functionally identified through intrinsic signal optical imaging ( ISOI ) under 0 . 
5\u20131% isoflurane anesthesia ., Recordings were targeted to B1 , B2 , C1 , C2 , and D2 barrel columns ., Mice were habituated to head fixation , paw restraint and whisker stimulation for 3\u20137 days before proceeding to electrophysiological recordings ., Following termination of the recordings , animals were anesthetized ( isoflurane , 4\u20135% , for induction , followed by a euthanasia cocktail injection ) and perfused ., Local field potential was recorded using silicon probes ( A1x32-5mm-25-177 , NeuroNexus , USA ) with 32 recording sites along a single shank covering 775 \u03bcm in depth ., The probe was coated with DiI ( 1 , 1\u2019-dioctadecyl-3 , 3 , 3\u20323\u2019-tetramethylindocarbocyanine perchlorate , Invitrogen , USA ) for post hoc identification of the recording site ., The probe contacts were coated with a PEDOT polymer 67 to increase signal-to-noise ratio ., Contact impedance measured between 0 . 3 MOhm and 0 . 7 MOhm ., The probe was inserted with a 35\u00b0 angle relative to the vertical , until a depth of about 1000 \u03bcm ., Continuous signals were acquired using a Cerebus acquisition system ( Blackrock Microsystems , USA ) ., Signals were amplified , filtered between 0 . 3 Hz and 7 . 
5 kHz and digitized at 30 kHz ., Mechanical stimulation was delivered to a single contralateral whisker corresponding to the barrel column identified through ISOI using a galvo motor ( Cambridge Technologies , USA ) ., The galvo motor was controlled with millisecond precision using custom software written in Matlab ( Mathworks , USA ) ., The whisker stimulus followed a sawtooth waveform ( 16 ms duration ) of various velocities ( 1000 deg\/s , 500 deg\/s , 250 deg\/s , 100 deg\/s ) delivered in the caudo-rostral direction ., To generate stimuli of different velocities , the amplitude of the stimulus was changed while its duration remained fixed ., Whisker stimuli of different velocities were randomly presented in blocks of 21 stimuli , with a pseudo-random inter-stimulus interval of 2 to 3 seconds and an inter-block interval of a minimum of 20 seconds ., The total number of whisker stimuli across all velocities presented during a recording session ranged from 196 to 616 stimuli ., For analysis , the LFP was down-sampled to 2 kHz ., The LFP signal entering the processing pipeline is raw , with no filtering beyond the anti-aliasing filters used at acquisition , enabling future use of these methods for real-time control ., Prior to the analysis , signal quality on each channel was verified ., We analyzed the power spectrum of LFP recorded on each channel for line noise at 60 Hz ., In some cases , line noise could be mitigated by fitting the phase and amplitude of a 60-Hz sinusoid , as well as harmonics up to 300 Hz , over a 500-ms period in the pre-stimulus epoch , then extrapolating the sinusoid over the stimulus window and subtracting ., A small number of channels displayed slow , irregular drift ( 2 or 3 of 32 channels ) and these were discarded ., All other channels were used ., Current source density ( CSD ) analysis was used for two different purposes: first , to functionally determine layers based on the average stimulus-evoked response , and second , to analyze 
the pre-stimulus activity ( in single trials ) to localize sinks and sources generating the predictive signal ., We describe the general method used here ., Prior to computing the current source density ( CSD ) , each channel was scaled by its standard deviation to normalize impedance variation between electrodes ., We then implemented the kernel CSD method 37 to compute CSD on single trials ., This method was chosen because it accommodates irregular spacings between electrodes , which occurs when recordings on a particular contact do not meet quality standards outlined above ., To determine the best values for the kernel method parameters ( regularization parameter , \u03bb; source extent in x-y plane , r; and source extent in z-plane , R ) we followed the suggestion of Potworowski ( 2012 ) and selected the parameter choices that minimize error in the reconstruction of LFP from the CSD ., These parameters were similar across recordings , so for all recordings we used: \u03bb = 0 . 0316; r = 200\u03bcm; R = 37 . 
5\u03bcm ., The trial-averaged evoked response was computed by subtracting the pre-stimulus baseline ( average over 200 ms prior to stimulus delivery ) from each trial and then averaging across trials ., The CSD of this response profile was computed as described above ., The center of layer 4 was determined by finding the largest peak of the trial-averaged evoked LFP response as well as the location of the first , large sink in the trial-averaged sensory-evoked CSD response ., We assume a width of 205 \u03bcm for layer 4 , based on published values for mice 32 ., The matched filter ideal observer analysis 38 is implemented as follows ., The score $s(t)$ is constructed by taking the dot product of the evoked response $y_t$ with a filter matched to the average evoked response: $s(t) = y_t \cdot \xi_0$ ., This is equivalent to computing the sum $s(t) = \sum_{\tau=1}^{N_\xi} \left( x(t+\tau) - x(t) \right) \xi_0(\tau)$ ., In the standard encoding model , if $\eta$ is zero-mean white noise , this gives a signal distribution $P(s) \sim \mathcal{N}( \|\xi_0\|^2 , \sigma^2 )$ , where $\sigma^2 = \|\xi_0\|^2 \sigma_\eta^2$ , and a noise distribution with mean 0 ., In practice , we do not parameterize the distribution , because $\eta$ is not uncorrelated white noise , and instead work from the score distribution directly ., For the state-aware decoder , we use the prediction $\hat{\alpha}_{t,k}$ of the evoked responses , $y_t = \xi_0 + \sum_{k=1}^{N_C} \hat{\alpha}_{t,k} \xi_k + \eta$ ., This changes the score to $s(t) = \|\xi_0\|^2 + \sum_k \hat{\alpha}_{t,k} \, \xi_k \cdot \xi_0 + \eta \cdot \xi_0$ ., Typically , one of the first two PCs ( $\xi_1$ or $\xi_2$ ) has a very similar shape to $\xi_0$ , while the other one has both positive and negative components ( Fig 2 , S1 and S2 Figs ) ., For the state-aware threshold , we use state predictions for the component that is more similar to $\xi_0$ , as indicated in S1 and S2 Figs ., An event is detected at time t for threshold $\theta$ when $s(t) > \theta$ is a local maximum that is 
separated from the nearest peak by at least 15 ms and has a minimum prominence ( i . e . drop in s before encountering another peak that was higher than the original peak ) of |\u03be","headings":"Introduction, Results, Discussion, Methods","abstract":"Cortical responses to sensory inputs vary across repeated presentations of identical stimuli , but how this trial-to-trial variability impacts detection of sensory inputs is not fully understood ., Using multi-channel local field potential ( LFP ) recordings in primary somatosensory cortex ( S1 ) of the awake mouse , we optimized a data-driven cortical state classifier to predict single-trial sensory-evoked responses , based on features of the spontaneous , ongoing LFP recorded across cortical layers ., Our findings show that , by utilizing an ongoing prediction of the sensory response generated by this state classifier , an ideal observer improves overall detection accuracy and generates robust detection of sensory inputs across various states of ongoing cortical activity in the awake brain , which could have implications for variability in the performance of detection tasks across brain states .","summary":"Establishing the link between neural activity and behavior is a central goal of neuroscience ., One context in which to examine this link is in a sensory detection task , in which an animal is trained to report the presence of a barely perceptible sensory stimulus ., In such tasks , both sensory responses in the brain and behavioral responses are highly variable ., A simple hypothesis , originating in signal detection theory , is that perceived inputs generate neural activity that crosses some threshold for detection ., According to this hypothesis , sensory response variability would predict behavioral variability , but previous studies have not borne out this prediction ., Further complicating the picture , sensory response variability is partially dependent on the ongoing state of cortical activity , and we 
wondered whether this could resolve the mismatch between response variability and behavioral variability ., Here , we use a computational approach to study an adaptive observer that utilizes an ongoing prediction of sensory responsiveness to detect sensory inputs ., This observer has higher overall accuracy than the standard ideal observer ., Moreover , because of the adaptation , the observer breaks the direct link between neural and behavioral variability , which could resolve discrepancies arising in past studies ., We suggest new experiments to test our theory .","keywords":"single channel recording, medicine and health sciences, engineering and technology, statistics, somatosensory cortex, signal processing, matched filters, brain, social sciences, neuroscience, signal filtering, multivariate analysis, animal anatomy, mathematics, membrane electrophysiology, bioassays and physiological analysis, zoology, research and analysis methods, mathematical and statistical techniques, principal component analysis, signal detection theory, electrophysiological techniques, animal physiology, psychology, anatomy, vibrissae, biology and life sciences, sensory perception, physical sciences, statistical methods","toc":null} +{"Unnamed: 0":1998,"id":"journal.pcbi.1005524","year":2017,"title":"A mathematical model coupling polarity signaling to cell adhesion explains diverse cell migration patterns","sections":"Rho GTPases are central regulators that control cell polarization and migration 15 , 16 , embedded in complex signaling networks of interacting components 17 ., Two members of this family of proteins , Rac1 and RhoA , have been identified as key players , forming a central hub that orchestrates the polarity and motility response of cells to their environment 18 , 19 ., Rac1 ( henceforth \u201cRac\u201d ) works in synergy with PI3K to promote lamellipodial protrusion in a cell 16 , whereas RhoA ( henceforth \u201cRho\u201d ) activates Rho Kinase ( ROCK ) , which activates 
myosin contraction 20 ., Mutual antagonism between Rac and Rho has been observed in many cell types 19 , 21 , 22 , and accounts for the ability of cells to undergo overall spreading , contraction , or polarization ( with Rac and Rho segregated to front versus rear of a cell ) ., The extracellular matrix ( ECM ) is a jungle of fibrous and adhesive material that provides a scaffold in which cells migrate , mediating adhesion and traction forces ., ECM also interacts with cell-surface integrin receptors , to trigger intracellular signaling cascades ., Important branches of these pathways are transduced into activating or inhibiting signals to Rho GTPases ., On one hand , ECM imparts signals to regulate cell shape and cell motility ., On the other hand , the deformation of a cell affects its contact area with ECM , and hence the signals it receives ., The concerted effect of this chemical symphony leads to complex cell behavior that can be difficult to untangle using intuition or verbal arguments alone ., This motivates our study , in which mathematical modeling of GTPases and ECM signaling , combined with experimental observations is used to gain a better understanding of cell behavior , in the context of experimental data on melanoma cells ., There remains the question of how to understand the interplay between genes ( cell type ) , environment ( ECM ) and signaling ( Rac , Rho , and effectors ) ., We and others 19 , 21\u201327 have previously argued that some aspects of cell behavior ( e . g . , spreading , contraction , and polarization or amoeboid versus mesenchymal phenotype ) can be understood from the standpoint of Rac-Rho mutual antagonism , with fine-tuning by other signaling layers 28 ., Here we extend this idea to couple Rac-Rho to ECM signaling , in deciphering the behavior of melanoma cells in vitro ., There are several overarching questions that this study aims to address ., In experiments of Park et al . 
11 , melanoma cells were cultured on micro-fabricated surfaces comprised of post-density arrays coated with fibronectin ( FN ) , representing an artificial extracellular matrix ., The anisotropic rows of posts provide inhomogeneous topographic cues along which cells orient ., In 11 , cell behavior was classified using the well-established fact that PI3K activity is locally amplified at the lamellipodial protrusions of migrating cells 36 ., PI3K \u201chot spots\u201d were seen to follow three distinct patterns about the cell perimeters: random ( RD ) , oscillatory ( OS ) , and persistent ( PS ) ., These classifications were then associated with three distinct cell phenotypes: persistently polarized ( along the post-density axis ) , oscillatory with two lamellipodia at opposite cell ends oscillating out of phase ( protrusion in one lamellipod coincides with retraction of the other , again oriented along the post-density axis ) , and random dynamics , whereby cells continually extend and retract protrusions in random directions ., The fraction of cells in each category was found to depend on experimental conditions ., Here , we focus on investigating how experimental manipulations influence the fraction of cells in different phenotypes ., For simplicity , we focus on the polarized and oscillatory phenotypes , which can be most clearly characterized mathematically ., The following experimental observations are used to test and compare our distinct models of cell signaling dynamics ., For a graphical summary of cell phenotypes and experimental observations , see Fig 1 ., We discuss three model variants , each composed of ( A ) a subsystem endowed with bistability , and ( B ) a subsystem responsible for negative feedback ., In short , Model 1 assumes ECM competition for ( A ) and feedbacks mediated by GTPases for ( B ) ., In contrast , in Model 2 we assume GTPase dynamics for ( A ) and ECM mediated feedbacks for ( B ) ., Model 3 resembles Model 2 , but further assumes 
limited total pool of each GTPase ( conservation ) , which turns out to be a critical feature ., See Tables 1 and 2 for details ., We analyze each model variant as follows: first , we determine ( bi\/mono ) stable regimes of subsystem ( A ) in isolation , using standard bifurcation methods ., Next , we parameterize subsystem ( B ) so that its slow negative feedback generates oscillations when ( A ) and ( B ) are coupled in the model as a whole ., For this to work , ( B ) has to force ( A ) to transition from one monostable steady state to the other ( across the bistable regime ) as shown in the relaxation loop of Fig 2d ., This requirement informs the magnitude of feedback components ., Although these considerations do not fully constrain parameter choices , we found it relatively easy to then parameterize the models ( particularly Models 1b and 3 ) ., This implies model robustness , and suggests that broad regions of parameter space lead to behavior that is consistent with experimental observations ., Parameters associated with rates of activation and\/or feedback strengths are summarized in the S1 Text ., The parameters \u03b3i represent the strengths of feedbacks 1 or 2 in Fig 2, ( b ) and 2, ( c ) ., \u03b3R controls the positive feedback ( 2 ) of Rac ( via lamellipod spreading ) on ECM signaling , and \u03b3\u03c1 represents the magnitude of negative feedback ( 1 ) from Rho to ECM signaling ( due to lamellipod contraction ) ., \u03b3E controls the strength of ECM activation of Rho in both feedbacks ( 1 ) and ( 2 ) ., When these feedbacks depend on cell state variables , we typically use Hill functions with magnitude \u03b3i , or , occasionally , linear expressions with slopes \u03b3 \u00af i ., ( These choices are distinguished by usage of overbar to avoid confusing distinct units of the \u03b3\u2019s in such cases . 
), Experimental manipulations in 11 ( described in Section \u201cExperimental observations constraining the models\u201d ) can be linked to the following parameter variations ., In view of this correspondence between model parameters and experimental manipulations , our subsequent analysis and bifurcation plots will highlight the role of feedback parameters \u03b3R , \u03c1 , E in the predictions of each model ., Rather than exhaustively mapping all parameters , our goal is to use 1 and 2-parameter bifurcation plots with respect to these parameters to check for ( dis ) agreement between model predictions and experimental observations ( O1\u2013O3 ) ., This allows us to ( in ) validate several hypotheses and identify the eventual model ( the Hybrid , Model 3 ) and set of hypotheses that best account for observations ., We first investigated the possibility that lamellipod competition is responsible for bistability and that GTPases interactions create negative feedback that drives the oscillations observed in some cells ., To explore this idea , we represented the interplay between lamellipodia ( e . g . , competition for growth due to membrane tension or volume constraints ) , using an elementary Lotka-Volterra ( LV ) competition model ., For simplicity , we assume that AE , LE depend linearly on Rac and Rho concentration , and set BE = 0 ., ( This simplifies subsequent analysis without significantly affecting qualitative conclusions . 
), With these assumptions , the ECM Eq ( 3c ) reduces to the well-known LV species-competition model ., First consider Eq ( 3c ) as a function of parameters ( AE , LE ) , in isolation from GTPase input ., As in the classical LV system 45 , competition gives rise to coexistence , bistability , or competitive exclusion , the latter two associated with a polarized cell ., These regimes are indicated on the parameter plane of Fig 3a with the ratios of contractile ( LE ) and protrusive ( AE ) strengths in each lamellipod as parameters ., ( In the full model , these quantities depend on Rac and Rho activities; the ratios LE ( \u03c1k ) \/AE ( Rk ) for lamellipod k = 1 , 2 lead to aggregate parameters that simplify this figure ., ) We can interpret the four parameter regimes in Fig 3a as follows: I ) a bistable regime: depending on initial conditions , either lamellipod \u201cwins\u201d the competition ., II ) Lamellipod 1 always wins ., III ) Lamellipod 2 always wins ., IV ) Lamellipods 1 and 2 coexist at finite sizes ., Regimes I-III represent strongly polarized cells , whereas IV corresponds to an unpolarized ( or weakly polarized ) cell ., We next asked whether , and under what conditions , GTPase-mediated feedback could generate relaxation oscillations ., Such dynamics could occur provided that slow negative feedback drives the ECM subsystem from an E1-dominated state to an E2-dominated state and back ., In Fig 3a , this corresponds to motion along a path similar to the one labeled ( d ) in Panel ( a ) , with the ECM subsystem circulating between Regimes II and III ., This can be accomplished by GTPase feedback , since both Rho and Rac modulate LE ( contractile strength ) and AE ( protrusion strength ) ., We show this idea more explicitly in Fig 3 ( c ) \u20133 ( e ) by plotting E1 vs LE1 while keeping LE1 + LE2 constant ., ( Insets similarly show E2 vs LE1 . 
), Each of Panels ( c-e ) corresponds to a 1-parameter bifurcation plot along the corresponding path labeled ( c-e ) in Panel ( a ) ., We find the following possible transitions: In Fig 3c , we find two distinct polarity states: either E1 or E2 dominate while the other is zero regardless of the value of LE1; a transition between such states does not occur ., In Fig 3d , there is a range of values of LE1 with coexisting stable low and high E1 values ( bistable regime ) flanked by regimes where either the lower or higher state loses stability ( monostable regimes ) ., As indicated by the superimposed loop , a cycle of protrusion ( green ) and contraction ( blue ) could then generate a relaxation oscillation as the system traverses its bistable regime ., In Fig 3e , a third possibility is that the system transits between E1-dominated , coexisting , and E2-dominated states ., In brief , for oscillatory behavior , GTPase feedback should drive the ECM-subsystem between regimes I , II , and III without entering regime IV ., Informed by this analysis , we next link the bistable ECM submodel to a Rac-Rho system ., To ensure that the primary source of bistability is ECM dynamics , a monostable version of the Rac-Rho sub-system is adopted by setting n = 1 in the GTPase activation terms AR , A\u03c1 in Eqs ( 3a ) and ( 3b ) ., We consider three possible model variants ( 1a-1c ) for the full ECM \/ GTPase model ., In view of the conclusions thus far , we now explore the possibility that bistability stems from mutual antagonism between Rac and Rho , rather than lamellipod competition ., To do so , we chose Hill coefficients n = 3 in the rates of GTPase activation , AR , A\u03c1 ., We then assume that ECM signaling both couples the lamellipods and provides the requisite slow negative feedback ., Here we consider the case that GTPases are abundant , so that the levels of inactive Rac and Rho ( RI , \u03c1I ) are constant ., We first characterize the GTPase dynamics with bR , 
b\u03c1 as parameters ., Subsequently , we include ECM signaling dynamics and determine how the feedback drives the dynamics in the ( bR , b\u03c1 ) parameter plane ., Isolated from the ECM influence , each lamellipod is independent , so we consider only the properties of GTPase signaling in one ., This mutually antagonistic GTPase submodel is the well-known \u201ctoggle switch\u201d 50 that has a bistable regime , as shown in the ( bR , b\u03c1 ) plane of Fig 4a ., ECM signaling affects the Rac \/ Rho system only as an input to b\u03c1 ., A linear dependence of b\u03c1 on Ek failed to produce an oscillatory parameter regime , so we used a nonlinear Hill-type dependence with basal and saturating components ., Furthermore , for GTPase influence on ECM signaling we use Hill functions for the influence of Rho ( in LE ) and Rac ( in BE ) on contraction and protrusion , respectively ., We set AE = 0 in this model for simplicity ., ( Nonzero AE can lead to compounded ECM bistability that we here do not consider .
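The bistable \u201ctoggle switch\u201d regime can be illustrated with a minimal Gardner-Collins-style sketch using mutual repression through Hill functions with coefficient n = 3; all rate forms, parameter values, and the `toggle_steady_state` helper are illustrative assumptions rather than the paper's Eqs ( 3a ) and ( 3b ).

```python
def toggle_steady_state(b_rac, b_rho, R0, P0, n=3, dt=0.01, steps=50000):
    """Mutually antagonistic Rac/Rho 'toggle switch': each GTPase's
    activation is repressed by the other via a Hill function (coefficient n),
    on top of basal activation rates b_rac and b_rho; linear inactivation."""
    R, P = R0, P0  # active Rac, active Rho
    for _ in range(steps):
        dR = b_rac + 2.0 / (1.0 + P ** n) - R
        dP = b_rho + 2.0 / (1.0 + R ** n) - P
        R += dt * dR
        P += dt * dP
    return R, P

# Inside the bistable regime, the initial condition selects the state:
hi_rac = toggle_steady_state(0.1, 0.1, R0=2.0, P0=0.1)  # high Rac / low Rho
hi_rho = toggle_steady_state(0.1, 0.1, R0=0.1, P0=2.0)  # low Rac / high Rho

# Raising b_rho past the bistable range leaves only the Rho-dominated state,
# regardless of initial conditions (cf. a horizontal cut through Fig 4a):
forced = toggle_steady_state(0.1, 2.0, R0=2.0, P0=0.1)
```

Since ECM signaling enters only through b\u03c1, a slow back-and-forth drive of b_rho across the bistable range in such a sketch produces the relaxation-oscillation cycles described in the text.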
), Given the structure of the b\u03c1 \u2212 bR parameter plane and the fact that ECM signaling variables only influence b\u03c1 , we can view oscillations as periodic cycles of contraction and protrusion forming a trajectory along one of the horizontal dashed lines in Fig 4a ., This idea guides our parametrization of the model ., We select a value of bR that admits a bistable range of b\u03c1 in Fig 4a ., Next we choose maximal and minimal values of the function b\u03c1 ( Ek ) that extend beyond the borders of the bistable range ., This choice means that the system transitions from the high Rac \/ low Rho state to the low Rac \/ high Rho state over each of the cycles of its oscillation ., With this parametrization , we find oscillatory dynamics , as shown in Fig 4b ., We now consider the two-lamellipod system with the above GTPase module in each lamellipod; we challenge the full model with experimental observations ., Since each lamellipod has a unique copy of the Rac-Rho module , ECM signaling provides the only coupling between the two lamellipods ., First , we observed that inhibition of ROCK ( reduction of \u03b3\u03c1 in Fig 4b ) suppresses oscillations ., However , the resulting stationary state is non-polar , in contrast to the experimentally observed increase in the fraction of polarized cells ( O1 ) ., We adjusted the coupling strength ( lc ) to ensure that this disagreement was not merely due to insufficient coupling between the two lamellipods ., While an oscillatory regime persists , the discrepancy with ( O1 ) is not resolved: the system oscillates , but inhibiting ROCK gives rise to a non-polarized stationary state , contrary to experimental observations ., Yet another problematic feature of the model is its undue sensitivity to the strength of Rac activation ( bR ) ., This is evident from a comparison of the dashed lines in Fig 4a ., A small change in bR ( vertical shift ) dramatically increases the range of bistability ( horizontal span ) , and hence the range
of values of b\u03c1 to be traversed in driving oscillations ., This degree of sensitivity seems inconsistent with biological behavior ., It is possible that an alternate formulation of the model ( different kinetic terms or different parametrization ) might fix the discrepancies noted above , so we avoid ruling out this scenario altogether ., In our hands , this model variant failed ., However a simple augmentation , described below , addresses all deficiencies , and leads to the final result ., In our third and final step , we add a small but significant feature to the bistable GTPase model to arrive at a working variant that accounts for all observations ., Keeping all equations of Model 2 , we merely drop the assumption of unlimited Rac and Rho ., We now require that the total amount of each GTPase be conserved in the cell ., This new feature has two consequences ., First , lamellipods now compete not only for growth , but also for limited pools of Rac and Rho ., This , along with rapid diffusion of inactive GTPases across the cell 30 , 31 , 51 provides an additional global coupling of the two lamellipods ., This seemingly minor revision produces novel behavior ., We proceed as before , first analyzing the GTPase signaling system on its own ., With conservation , the bR \u2212 b\u03c1 plane has changed from its previous version ( Fig 4a for Model 2 ) to Fig 5a ., For appropriate values of bR , there is a significant bistable regime in b\u03c1 ., Indeed , we find three regimes of behavior as the contractile strength in lamellipod k , b\u03c1 ( Ek ) , varies: a bistable regime where polarity in either direction is possible , a regime where lamellipod j \u201cwins\u201d ( Ej > Ek , left of the bistable regime ) , and a regime where lamellipod k \u201cwins\u201d ( right of the bistable regime ) ., Only polarity in a single direction is possible on either side of the bistable regime ., As in Model 2 , we view oscillations in the full model as cycles of lamellipodial 
protrusion and contraction that modify b\u03c1 ( Ek ) over time , and result in transitions between the three polarity states ., To parameterize the model , we repeat the process previously described ( choose a value of bR consistent with bistability , then choose the dependence of b\u03c1 on ECM signaling so as to traverse that entire bistable regime ) ., We couple the GTPase system with ECM equations as before ., We then check for agreement with observations ( O1\u2013O3 ) ., As shown in Fig 5 ( e ) and 5 ( f ) , the model produces both polarized and oscillatory solutions ., To check consistency with experiments , we mapped the dynamics of this model with respect to both ROCK-mediated contraction and PI3K-mediated protrusion ( Fig 5c ) ., Inhibiting ROCK ( Fig 5b , decreasing \u03b3\u03c1 ) results in a transition from oscillations to polarized states , consistent with ( O1 ) ., PI3K upregulation promotes oscillations ( increasing \u03b3R , Fig 5c ) , characteristic of the more invasive cell line 1205Lu ( consistent with O2 ) ., Finally , increased fibronectin density ( increased \u03b3E , Fig 5d ) also promotes oscillations , consistent with ( O3 ) ., We conclude that this Hybrid Model can account for polarity and oscillations , and that it is consistent with the three primary experimental observations ( O1\u2013O3 ) ., Finally , Model 3 can recapitulate such observations with more reasonable timescales for GTPase and ECM dynamics than were required for Model variant 1b ., It is apparent that Model 3 contains two forms of lamellipodial coupling: direct ( mechanical ) competition and competition for the limited pools of inactive Rac and Rho ., While the former is certain to be an important coupling in some contexts or conditions 52 , we find that it is dispensable in this model ( e .
g . , see lc = 0 in Fig 5c ) ., We comment on the effect of such coupling in the Discussion ., In the context of this final model , we also tested the effect of ECM activation of Rac ( in addition to the already assumed effect on Rho activation ) ., As shown in Fig 5d ( dashed curves ) , the essential bifurcation structure is preserved when this modification is incorporated ( details in the S1 Text , and implications in the Discussion ) ., To summarize , Model 1b was capable of accounting for all observations , but required conservation of GTPase to do so ., This model was , however , rejected due to the unreasonable timescales needed to give rise to oscillations ., Model 2 could account for oscillations with appropriate timescales , but it appears to be highly sensitive to parameters and , in our hands , inconsistent with experimental observations ., Model 3 , which combines the central features of Models 1b and 2 , has the right mix of timescales and agrees with experimental observations ., In that final Hybrid Model , ECM-based coupling ( lc ) due to mechanical tension or competition for other resources is not essential , but its inclusion makes oscillations more prevalent ( Fig 5b and 5e ) ., Furthermore , in this Hybrid Model , we identify two possible negative feedback motifs , shown in Fig 2b ., These appear to work cooperatively in promoting oscillations ., As we have argued , feedbacks are tuned so that ECM signaling spans a range large enough that b\u03c1 ( Ek ) traverses the entire bistable regime ( Fig 5a ) ., This is a requirement for the relaxation oscillations schematically depicted in Fig 2c ., Within an appropriate set of model parameters , either feedback could , in principle , accomplish this ., Hence , if Feedback 1 is sufficiently strong , Feedback 2 is superfluous , and vice versa ., Alternatively , if neither suffices on its own , the combination of both could be sufficient to give rise to oscillations ., Heterogeneity among these parameters could
thus be responsible for the fact that in ROCK inhibition experiments ( where Feedback 1 is essentially removed ) , most but not all cells transition to the persistent polarity phenotype ., The Hybrid Model ( Model 3 ) is consistent with observations O1\u2013O3 ., We can now challenge it with several further experimental tests ., In particular , we make two predictions ., Migrating cells can exhibit a variety of behaviors ., These behaviors can be modulated by the cell\u2019s internal state , its interactions with the environment , or mutations such as those leading to cancer progression ., In most cases , the details of the mechanisms underlying a specific behavior , or leading to transitions from one phenotype to another , are unknown or poorly understood ., Moreover , even in cases where one or more defective proteins or genes are known , the complexity of signaling networks makes it difficult to untangle the consequences ., Hence , using indirect observations of cell migration phenotypes to elucidate the properties of underlying signaling modules and feedbacks is , as argued here , a useful exercise ., Using a sequence of models and experimental observations ( O1\u2013O3 ) we tested several plausible hypotheses for melanoma cell migration phenotypes observed in 11 ., By so doing , we found that GTPase dynamics are fundamental to providing ( 1 ) bistability associated with polarity and ( 2 ) coupling between competing lamellipods to select a single \u201cfront\u201d and \u201crear\u201d ., ( This coupling is responsible for the antiphase lamellipodial oscillations ) ., Further , slow feedback between GTPase and ECM signaling resulting from contraction and protrusion generates oscillations similar to those observed experimentally ., The single successful model , the Hybrid Model ( Model 3 ) , is essentially a relaxation oscillator ., Mutual inhibition between the limited pools of Rac and Rho sets up a primary competition between lamellipods that produces a bistable system with
polarized states pointing in opposite directions ., Interactions between GTPase dynamics and ECM signaling provide the negative feedback required to flip this system between the two polarity states , generating oscillations for appropriate parameters ., Results of Model 3 are consistent with observations ( O1\u2013O3 ) , and lead to predictions ( P1\u2013P2 ) that are also confirmed by experimental observations 11 ., In 11 , it is further shown that the fraction of cells exhibiting each of these behaviors can be quantitatively linked to heterogeneity in the ranges of parameters representing the cell populations in the model\u2019s parameter space ., In our models , we assumed that the dominant effect of ECM signaling input is to activate Rho , rather than Rac ., In reality , both GTPases are likely activated to some extent in a cell- and environment-dependent manner 41 , 42 ., We can incorporate ECM activation of Rac by amending the term AR so that its magnitude is dependent on ECM signaling ( Ek ) ., Doing so results in a shift in the borders of the regimes we have indicated in Fig 5d ( dashed versus solid borders , see S1 Text for more details ) ., So long as Rho activation is the dominant effect , this hardly changes the qualitative results ., As the feedback onto Rac strengthens , however , the size of the oscillatory regime is reduced ., Thus , if feedback onto Rac dominates , loss of oscillations would be predicted ., This is to be expected based on the structure of these interactions ., Whereas ECM \u2192 Rho mediates a negative feedback , ECM \u2192 Rac mediates a positive feedback , which would be expected to suppress oscillatory behavior ., Thus , while the ECM likely mediates multiple signaling feedbacks , this modeling suggests that feedback onto Rho is most consistent with observations ., We have argued that conservation laws ( a fixed total amount of Rac and a fixed total amount of Rho ) in the cell play an important role in the competition between
lamellipods ., Such conservation laws are also found to be important in a number of other settings ., Fully spatial ( PDE ) modeling of GTPase function has shown that conservation significantly alters signaling dynamics 27 , 31 , 54 ., In 55 , it was shown that stochastically initiated hot spots of PI3K appeared to be globally coupled , potentially through a shared and conserved cytoplasmic pool of a signaling regulator ., Conservation of MIN proteins , which set up a standing wave oscillation during bacterial cell division , has been shown to give rise to a new type of Turing instability 56 ., Finally , interactions between conserved GTPase and negative regulation from F-actin in a PDE model was shown to give rise to a new type of conservative excitable dynamics 46 , 47 , which have been linked to the propagation of actin waves 57 ., These results provide interesting insights into the biology of invasive cancer cells ( in melanoma in particular ) , and shed light onto the mechanisms underlying the extracellular matrix-induced polarization and migration of normal cells ., First , they illustrate that diverse polarity and migration patterns can be captured within the same modeling framework , laying the foundation for a better understanding of seemingly unrelated and diverse behaviors previously reported ., Second , our results present a mathematical and computational platform that distills the critical aspects and molecular regulators in a complex signaling cascade; this platform could be used to identify promising single molecule and molecular network targets for possible clinical intervention .","headings":"Introduction, Results, Discussion","abstract":"Protrusion and retraction of lamellipodia are common features of eukaryotic cell motility ., As a cell migrates through its extracellular matrix ( ECM ) , lamellipod growth increases cell-ECM contact area and enhances engagement of integrin receptors , locally amplifying ECM input to internal signaling cascades ., 
In contrast , contraction of lamellipodia results in reduced integrin engagement that dampens the level of ECM-induced signaling ., These changes in cell shape are both influenced by , and feed back onto ECM signaling ., Motivated by experimental observations on melanoma cells lines ( 1205Lu and SBcl2 ) migrating on fibronectin ( FN ) coated topographic substrates ( anisotropic post-density arrays ) , we probe this interplay between intracellular and ECM signaling ., Experimentally , cells exhibited one of three lamellipodial dynamics: persistently polarized , random , or oscillatory , with competing lamellipodia oscillating out of phase ( Park et al . , 2017 ) ., Pharmacological treatments , changes in FN density , and substrate topography all affected the fraction of cells exhibiting these behaviours ., We use these observations as constraints to test a sequence of hypotheses for how intracellular ( GTPase ) and ECM signaling jointly regulate lamellipodial dynamics ., The models encoding these hypotheses are predicated on mutually antagonistic Rac-Rho signaling , Rac-mediated protrusion ( via activation of Arp2\/3 actin nucleation ) and Rho-mediated contraction ( via ROCK phosphorylation of myosin light chain ) , which are coupled to ECM signaling that is modulated by protrusion\/contraction ., By testing each model against experimental observations , we identify how the signaling layers interact to generate the diverse range of cell behaviors , and how various molecular perturbations and changes in ECM signaling modulate the fraction of cells exhibiting each ., We identify several factors that play distinct but critical roles in generating the observed dynamic: ( 1 ) competition between lamellipodia for shared pools of Rac and Rho , ( 2 ) activation of RhoA by ECM signaling , and ( 3 ) feedback from lamellipodial growth or contraction to cell-ECM contact area and therefore to the ECM signaling level .","summary":"Cells crawling through tissues migrate inside a 
complex fibrous environment called the extracellular matrix ( ECM ) , which provides signals regulating motility ., Here we investigate one such well-known pathway , involving mutually antagonistic signalling molecules ( small GTPases Rac and Rho ) that control the protrusion and contraction of the cell edges ( lamellipodia ) ., Invasive melanoma cells were observed migrating on surfaces with topography ( array of posts ) , coated with adhesive molecules ( fibronectin , FN ) by Park et al . , 2017 ., Several distinct qualitative behaviors they observed included persistent polarity , oscillation between the cell front and back , and random dynamics ., To gain insight into the link between intracellular and ECM signaling , we compared experimental observations to a sequence of mathematical models encoding distinct hypotheses ., The successful model required several critical factors ., ( 1 ) Competition of lamellipodia for limited pools of GTPases ., ( 2 ) Protrusion \/ contraction of lamellipodia influence ECM signaling ., ( 3 ) ECM-mediated activation of Rho ., A model combining these elements explains all three cellular behaviors and correctly predicts the results of experimental perturbations ., This study yields new insight into how the dynamic interactions between intracellular signaling and the cell\u2019s environment influence cell behavior .","keywords":"cell physiology, cell motility, engineering and technology, enzymes, signal processing, biological cultures, enzymology, cell polarity, developmental biology, gtpase signaling, cell cultures, melanoma cells, cellular structures and organelles, research and analysis methods, extracellular matrix signaling, proteins, extracellular matrix, guanosine triphosphatase, biochemistry, signal transduction, hydrolases, cell biology, cell migration, biology and life sciences, cultured tumor cells, cell signaling","toc":null} +{"Unnamed: 0":1170,"id":"journal.pcbi.1002932","year":2013,"title":"A Protein Turnover Signaling 
Motif Controls the Stimulus-Sensitivity of Stress Response Pathways","sections":"Eukaryotic cells must constantly recycle their proteomes ., Of the approximately 10^9 proteins in a typical mouse L929 fibrosarcoma cell , 10^6 are degraded every minute 1 ., Assuming first-order degradation kinetics , this rate of constitutive protein turnover , or flux , imposes an average half-life of 24 hours ., Not all proteins are equally stable , however ., Genome-wide quantifications of protein turnover in HeLa cells 2 , 3 and 3T3 murine fibroblasts 4 show that protein half-lives can span several orders of magnitude ., Thus while some proteins exist for months and even years 5 , others are degraded within minutes ., Gene ontology terms describing signaling functions are highly enriched among short-lived proteins 3 , 6 , 7 , suggesting that rapid turnover is required for proper signal transduction ., Indeed , defects in protein turnover are implicated in the pathogenesis of cancer and other types of human disease 8 , 9 ., Conspicuous among short-lived signaling proteins are those that regulate the p53 and NF\u03baB stress response pathways ., The p53 protein itself , for example , has a half-life of less than 30 minutes 10 , 11 ., Mdm2 , the E3 ubiquitin ligase responsible for regulating p53 , has a half-life of 45 minutes 4 ., And the half-life of unbound I\u03baB\u03b1 , the negative feedback regulator of NF\u03baB , is less than 15 minutes 12 , 13 ( see Figure S1 ) , requiring that 6 , 500 new copies of I\u03baB\u03b1 be synthesized every minute 13 ., Given the energetic costs of protein synthesis , we hypothesized that rapid turnover of these proteins is critical to the stimulus-response behavior of their associated pathways ., To test our hypothesis we developed a method to systematically alter the rates of protein turnover in mass action models without affecting their steady state abundances ., Our method requires an analytical expression for the steady state of a model ,
which we derive using the py-substitution method described in a companion manuscript ., From this expression , changes in parameter values that do not affect the steady state are found in the null space of the matrix whose elements are the partial derivatives of the species abundances with respect to the parameters ., We call this vector space the isostatic subspace ., After deriving a basis for this subspace , linear combinations of basis vectors identify isostatic perturbations that modify specific reactions independently of all the others , for example those that control protein turnover ., By systematic application of these isostatic perturbations to a model operating at steady state , the effects of flux on stimulus-responsiveness can be studied in isolation from changes to steady-state abundances ( see Methods ) ., We first apply our method to a prototypical negative feedback module in which an activator controls the expression of its own negative regulator ., We show that reducing the flux of either the activator or its inhibitor slows the response to stimulation ., However , reducing the flux of the activator lowers the magnitude of the response , whereas reducing the flux of the inhibitor increases it ., This complementarity allows the activator and inhibitor fluxes to exert precise control over the module\u2019s response to stimulation ., Given this level of control , we hypothesized that rapid turnover of p53 and Mdm2 must be required for p53 signaling ., A hallmark of p53 is that it responds to DNA damage in a series of digital pulses 14\u201318 ., These pulses are important for determining cell fate 19\u201321 ., To test whether high p53 and Mdm2 flux are required for p53 pulses , we applied our method to a model in which exposure to ionizing radiation ( IR ) results in oscillations of active p53 17 ., By varying each flux over three orders of magnitude , we show that high p53 flux is indeed required for oscillations ., In contrast , high Mdm2 flux is not
required , but rather controls the refractory time in response to transient stimulation ., If the flux of Mdm2 is low , a second stimulus after 22 hours does not result in appreciable activation of p53 ., In contrast to p53 , the flux of NF\u03baB turnover is very low , while the flux of its inhibitor , I\u03baB , is very high ., Prior to stimulation , most NF\u03baB is sequestered in the cytoplasm by I\u03baB ., Upon stimulation by an inflammatory signal like tumor necrosis factor alpha ( TNF ) , I\u03baB is phosphorylated and degraded , resulting in rapid but transient translocation of NF\u03baB to the nucleus and activation of its target genes 22\u201324 ., Two separate pathways are responsible for the turnover of I\u03baB 12 ., In one , I\u03baB bound to NF\u03baB is phosphorylated by the I\u03baB kinase ( IKK ) and targeted for degradation by the ubiquitin-proteasome system ., In the other pathway , unbound I\u03baB is targeted for degradation and requires neither IKK nor ubiquitination 25 , 26 ., We call these the \u201cproductive\u201d and \u201cfutile\u201d fluxes , respectively ., Applying our method to a model of NF\u03baB activation , we show that the futile flux acts as a negative regulator of NF\u03baB activation while the productive flux acts as a positive regulator ., We find that turnover of bound I\u03baB is required for NF\u03baB activation in response to TNF , while high turnover of unbound I\u03baB prevents spurious activation of NF\u03baB in response to low doses of TNF or ribotoxic stress caused by ultraviolet light ( UV ) ., As with p53 then , juxtaposition of a positive and negative regulatory flux governs the sensitivity of NF\u03baB to different stimuli , and may constitute a common signaling motif for controlling stimulus-specificity in diverse signaling pathways ., To examine the effects of flux on stimulus-responsiveness , we built a prototypical negative feedback model reminiscent of the p53 or NF\u03baB stress-response pathways (
Figure 1A ) ., In it , an activator \u201cX\u201d is constitutively expressed but catalytically degraded by an inhibitor , \u201cY\u201d ., The inhibitor is constitutively degraded but its synthesis requires X . Activation is achieved by instantaneous depletion of Y , the result of which is accumulation of X until negative feedback forces a return to steady state ., The dynamics of this response can be described by two values: the amplitude , or maximum value of X after stimulation , and the peak time at which that maximum is observed ( Figure 1B ) ., Parameters for this model were chosen such that the abundances of both X and Y are one arbitrary unit and X achieves its maximum value at a fixed reference time , where the units of time are also arbitrary ., To address the role of these parameters in shaping the response of the activator , we first performed a traditional sensitivity analysis ., We found that increasing the synthesis of X ( Figure 1C ) , or decreasing the degradation of X ( Figure 1D ) or the synthesis of Y ( Figure 1E ) , all result in increased responsiveness ., However , these changes also increase the abundance of X .
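The prototype module and its response metrics can be sketched numerically. The mass-action forms and unit rate constants below are illustrative assumptions chosen so that X* = Y* = 1; scaling both the synthesis and the degradation of X by the same factor ( `flux_x` , a name introduced here ) is one example of the isostatic perturbations introduced above: the steady state is untouched while the turnover flux changes.

```python
def respond(flux_x=1.0, dt=0.001, steps=20000):
    """Prototype negative feedback module (illustrative rate constants):
       dX/dt = flux_x*(1 - X*Y)   (synthesis flux_x; degradation flux_x*X*Y)
       dY/dt = X - Y              (synthesis driven by X; first-order decay)
    The steady state is X* = Y* = 1 for ANY flux_x, so flux_x rescales X's
    turnover flux without changing abundances (an isostatic perturbation).
    Stimulation = instantaneous depletion of Y at t = 0.
    Returns the response amplitude (max X) and its peak time."""
    X, Y = 1.0, 0.0
    amp, t_amp = X, 0.0
    for i in range(steps):
        X += dt * flux_x * (1.0 - X * Y)
        Y += dt * (X - Y)
        if X > amp:
            amp, t_amp = X, (i + 1) * dt
    return amp, t_amp

amp_lo, t_lo = respond(flux_x=1.0)
amp_hi, t_hi = respond(flux_x=3.0)
# Tripling X's turnover flux yields a larger amplitude and an earlier peak:
# the "strong, fast response" attributed to high activator flux.
```

In this sketch an isostatic change of the inhibitor flux would analogously rescale both terms of the Y equation, allowing the complementary effects of the two fluxes to be explored.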
To distinguish between the effects caused by changes in flux and those caused by changes in abundance , we developed a method that alters the flux of X and Y while holding their steady state abundances fixed ., Using this method , we found that increasing the flux of X increases responsiveness ( Figure 1G ) , but not to the same extent as increasing the synthesis parameter alone ( Figure 1C ) ., In contrast , reducing the flux of Y yields the same increase in responsiveness as decreasing the synthesis of Y ( Figure 1E ) or the degradation of X ( Figure 1D ) ., These observations suggest that both the flux and abundance of X are important regulators of the response , as is the flux of Y , but not its abundance ., This conclusion is supported by the observation that when the abundance of Y is increased by reducing its degradation , there is little effect on signaling ( Figure 1F ) ., To further characterize the effects of flux on the activator\u2019s response to stimulation , we applied systematic changes to the fluxes of X and Y prior to stimulation and plotted the resulting amplitudes and peak times ., Multiplying the flux of X by a range of factors showed , as expected , that the amplitude increases while the peak time decreases ( Figure 2A ) ., In other words , a high activator flux results in a strong , fast response to stimulation ., If we repeat the process with the inhibitor , we find that both the amplitude and the peak time decrease as the flux increases; a high inhibitor flux results in a fast but weak response ( Figure 2B ) ., This result illustrates that fluxes of different regulators can have different but complementary effects on stimulus-induced signaling dynamics ., Complementarity suggests that changes in flux can be identified such that the amplitude is altered independently of the peak time , or the peak time independently of the amplitude ., Indeed , if both activator and inhibitor fluxes are increased in equal measure , the amplitude is held fixed while the peak time decreases ( Figure 2C ) ., Increasing both fluxes thus simultaneously reduces the timescale of the
response without affecting its magnitude ., An equivalent relationship can be found such that the peak time remains fixed while the amplitude is affected ( Figure 2D ) ., Because an increase in either flux will reduce the peak time , altering the amplitude without affecting the peak time requires an increase in one flux but a decrease in the other ., Also , the peak time is more sensitive to changes in the inhibitor flux than in the activator flux; small changes in the former must be paired with larger changes in the latter ., This capability to achieve any value of the amplitude or the peak time indicates that flux can precisely control the response to stimulation , without requiring any changes to steady state protein abundance ., Given that flux precisely controls the dynamic response to stimulation in a prototypical signaling module , we hypothesized that for p53 , oscillations in response to DNA damage require the high rates of turnover reported for p53 and Mdm2 ., To test this , we applied our method to a published model of p53 activation in response to ionizing gamma radiation ( IR ) , a common DNA damaging agent ( Figure 3A ) 17 ., Because the model uses arbitrary units , we rescaled it so that the steady state abundances of p53 and Mdm2 , as well as their rates of synthesis and degradation , matched published values ( see Table S1 ) ., We note that these values are also in good agreement with the consensus parameters reported in 16 ., Next we implemented a multiplier of Mdm2-independent p53 flux and let it take values over an interval spanning three orders of magnitude ., For each value we simulated the response to IR using a step function in the production of the upstream Signal molecule , as previously described 17 ., To characterize the p53 response we measured the amplitude of stable oscillations in phosphorylated p53 ( Figure 3B ) , and used this as a metric for p53 sensitivity ., Where this amplitude is greater than zero , we say the module is sensitive to IR stimulation ., We find that the amplitude is greater than zero only when the flux of p53 is near its observed value or higher ( Figure 4A ) ., If the flux of p53 is reduced by 2-fold or more
, p53 no longer stably oscillates in response to stimulation , but exhibits damped oscillations instead ., Interestingly , repeating this analysis with a multiplier for the Mdm2 flux over the same interval reveals that Mdm2 flux has little bearing on p53 oscillations ( Figure 4B ) ., For any value of the multiplier chosen , the module remains sensitive to IR ., As with p53 , this multiplier alters the Signal-independent flux of Mdm2 but does not affect Signal-induced Mdm2 degradation ., If oscillations are already compromised by a reduced p53 flux , no concomitant reduction in Mdm2 flux can rescue the oscillations ( Figure 4C ) ., We therefore conclude that the flux of p53 , but not Mdm2 , is required for IR-sensitivity in the p53 signaling module ., What then is the physiological relevance of high Mdm2 flux ?, In the model , signal-mediated Mdm2 auto-ubiquitination 27 is a major contributor to Mdm2 degradation after stimulation ., If Signal production is transient , Mdm2 protein levels must be restored solely via Signal-independent degradation ., We therefore hypothesized that if the flux of Mdm2 is low , Mdm2 protein levels would remain elevated after stimulation and compromise sensitivity to subsequent stimuli ., To test this hypothesis we again let the Mdm2 flux multiplier take values over the same interval ., For each value we stimulated the model with a 2-hour pulse of Signal production , followed by 22 hours of rest , followed by a second 2-hour pulse ( Figure 3B ) ., We measured the amplitude of the first peak of phosphorylated p53 and the amplitude of the second peak ., Sensitivity to the second pulse is defined as the difference between the two , with a difference of zero indicating full sensitivity ., As seen in Figures 4D and E , the flux of p53 has no bearing on the sensitivity to the second pulse while the flux of Mdm2 strongly affects it ., At one one-hundredth the observed Mdm2 flux \u2013 corresponding to a protein half-life of 3 days \u2013 over 20 , 000 fewer molecules of p53 are phosphorylated ,
representing more than a two-fold reduction in sensitivity ( Figure 4E ) ., This result is robust with respect to the interval of time chosen between pulses ( Figure S2 ) ., If the sensitivity to the second pulse is already compromised by a reduced Mdm2 flux , a concomitant reduction in p53 flux fails to rescue it , while an increase in p53 flux still further reduces it ( Figure 4F ) ., We therefore conclude that the flux of Mdm2 , and not p53 , controls the system's refractory time , and a high Mdm2 flux is required to re-establish sensitivity after transient stimulation ., A second major stress-response pathway is that of NF\u03baB ., NF\u03baB is potently induced by the inflammatory cytokine TNF , but shows a remarkable resistance to internal metabolic perturbations or ribotoxic stresses induced by ultraviolet light ( UV ) 13 , or to triggers of the unfolded protein response ( UPR ) 28 ., Like p53 , the dynamics of NF\u03baB activation play a major role in determining target gene expression programs 29 , 30 ., Although NF\u03baB is considered stable , the flux of I\u03baB\u03b1 \u2013 the major feedback regulator of NF\u03baB \u2013 is conspicuously high ., We hypothesized that turnover of I\u03baB controls the stimulus-responsiveness of the NF\u03baB signaling module ., Beginning with a published model of NF\u03baB activation 13 , we removed the beta and epsilon isoforms of I\u03baB , leaving only the predominant isoform , I\u03baB\u03b1 ( hereafter , simply \u201cI\u03baB\u201d; Figure 5A ) ., Steady state analysis of this model supported the observation that almost all I\u03baB is degraded by either of two pathways: a \u201cfutile\u201d flux , in which I\u03baB is synthesized and degraded as an unbound monomer; and a \u201cproductive\u201d flux , in which free I\u03baB enters the nucleus and binds to NF\u03baB , shuttles to the cytoplasm , then binds to and is targeted for degradation by IKK ( Figure 5B ) ., These two pathways account for 92 . 5% and 7 .
3% of the total I\u03baB flux , respectively ., The inflammatory stimulus TNF was modeled as before , using a numerically-defined IKK activity profile derived from in vitro kinase assays 30 ( Figure 5A , variable ) ., Stimulating with TNF results in strong but transient activation of NF\u03baB ., A second stimulus , ribotoxic stress induced by UV irradiation , was modeled as 50% reduction in translation and results in only modest activity 13 ., As above , we let be the amplitude of activated NF\u03baB in response to TNF and the time at which is observed ., Analogously , we let be the amplitude of NF\u03baB in response to UV , and the time at which NF\u03baB activation equals one-half ( see Figure 5C ) ., We then implemented multipliers for the futile and productive flux and let each multiplier take values on the interval ., For each value we simulated the NF\u03baB response to TNF and UV and plotted the effects on and ., The results show that reducing the productive flux yields a slower , weaker response to TNF ( Figure 6A ) ., By analogy to Figure 2 , this indicates that the productive flux of I\u03baB is a positive regulator of NF\u03baB activation ., In contrast , the futile flux acts as a negative regulator of NF\u03baB activity , though its effects on and are more modest ( Figure 6B ) ., Thus , similar to p53 , the activation of NF\u03baB is controlled by a positive and negative regulatory flux ., In response to UV , a reduction in either flux delays NF\u03baB activation , but reducing the futile flux results in a significant increase in while reducing the productive flux has almost no effect ( Figure 6C and D ) ., Conversely , while an increase in the futile flux has no effect on , an increase in the productive flux results in a significant increase ., If we now define NF\u03baB to be sensitive to TNF or UV when or are ten-fold higher than its active but pre-stimulated steady state abundance , then TNF sensitivity requires a productive flux multiplier , while 
UV insensitivity requires a productive flux multiplier and a futile flux multiplier ., This suggests that the flux pathways of I\u03baB may be optimized to preserve NF\u03baB sensitivity to external inflammatory stimuli while minimizing sensitivity to internal metabolic stresses ., In contrast to p53 , the negative regulatory flux of I\u03baB dominates the positive flux ., We hypothesized that this imbalance must affect the sensitivity of NF\u03baB to weak stimuli ., To test this hypothesis we generated dose-response curves for TNF and UV using the following multipliers for the futile flux: , , , and ( see Methods ) ., The results confirm that reducing the futile flux of I\u03baB results in hypersensitivity at low doses of TNF ( Figure 7 , Row 1 ) ., At one one-hundredth the wildtype flux , a ten-fold weaker TNF stimulus yields an equivalent NF\u03baB response to the full TNF stimulus at the wildtype flux ., Similarly , a high futile flux prevents strong activation of NF\u03baB in response to UV ( Figure 7 , Row 2 ) ., At and times the futile flux , UV stimulation results in a 20-fold increase in NF\u03baB activity , compared to just a 2-fold increase at the wildtype flux ., We therefore conclude that turnover of unbound I\u03baB controls the EC50 of the NF\u03baB signaling module , and that rapid turnover renders NF\u03baB resistant to metabolic and spurious inflammatory stimuli ., Previous studies have shown that the fluxes of p53 10 , 11 , its inhibitor Mdm2 31 , 32 , and the unbound negative regulator of NF\u03baB , I\u03baB 12 , are remarkably high ., To investigate whether rapid turnover of these proteins is required for the stimulus-response behavior of the p53 and NF\u03baB stress response pathways , we developed a computational method to alter protein turnover , or flux , independently of steady state protein abundance ., For p53 , we show that high flux is required for sensitivity to sustained stimulation after ionizing radiation ( Figure 4A ) ., 
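The paper's central computational move, scaling a protein's synthesis and degradation together so that turnover (flux) changes while the steady state does not, can be illustrated with a minimal sketch. This is not the published p53 or NF\u03baB model: the two-species negative feedback module, its rate constants, the Euler integrator, and the names `simulate` and `mu` are all illustrative assumptions. An activator X induces its inhibitor Y, Y promotes X degradation, and multiplying both of X's flux terms by the same factor `mu` leaves the steady state untouched but rescales how quickly the module relaxes after a perturbation.

```python
def simulate(mu, t_end=50.0, dt=0.001, stimulus=2.0):
    """Euler-integrate a toy negative feedback module (illustrative only).

    dx/dt = mu*(a - b*x*y)   activator X: synthesis and Y-dependent degradation,
                             both scaled by the flux multiplier mu (isostatic)
    dy/dt = c*x - d*y        inhibitor Y: induced by X, first-order decay

    Returns (steady-state x, x at t = 1, x at t = t_end).
    """
    a, b, c, d = 1.0, 1.0, 1.0, 1.0       # assumed baseline rate constants
    x_ss = (a * d / (b * c)) ** 0.5       # steady state: independent of mu
    y_ss = c * x_ss / d
    x, y = stimulus * x_ss, y_ss          # displace X from steady state at t = 0
    x_mid = x
    steps = int(round(t_end / dt))
    for i in range(steps):
        dx = mu * (a - b * x * y)         # mu multiplies synthesis AND degradation
        dy = c * x - d * y
        x += dx * dt
        y += dy * dt
        if abs((i + 1) * dt - 1.0) < dt / 2:
            x_mid = x                     # sample the trajectory at t = 1
    return x_ss, x_mid, x

# Low and high flux share the same steady state but relax at different speeds:
xs_lo, mid_lo, end_lo = simulate(mu=0.1)
xs_hi, mid_hi, end_hi = simulate(mu=10.0)
```

Both runs start and end at the same steady state, but at intermediate times the low-flux module is still far from it while the high-flux module has already relaxed, the same qualitative behavior the flux-multiplier analysis of Figure 2 attributes to turnover.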
Interestingly , inactivating mutations in p53 have long been known to enhance its stability 33 , either by interfering with Mdm2-catalyzed p53 ubiquitination 34 , 35 , or by affecting p53's ability to bind DNA and induce the expression of new Mdm2 36\u201339 ., Inactivation of p53 also compromises the cell's sensitivity to IR 40 , 41\u201343 ., Our results offer an intriguing explanation for this phenomenon , that p53 instability is required for oscillations in response to IR ., Indeed , IR sensitivity was shown to correlate with p53 mRNA abundance 44\u201346 , a likely determinant of p53 protein flux ., In further support of this hypothesis , mouse embryonic fibroblasts lacking the insulin-like growth factor 1 receptor ( IGF-1R ) exhibit reduced p53 synthesis and degradation , but normal protein abundance ., These cells were also shown to be insensitive to DNA damage caused by the chemotherapeutic agent etoposide 32 ., As with p53 , increased stability of Mdm2 has been observed in human leukemic cell lines 47 , and Mdm2 is a strong determinant of IR sensitivity 48 , 49 ., Again our results suggest these observations may be related ., Activation of p53 in response to IR is mediated by the ATM kinase ( \u201cSignal\u201d in Figure 3 ) 50 , 51 ., Batchelor et al . show that saturating doses of IR result in feedback-driven pulses of ATM , and therefore p53 17 ., In Figure 4B we show that these are independent of Mdm2 flux ., However , sub-saturating doses of IR ( 10 Gy versus 0 .
5 Gy ) 52 , 53 cause only transient activation of ATM 54 , after which constitutive Mdm2 synthesis is required to restore p53 sensitivity ( Figure 4E ) ., This suggests that high Mdm2 flux is required for sensitivity to prolonged exposure to sub-saturating doses of IR ., Indeed , this inverse relationship between flux and refractory time has been observed before ., In Ba\/F3 pro-B cells , high turnover of the Epo receptor maintains a linear , non-refractory response over a broad range of ligand concentrations 55 ., For NF\u03baB , our method revealed that an isostatic reduction in the half-life of I\u03baB sensitizes NF\u03baB to TNF ( Figure 7A ) , as well as to ribotoxic stress agents like UV ( Figure 7B ) ., This observation agrees with previous theoretical studies using a dual kinase motif , where differential stability in the effector isoforms can modulate the dynamic range of the response 56 ., For NF\u03baB , the flux of free I\u03baB acts as a kinetic buffer against weak or spurious stimuli , similar to serial post-translational modifications on the T cell receptor 57 , or complementary kinase-phosphatase activities in bacterial two-component systems 58 ., In contrast , increasing the half-life of I\u03baB\u03b1 alone \u2013 without a coordinated increase in its rate of synthesis \u2013 increases the abundance of free I\u03baB\u03b1 and actually dampens the activity of NF\u03baB in response to TNF 25 ., This difference highlights the distinction between isostatic perturbations and traditional , unbalanced perturbations that also affect the steady state abundances ., It also calls attention to a potential hazard when trying to correlate stimulus-responsiveness with protein abundance measurements: observed associations between responses and protein abundances do not rule out implied changes in kinetic parameters as the causal link ., Indeed , static measurements , not kinetic ones , are the current basis for molecular diagnosis of clinical specimens ., Thus
while nuclear expression of p53 59\u201366 and NF\u03baB 67\u201369 has been shown to correlate with resistance to treatment in human cancer , the correlation is not infallible 40 , 70\u201374 ., If stimulus-responsiveness can be controlled by protein turnover independently of changes to steady state abundance , then correlations between abundance and a therapeutic response may be masked by isostatic heterogeneity between cells ., For p53 and NF\u03baB , we show that stimulus sensitivity can be controlled by a paired positive and negative regulatory flux ., We propose that this pairing may constitute a common regulatory motif in cell signaling ., In contrast to other regulatory motifs 75 , 76 , the \u201cflux motif\u201d described here does not have a unique structure ., The positive p53 flux , for example , is formed by the synthesis and degradation of p53 itself , while the positive flux in the NF\u03baB system includes the nuclear import of free NF\u03baB and export of NF\u03baB bound to I\u03baB ., For p53 , the negative flux is formed by synthesis and degradation of Mdm2 , while for NF\u03baB it is formed by the synthesis , shuttling , and degradation of cytoplasmic and nuclear I\u03baB ., Thus the reaction structure for each flux is quite different , but they nevertheless form a regulatory motif that is common to both pathways ( Figure 8 ) ., And since the mathematical models used here are only abstractions of the underlying network , the true structures of the p53 and NF\u03baB flux motifs are in reality even more complex ., The identification of a flux motif that controls stimulus-responsiveness independently of protein abundances may prompt experimental investigation into the role of flux in signaling ., At a minimum , this could be achieved using fluorescently-labeled activator and inhibitor proteins in conjunction with tunable synthesis and degradation mechanisms ., The tet-responsive promoter system 77 , 78 , for example , could provide tunable synthesis ,
while the CLP-XP system 79 could provide tunable degradation ., For the two-dimensional analysis presented here , and to avoid confounding effects on signaling dynamics caused by shared synthesis and degradation machinery 80 , independently tunable synthesis and degradation mechanisms may be required ., If these techniques are applied to mutants lacking the endogenous regulators , this would further allow decreases in protein flux to be studied in addition to increases ., Finally , in this study we have examined the effects of flux on stimulus-responsiveness , but in a typical signaling module , many other isostatic perturbations exist ., For example , the isostatic subspace of our NF\u03baB model has 18 dimensions , of which only a few were required by the analysis presented here ., By simultaneously considering all isostatic perturbations , some measure of the dynamic plasticity of a system can be estimated , perhaps as a function of its steady-state ., Such an investigation can inform diagnosis of biological samples , and whether information from a single , static observation is sufficient to predict the response to a particular chemical treatment , or whether live-cell measurements are required as well ., As we have shown that protein turnover can be a powerful determinant of stimulus-sensitivity , we anticipate that kinetic measurements will be useful predictors of sensitivity to chemical therapeutics ., To begin , we assume that the system of interest has been modeled using mass action kinetics and that the steady state abundance of every biochemical species is a known function of input parameters ., In other words , the abundances x\u0304 satisfy f ( x\u0304 , \u03b8 ) = 0 ( 1 ) ., Equation 1 is the well-known steady-state equation; \u03b8 is a vector of independent parameters and x\u0304 is the vector of species abundances ., We use an overbar to denote a vector that satisfies Equation 1 ., For excellent reviews on mass action models and their limitations , see 81\u201383 ., For a method on finding analytical
solutions to the steady state equation , see our accompanying manuscript ., Next , we wish to find a change in the input parameters , \u0394\u03b8 , such that the resulting change in the species abundances , \u0394x\u0304 , is zero , where \u0394x\u0304 is defined as \u0394x\u0304 = x\u0304 ( \u03b8 + \u0394\u03b8 ) - x\u0304 ( \u03b8 ) ., Thus for \u0394x\u0304 = 0 , we require that x\u0304 ( \u03b8 + \u0394\u03b8 ) = x\u0304 ( \u03b8 ) ., The right-hand side of this equation can be approximated by a truncated Taylor series , as follows: \u0394x\u0304 \u2248 J \u0394\u03b8 , where J is the Jacobian matrix whose elements are the partial derivatives of each species with respect to each parameter ., Thus , for \u0394x\u0304 = 0 we require that J \u0394\u03b8 = 0 ., In other words , \u0394\u03b8 must lie in the null space of J ., We call this the isostatic subspace of the model \u2013 parameter perturbations in this subspace will not affect any of the steady-state species abundances ., If \u0394\u03b8 lies within the isostatic subspace , it is an isostatic perturbation vector ., Let B be a matrix whose columns form a basis for the isostatic subspace ., Then a general expression for an isostatic perturbation vector is simply \u0394\u03b8 = Bc ( 2 ) where c is a vector of unknown basis vector coefficients ., Finally , Equation 2 can be solved for a specific linear combination of basis vectors that achieves the desired perturbation ., In our case we identified those combinations that result in changes to protein turnover ., Our prototypical negative feedback model consists of two species , an activator \u201cX\u201d and an inhibitor \u201cY\u201d , and four reactions , illustrated in Figure 1A ., Let denote the abundance of the activator and denote the abundance of the inhibitor ., An analytical expression for the steady-state of this model was identified by solving Equation 1 for the rates of synthesis , giving ( 3 ) ( 4 ) To parameterize the model we first let ., Degradation rate constants were then calculated such that at time , where again is the maximum amplitude of the response ., Activation was achieved by instantaneous reduction of to ., To modify the flux , we defined flux multipliers and such that and ., Note that by virtue of Equations 3 and 4 , values for and other than result in
commensurate changes in and such that steady state is preserved ., See file \u201cpnfm . sci\u201d in Protocol S1 for details ., Figures 2A and 2B were achieved by letting and vary over the interval , then calculating the altered vector of rate constants and simulating the model's response to stimulation ., Figure 2C required letting vary over this same interval while having ., Finally , Figure 2D was achieved by letting vary over the same interval , and for each value of , numerically calculating the value of that gave ., All species , reactions , and rate equations required by our model of p53 oscillations are as previously described 17 ., Our only modification was to scale the parameter values so that the rates of p53 and Mdm2 synthesis and degradation , as well as their steady-state abundances , matched published observations ( see Table S1 ) ., Specifically we let ., To derive a steady-state solution for this model , we solved Equation 1 for the steady-state abundance of Mdm2 and the rate of Mdm2-independent p53 degradation , giving ., To simulate the response to ionizing radiation we used the ( scaled ) stimulus given in 17 ., Namely , at time we let the rate of Signal production , , go to ., This stimulus was either maintained indefinitely ( Figure 4A\u2013C ) or for just 2 hours , followed by 22 hours of rest , followed by a second 2 hour stimulation ( Figure 4D\u2013F ) ., Changes in p53 or Mdm2 flux were achieved as above , by defining modifiers and such that ( 5 ) ( 6 ) ( 7 ) Prior to stimulation , we let one modifier take values on the interval while holding the other modifier constant ., Equations 6 and 7 ensure that the p53-independent flux of Mdm2 is modified without affecting its steady-state abundance ., Equation 5 , which is slightly more complicated , results in changes to the rate of Mdm2-independent p53 degradation , , by modifying the independent parameter , which controls the rate of p53 synthesis ., This yields the desired ., Numerical integration was
carried out to time ., After each integration , we defined to be the minimum vertical distance between any adjacent peak and trough in phosphorylated p53 , and and to be the amplitudes of the first and second peak , respectively ., Details of this model can be found in the file \u201cp53b . sci\u201d in Protocol S1 ., For more information on the time delay parameters and , and their role in generating oscillations , see 84 , 85 ., Our model of NF\u03baB activation is similar to the one described in 13 , except the beta and epsilon isoforms of I\u03baB have been removed ., Our model has 10 species and 26 reactions , the majority of which are illustrated in Figure 5A ., Rate equations and parameter values are identical to those in 13 ., An analytical expression for the steady-state of this model was found by solving Equation 1 for the following dependent variables: , , , , and , and the rate constants , , and ., The precise expressions for these variables are extremely cumbersome but may be found in their entirety in the file \u201cnfkb . sci\u201d in Protocol S1 ., Activation of NF\u03baB is achieved by either of two time-dependent numerical input variables , and ., modifies the activity of IKK while modifies the efficiency of I\u03baB translation ., Both have a finite range of and have unstimulated , wildtype values of and , respectively ., The inflammatory stimulus TNF is modeled using a unique function of derived from in vitro kinase assays 30 ., Since these assays only measured IKK activity out to 4 hours , we extended each stimulus by assuming the value of at 4 hours is maintained out to 24 hours ., Justification for this can be found in the 24-hour kinase assays in 86 , which show no IKK activity between 8 and 24 hours after TNF stimulation ., UV stimulation is modeled using a step decrease in the value of from 1 . 0 to 0 .
5 for the entire 24 hours ., This mimics the 50% reduction in translational efficiency observed in 13 ., Steady-state analysis of this model revealed that over 99% of all I\u03baB was degraded via either of two pathways , futile ( 92% ) and productive ( 7% ) ., See Figure 5B for the composition of these pathways ., To modify the flux through either pathway without altering any of the steady-state abundances , the algebraic method described above proved absolutely necessary .","headings":"Introduction, Results, Discussion, Methods","abstract":"Stimulus-induced perturbations from the steady state are a hallmark of signal transduction ., In some signaling modules , the steady state is characterized by rapid synthesis and degradation of signaling proteins ., Conspicuous among these are the p53 tumor suppressor , its negative regulator Mdm2 , and the negative feedback regulator of NF\u03baB , I\u03baB\u03b1 ., We investigated the physiological importance of this turnover , or flux , using a computational method that allows flux to be systematically altered independently of the steady state protein abundances ., Applying our method to a prototypical signaling module , we show that flux can precisely control the dynamic response to perturbation ., Next , we applied our method to experimentally validated models of p53 and NF\u03baB signaling ., We find that high p53 flux is required for oscillations in response to a saturating dose of ionizing radiation ( IR ) ., In contrast , high flux of Mdm2 is not required for oscillations but preserves p53 sensitivity to sub-saturating doses of IR ., In the NF\u03baB system , degradation of NF\u03baB-bound I\u03baB by the I\u03baB kinase ( IKK ) is required for activation in response to TNF , while high IKK-independent degradation prevents spurious activation in response to metabolic stress or low doses of TNF ., Our work identifies flux pairs with opposing functional effects as a signaling motif that controls the stimulus-sensitivity 
of the p53 and NF\u03baB stress-response pathways , and may constitute a general design principle in signaling pathways .","summary":"Eukaryotic cells constantly synthesize new proteins and degrade old ones ., While most proteins are degraded within 24 hours of being synthesized , some proteins are short-lived and exist for only minutes ., Using mathematical models , we asked how rapid turnover , or flux , of signaling proteins might regulate the activation of two well-known transcription factors , p53 and NF\u03baB ., p53 is a cell cycle regulator that is activated in response to DNA damage , for example , due to ionizing radiation ., NF\u03baB is a regulator of immunity and responds to inflammatory signals like the macrophage-secreted cytokine , TNF ., Both p53 and NF\u03baB are controlled by at least one flux whose effect on activation is positive and one whose effect is negative ., For p53 these are the turnover of p53 and Mdm2 , respectively ., For NF\u03baB they are the TNF-dependent and -independent turnover of the NF\u03baB inhibitor , I\u03baB ., We find that juxtaposition of a positive and negative flux allows for precise tuning of the sensitivity of these transcription factors to different environmental signals ., Our results therefore suggest that rapid synthesis and degradation of signaling proteins , though energetically wasteful , may be a common mechanism by which eukaryotic cells regulate their sensitivity to environmental stimuli .","keywords":"cellular stress responses, signaling networks, mathematics, stress signaling cascade, regulatory networks, biology, nonlinear dynamics, systems biology, biochemical simulations, signal transduction, cell biology, computational biology, molecular cell biology, signaling cascades","toc":null} +{"Unnamed: 0":2446,"id":"journal.ppat.1007818","year":2019,"title":"Clonorchis sinensis excretory-secretory products increase malignant characteristics of cholangiocarcinoma cells in three-dimensional co-culture with 
biliary ductal plates","sections":"Cholangiocarcinoma ( CCA ) is an aggressive malignancy of the bile duct epithelia associated with local invasiveness and a high rate of metastases ., It is the second most common primary hepatic tumor after hepatocellular carcinoma and is considered a highly lethal cancer with a poor prognosis due to the difficulty in accurate early diagnosis 1 ., There are several established risk factors for CCA , including primary cholangitis , biliary cysts and hepatolithiasis 2 ., Another critical factor is infection with the liver flukes Opisthorchis viverrini and Clonorchis sinensis , resulting in the highest incidences of CCA being in Southeast Asian countries 3 ., The proposed mechanisms of liver fluke-associated cholangiocarcinogenesis include mechanical damage to bile duct epithelia resulting from the feeding activities of the worms , infection-related inflammation , and pathological effects from their excretory-secretory products ( ESPs ) , consisting of a complex mixture of proteins and other metabolites 4 ., These coordinated actions provoke epithelial desquamation , adenomatous hyperplasia , goblet cell metaplasia , periductal fibrosis , and granuloma formation , all contributing to the production of a conducive tumor microenvironment ., Eventually , malignant cholangiocytes undergo uncontrolled proliferation that leads to the initiation and progression of CCA 5 ., Like other parasitic helminths , liver flukes release ESPs continuously during infection , in this case into bile ducts and surrounding liver tissues ., These substances play pivotal roles in host\u2013parasite interactions 6 ., Exposure of human CCA cells and normal biliary epithelial cells to liver fluke ESPs results in diverse pathophysiological responses , including proliferation and inflammation 7 , 8 ., Additionally , profiling of differential cancer-related microRNA ( miRNA ) expression has revealed that the miRNAs involved in cell proliferation and
the prevention of tumor suppression are dysregulated in both CCA cells and normal cholangiocytes exposed to C . sinensis ESPs 9 ., These results suggest that there are ESP-responsive pathologic signal cascades that are common to both cancerous and non-cancerous bile duct epithelial cells ., Another aspect of carcinogenic transformation is the tissue microenvironment , which consists of the extracellular matrix ( ECM ) and surrounding cells and is a crucial factor in the regulation of cancer cell motility and malignancy 10 ., The diverse responses of tumor cells , cholangiocytes , and immune cells in the CCA microenvironment cooperatively affect cancer progression , including invasion and\/or metastasis 11 ., Chronic inflammation of the bile duct due to the presence of liver flukes is also closely associated with the development of CCA , because it causes biliary epithelial cells to produce various cytokines and growth factors including interleukin-6 , -8 ( IL-6 , -8 ) , transforming growth factor-\u03b2 ( TGF-\u03b2 ) , tumor necrosis factor-\u03b1 ( TNF-\u03b1 ) , platelet-derived growth factor and epithelial growth factor 12 ., Exposure to cytokines and growth factors induces their endogenous production by CCA cells through a crosstalk loop , enhancing malignant features such as invasion , metastasis , chemoresistance and epithelial-mesenchymal transition ( EMT ) 13 ., Cytokines driven by chronic inflammation contribute to the pathogenesis of CCA and should be collectively considered in studies on the tumor microenvironment ., We previously established a three-dimensional ( 3D ) cell culture assay that contains a gradient of C . sinensis ESPs in the ECM and mimics the complex CCA microenvironment ., In this previous study , CCA cells ( HuCCT1 ) were morphologically altered to form aggregates in response to C .
sinensis ESPs , and these CCA cells could only invade the type I collagen ( COL1 ) hydrogel scaffold in response to ESP gradient treatment ., This response was accompanied by an elevation of focal adhesion protein expression and the secretion of matrix metalloproteinase ( MMP ) isoforms 14 , suggesting that C . sinensis ESPs may promote CCA progression ., Additionally , this study revealed the chemoattractant effect of C . sinensis ESP gradients on CCA cells ., To expand this work , we explored a more complicated tumor microenvironment subjected to ESPs from C . sinensis ., In the present study , we developed an in vitro clonorchiasis-associated tumor microenvironment model that consisted of the following factors: ( 1 ) a 3D culture system of normal cholangiocytes using a microfluidic device as 3D quiescent biliary ductal plates on ECM; ( 2 ) physiological co-culture of CCA cells with normal cholangiocytes coupled to the directional application of C . sinensis ESPs to reconstitute a 3D CCA microenvironment; and ( 3 ) visualization and assessment of the interactions between tumor cells and their microenvironments to determine how the malignant progression of CCA corresponds with carcinogenic liver fluke infestation ( Fig 1 ) ., To reconstitute the microenvironment of a normal bile duct on an ECM , H69 cells were cultured three dimensionally on a COL1 hydrogel within a microfluidic device ., The cells formed an epithelial layer and sprouted 3-dimensionally into the hydrogel one day after seeding ( Fig 2A ) ., The sprouts formed 3D tube-like structures resembling newly-developed small bile ducts ( Fig 2A and 2B ) ., This morphological change can be referred to as cholangiogenesis , that is , hepatic neoductule formation from an existing biliary ductal plate 15 ., The sprouting was suppressed in this study to form quiescent mature biliary ductal plates by varying the composition of the culture medium , namely complete , fetal bovine serum-free ( FBS ( - ) ) , and
FBS-free\/epidermal growth factor-depleted ( FBS ( - ) \/EGF ( - ) ) ., In complete culture medium , H69 cells dynamically sprouted and expanded into the COL1 hydrogel and the boundary between the biliary ductal plate and the COL1 hydrogel ( Fig 2C , Day 4 ) moved far from the initial cell seeding point ( Fig 2C , Day 1 ) ., In the absence of FBS and EGF , cholangiogenesis decreased dramatically ( Fig 2D ) ., Additionally , the H69 cells in FBS-free\/EGF-depleted medium were in G0 phase ( Fig 2E ) and expressed a basolateral polarity marker ( Integrin \u03b16 ) along the region of the COL1 hydrogel scaffold that was in contact with the cell layer ( Fig 2F ) 16 ., Therefore , we designated this cluster of H69 cells as representing a quiescent 3D biliary ductal plate ., The mechanical properties of the COL1 hydrogel were modulated by altering the initial pH or concentration to identify other factors that could suppress the H69 cell sprouting ., When the pH of the COL1 solution prior to gelation was basic ( pH 11 ) , the resulting hydrogel was stiffer than one gelled at pH 7 . 4 and one produced with a high concentration ( 2 . 5 mg\/mL ) 17 ., H69 cells on stiffer COL1 hydrogel showed more numerous sprouts with larger surface areas ( Fig 3A and 3B ) ., H69 cells cultured on normal COL1 hydrogels ( 2 . 0 mg\/mL and pH 7 . 4 ) and in FBS ( - ) \/EGF ( - ) medium formed a quiescent biliary ductal plate ., To mimic C . sinensis infestation , ductal plates were treated with ESPs ( 4 \u03bcg\/mL ) by either application to the channel containing the H69 cells ( direct application ) or to the other channel of the microfluidic device ( gradient application ) ., After gradient application , ESPs diffused through the COL1 hydrogel and toward the basal side of the biliary duct plate , forming a complex concentration profile ., Computational simulation results showed that after 24 hours ESP concentration reached 3 \u03bcg\/mL at the apical side of the biliary ductal plate ( 2~2 . 
5 \u03bcg\/mL at basal ) upon direct application , and 1 . 5~2 \u03bcg\/mL at the basal side of the biliary ductal plate ( 1 \u03bcg\/mL at apical ) upon gradient application ( Fig 4A ) ., Based on the observation that the H69 cells produced three stratified layers and each layer is 10 \u03bcm thick , ~40% of the local ESP concentration difference was applied to a single HuCCT1 cell that entered the biliary ductal plate under gradient application ., The 3D biliary ductal plate was stably maintained and remained healthy after either type of ESP treatment and neither treatment induced cholangiogenesis ( Fig 4B and 4C ) ., HuCCT1 CCA cells labeled with GFP were seeded onto the apical side of the 3D quiescent biliary ductal plate formed by H69 cells under the culture conditions defined above ., The HuCCT1 cells were then exposed to ESPs ( direct or gradient ) for 3 days ., After the ESP treatment , the HuCCT1 cells actively invaded the biliary ductal plate and reached the COL1 hydrogel ( Fig 5A ) ., After gradient ESP application , 1 . 71-fold and 1 . 85-fold more HuCCT1 cells invaded the biliary ductal plate compared to the non-treated control or those treated with direct application , respectively ( Fig 5B and 5D ) ., Interestingly , the numbers of individualized HuCCT1 cells in the biliary duct layer and COL1 hydrogel were similar , both after gradient and direct treatment ( Fig 5C and 5D ) ., It has been reported that elevated plasma levels of IL-6 and TGF-\u03b21 are correlated with histopathological changes in the livers of C .
sinensis-infected mice 18 , 19 ., Moreover , interaction of these cytokines appears to be associated with an increased malignancy of CCA cells 13 ., These findings prompted us to examine whether IL-6 and TGF-\u03b21 were involved in invasion and migration of HuCCT1 cells in our system ., First , we measured IL-6 and TGF-\u03b21 levels in the culture supernatants of ESP-treated H69 cells using ELISA , and found that secretion of both IL-6 and TGF-\u03b21 was significantly elevated at 12 hours post-ESP treatment , compared to the non-treated control ( Fig 6A ) ., Elevated secretion of IL-6 was maintained at 24 hours and increased further by 48 hours , while the TGF-\u03b21 secretion level increased in a time-dependent manner ., To assess the crosstalk of IL-6 and TGF-\u03b21 from H69 cells with co-cultured HuCCT1 cells , the induction of IL-6 and TGF-\u03b21 in ESP-treated H69 cells was attenuated by means of small interfering ( si ) RNA transfection ., The culture supernatants from each of four groups of 48 hour-ESP-treated H69 cells ( transfected with siRNAs of scrambled oligonucleotide or with siRNAs for IL-6 , TGF-\u03b21 or both ) were substituted for 24 hour-ESP-treated medium in HuCCT1 cell cultures ., Then , these HuCCT1 cells were incubated further for 48 hours and their culture supernatants were analyzed using ELISA ., The levels of both ESP-induced IL-6 and TGF-\u03b21 secretion by HuCCT1 cells , as well as H69 cells , were significantly decreased in the respective siRNA transfectants , when compared with those of untransfected and scrambled siRNA-transfected . 
cells ( Fig 6B ) ., Moreover , a greater reduction in the secretion of these cytokines was observed when using the supernatant from cells treated with siRNAs for both IL-6 and TGF-\u03b21 than when using ones from cells treated with either siRNA alone , suggesting that an IL-6\/TGF-\u03b21 autocrine\/paracrine signaling network may be in effect between non-cancerous and cancerous co-cultured cells ., Next , we examined ESP-mediated changes in E- and N-cadherin expression in HuCCT1 and H69 cells , which are , respectively , epithelial and mesenchymal markers regarded as functionally significant factors in cancer progression ., Decreased amounts of immunoreactive E-cadherin were detected in HuCCT1 cells at 24 hours post-ESP treatment , and this decreased further at 48 hours ., Increased expression of N-cadherin was obvious at 24 hours post-ESP treatment and was maintained up to 48 hours ( Fig 7A ) ., However , in H69 cells , the expression of E-cadherin was significantly elevated at 24 hours and increased further subsequently , while there was no substantial change in N-cadherin expression during the same period of ESP treatment ( Fig 7B ) ., This suggests that ESPs may contribute to facilitating EMT-like changes only in HuCCT1 cells , leading to the promotion of migration\/invasion ., Finally , we evaluated the involvement of IL-6 and TGF-\u03b21 in the cadherin switching of HuCCT1 cells treated with culture supernatants from siRNA-treated cells described above ., Silencing of IL-6 and TGF-\u03b21 markedly attenuated the reduction of E-cadherin and the elevation of N-cadherin expression induced by ESPs ., The levels of E- and N-cadherin expression in double silencing supernatant-treated HuCCT1 cells were almost the same as those of the non-treated control ( Fig 7C ) , indicating that the IL-6 and TGF-\u03b21 expression induced by the ESPs contributed to EMT progression in HuCCT1 cells ., The microfluidic model of a CCA tumor microenvironment used 
in this study consisted of a quiescent 3D biliary ductal plate formed by H69 cells ( cholangiocytes ) on a COL1 ECM that has been stimulated by C . sinensis ESPs ., HuCCT1 cells ( CCA cells ) responded to microenvironmental factors actively by proliferating , migrating and invading the 3D biliary ductal plate and passing into the neighboring ECM ., HuCCT1 cells exhibited different cellular behaviors when co-cultured on the biliary duct layer , compared to when they were cultured on ECM alone , as described previously 14 ., As a CCA tumor microenvironment factor , characteristics of normal cholangiocytes were carefully investigated , and compared with previous reports 20 ., Ishida Y et al . reported ductular morphogenesis and functional polarization of human biliary epithelial cells when embedded three dimensionally in a COL1 hydrogel 21 ., Tanimizu N et al . also reported the development of a 3D tubular-like structure during the differentiation of mouse liver progenitor cells 16 , 22 ., However , these traditional dish-based culture platforms only generated 3D tube-like structures whose apical-basal polarities differed from those observed in vivo , and which were unsuitable for co-culturing with CCA cells to monitor tumor malignancy changes upon invasion\/migration in a tumor milieu ., In the microfluidic 3D culture platform described here , H69 cells formed a cholangiocyte layer and sprouted into the COL1 hydrogel ., This mimicked an asymmetrical ductal structure at the parenchymal layer on the portal vein side and a primitive ductal structure during the early stage of biliary tubulogenesis during cholangiogenesis 15 ., H69 cholangiocytes lining small bile ducts are layer-forming biliary epithelial cells with a potential proliferative capacity , but under normal conditions are quiescent or in the G0 state of the cell cycle 23 ., The mechanical and biochemical properties of the ECM and culture medium within the microenvironment of the biliary epithelium were 
characterized and shown to be conducive to the formation of a stable 3D biliary ductal plate and primitive ductal structure , which are crucial steps in cholangiogenesis ., The reconstituted biliary ductal plate on the ECM formed a 3D CCA tumor microenvironment , which was then seeded with CCA tumor cells and treated with C . sinensis ESPs ., Many publications have described the co-culture of tumor cells with stromal cells ( mainly fibroblasts ) and reported upregulated tumor cell malignancy; however , only a few of these studied CCA cells 24 ., One study co-cultured various CCA cells ( HuCCT1 and MEC ) with hepatic stellate cells as a CCA stroma and reported increased invasion and proliferation by CCA cells 25 ., To our knowledge , this is the first attempt to construct a co-culture system that facilitates the direct contact of normal and CCA tumor cells from a single type of tissue; both cells were cultured separately and then combined to produce the pathophysiological effect ., Therefore , our work describes an advanced method for orchestrating complex CCA microenvironments , especially in 3D ., The first components of the CCA microenvironment are growth factors , which are present in the culture medium and are candidates for promoting the proliferation , differentiation and migration of cholangiocytes ., FBS should be preferentially excluded to arrest the cells in a quiescent state in order to assess the direct effects of C . 
sinensis ESPs on CCA malignancy ., Although the precise effect of EGF on normal cholangiocytes remains to be elucidated , key roles of EGF in biliary duct development and cholangiocyte differentiation , including cholangiogenesis and neoductule formation from an existing biliary ductal plate , have been reported 16 , 26 ., Additionally , signaling via EGF and its receptor ( EGFR ) facilitates the progression of hepato-cholangiocellular cancer 27 , 28 ., Thus , we excluded EGF from our 3D co-culture model system because the complex and diverse roles of EGF might mask direct ESP-dependent effects on the CCA microenvironment ., The second component in the microenvironment was the COL1 ECM ., The mechanical properties of the COL1 hydrogel , such as fibril diameter and stiffness , can be altered by controlling the collagen concentrations or adjusting the pH of the collagen solution prior to gel casting ., High pH reduces the diameter of COL1 nanofibers after gelation and this predominantly increases the stiffness of the COL1 hydrogel; the linear modulus of a COL1 hydrogel produced from pH 11 and 2 . 0 mg\/mL is 2 . 7-fold and 3 . 1-fold higher than those from pH 7 . 4 and 2 . 5 mg\/mL and from pH 7 . 4 and 2 . 0 mg\/mL , being approximately 53 kPa , 20 kPa and 17 kPa , respectively 17 ., It has been reported that hepatobiliary cells express diverse cellular behaviors with respect to proliferation , differentiation , adhesion , and migration under stiff ECM conditions , with close association in development , homeostasis and disease progression 29 , 30 ., The morphological changes in H69 cells on stiff COL1 probably reflect the highly-activated proliferation of cholangiocytes ( ductular reaction ) in liver fibrosis , and \u201catypical\u201d proliferation of cholangiocytes commonly seen in patients with prolonged cholestatic liver diseases , such as primary sclerosing cholangitis or primary biliary cirrhosis 31 , 32 ., C . 
sinensis ESPs were the third component of the CCA microenvironment considered in this study , and ESP stimuli induced 3D morphological changes in the biliary ductal plate ., Some H69 cells in the 3D biliary ductal plate grown on a COL1 hydrogel interacted with it by sprouting; however , the majority maintained the layer structure during the entire experimental period , independent of direct or gradient ESP application ( Fig 4B and 4C ) ., While no obvious changes in N-cadherin expression in H69 cells were observed , the expression of E-cadherin in H69 cells increased gradually during the experiments ( Fig 7B ) , implying that the ESPs may cause H69 cells to exhibit more epithelial characteristics ., However , we do not rule out the possibility that more intense stimulation , such as with a higher dose of ESPs and\/or longer exposure times , could produce EMT-like effects in H69 cells ., We determined that ESPs are implicated in the acquisition of CCA malignant characteristics , namely increased invasion and migration ., Single cell invasion by HuCCT1 cells was similarly increased by both direct and gradient ESP application , while migration increased significantly upon gradient application only ., The concentration profile produced by computational simulation explained these differential effects on HuCCT1 cells; the average concentration of ESPs over the entire area of the 3D biliary ductal plate was estimated at over 1 . 
5 \u03bcg\/mL and the H69 cells forming the biliary ductal plate were exposed to high concentrations of ESPs ( over 800 ng\/mL ) , sufficient to induce significantly increased levels of IL-6 and TGF-\u03b21 ( Fig 4A , red dotted line ) ., In contrast , the concentration of ESPs over the channel where HuCCT1 cells were seeded was considerably higher after direct application versus gradient application; yet , the magnitude of the effect on both migration and invasion was smaller ( Fig 5 ) ., These results suggest multiple pathological effects of ESPs in the CCA microenvironment , such as the stimulation of normal tissues near the CCA and the chemoattraction of CCA cells ., It has been reported previously that C . sinensis ESP-triggered CCA cell migration\/invasion is mediated by ERK1\/2-NF-\u03baB-MMP-9 and integrin \u03b24-FAK\/Src pathways , suggesting that ESPs may function as detrimental modulators of the aggressive progression of liver fluke-associated CCA 33 , 34 ., In the present study , the morphological features of HuCCT1 cells exposed to ESPs in 3D co-culture with H69 cells differed from those of HuCCT1 cells cultured alone ., Co-cultured HuCCT1 cells exhibited increased motility , as represented by single cell invasion , while HuCCT1 cells cultured alone exhibited aggregation in the 3D culture 14 ., This implies that the interaction between HuCCT1 and H69 cells contributes to a change in HuCCT1 cell phenotype ., Cytokines generated by various types of cells within the tumor microenvironment play pro- or anti-tumorigenic roles , depending on the balance of different immune mediators and the stage of tumor development 35 ., During liver fluke infection , chronically-inflamed epithelia are under constant stimulation to participate in the inflammatory response by continuous secretion of chemokines and cytokines ., This creates a vulnerable microenvironment that may promote malignant transformation and even cholangiocarcinogenesis ., IL-6 is considered a proinflammatory 
cytokine that typically has pro-tumorigenic effects during infection ., Liver cell lines , including H69 cells , preferentially take up O . viverrini ESPs by endocytosis , resulting in proliferation and increased secretion of IL-6 7 ., Elevated plasma concentrations of IL-6 are associated with a significant dose-dependent increase in the risk of opisthorchiasis-associated advanced periductal fibrosis and CCA 36 ., The TGF-\u03b2-mediated signaling pathway is involved in all stages of liver disease progression from initial inflammation-related liver injury to cirrhosis and hepatocellular carcinoma 37 ., A crude antigen from C . sinensis differentiates macrophage RAW cells into dendritic-like cells and upregulates ERK-dependent secretion of TGF-\u03b2 , which modulates the host\u2019s immune responses 38 ., C . sinensis infection activates TGF-\u03b21\/Smad signaling promoting fibrosis in the livers of infected mice 19 ., Additionally , it has been reported that the E\/N-cadherin switch via TGF-\u03b2-induced EMT is correlated with cancer progression of CCA cells and the survival of patients with extrahepatic CCA 39 , 40 ., Consistent with these studies , we observed that the decreased E-cadherin and increased N-cadherin expression in ESP-exposed HuCCT1 cells ( Fig 7A ) was associated with increased secretion of IL-6 and TGF-\u03b21 by H69 cells ( Fig 6A ) as well as by HuCCT1 cells , as reported previously 41 ., The cytokine-mediated interaction between H69 and HuCCT1 cells was evaluated by means of siRNAs , whereby the levels of IL-6 and TGF-\u03b21 secretion were suppressed in the culture supernatants of siRNA-IL-6 and -TGF-\u03b21 H69 transfectants ( Fig 6B ) ., The suppression of these cytokines was correlated with an impairment of the change in E-\/N-cadherin expression in HuCCT1 cells triggered by the ESPs ( Fig 7C ) ., This suggests that local accumulation of these cytokines , as the result of constitutive and dysregulated secretion by both cell types , 
promotes a more aggressive pathogenic process in the tumor milieu ., Therefore , it is tempting to speculate that ESPs facilitate a positive feedback loop of elevated inflammatory cytokine secretion in both non-cancerous and cancerous cells , triggering an E\/N-cadherin switch in HuCCT1 cells that subsequently increases invasion and\/or migration mediated by the EMT ., We will conduct future studies to explore this possibility ., In conclusion , HuCCT1 cells exhibited elevated single cell invasion after both direct and gradient ESP application , with increased migration occurring only after gradient treatment ( ESPs applied to the basal side ) ., These changes were caused by coordinated interactions between normal cholangiocytes , CCA cells and C . sinensis ESPs , which resulted in increased secretion of IL-6 and TGF-\u03b21 and a cadherin switch in ESP-exposed cells ., Therefore , the combined effects of these detrimental stimuli in both cancerous and non-cancerous bile duct epithelial cells during C . sinensis infection may facilitate a more aggressive phenotype of CCA cells , such as invasion\/migration , resulting in more malignant characteristics of the CCA tumor ., Our findings broaden our understanding of the molecular mechanism underlying the progression of CCA caused by liver fluke infection ., These observations provide a new basis for the development of chemotherapeutic strategies to control liver fluke-associated CCA metastasis and thereby help to reduce its high mortality rate in endemic areas ., Cell culture medium components were purchased from Life Technologies ( Grand Island , NY ) , unless otherwise indicated ., Polyclonal antibodies against the following proteins were purchased from the indicated sources: Ki-67 and integrin \u03b16 ( Abcam , Cambridge , UK ) ; E-cadherin ( BD Biosciences , San Jose , CA ) ; N-cadherin ( Santa Cruz Biotechnology , Santa Cruz , CA ) ; glyceraldehyde-3-phosphate dehydrogenase ( GAPDH; AbFrontier Co . 
, Seoul , Korea ) ., Horseradish peroxidase ( HRP ) -conjugated secondary antibodies were obtained from Jackson ImmunoResearch Laboratory ( West Grove , PA ) ., All other chemicals were obtained from Sigma-Aldrich ( St . Louis , MO ) ., Human HuCCT1 cholangiocarcinoma cells ( originally established by Miyagiwa et al . in 1989 42 ) were maintained in RPMI 1640 medium supplemented with 1% ( v\/v ) penicillin\/streptomycin and 10% FBS ., Human H69 cholangiocyte cells , which are SV40-transformed bile duct epithelial cells derived from non-cancerous human liver 43 , were kindly provided by Dr . Dae Ghon Kim of the Department of Internal Medicine , Chonbuk National University Medical School , Jeonju , Korea ., H69 cells were grown in DMEM\/F12 ( 3:1 ) containing 10% FBS , 100 U\/mL penicillin , 100 \u03bcg\/ml streptomycin , 5 \u03bcg\/ml of insulin , 5 \u03bcg\/ml of transferrin , 2 . 0 \u00d7 10\u22129 M triiodothyronine , 1 . 8 \u00d7 10\u22124 M adenine , 5 . 5 \u00d7 10\u22126 M epinephrine , 1 . 1 \u00d7 10\u22126 M hydrocortisone , and 1 . 6 \u00d7 10\u22126 M EGF ., Both cell types were cultured at 37\u00b0C in a humidified atmosphere containing 5% CO2 ., Clonal cell lines that stably expressed green fluorescent protein ( GFP ) were generated by transfection of HuCCT1 cells ., Briefly , HuCCT1 cells were grown to ~70% confluence and were transfected using Lipofectamine 2000 ( Invitrogen , Carlsbad , CA ) and a pGFP-C1 vector ( Clontech Laboratories , Inc . , Palo Alto , CA ) for 24 hours ., To generate stable lines , the cells were cultured for 3 weeks in a complete medium containing 1 mg\/ml G418 disulfate salt ( Sigma-Aldrich ) that was changed every 2~3 days ., Colonies with uniform GFP fluorescence were screened and two clonal cell lines with approximately similar levels of GFP overexpression were chosen for further experiments ., Adult C . 
sinensis specimens for the preparation of ESPs were obtained from infected , sacrificed New Zealand albino rabbits ., Animal care and experimental procedures were performed in strict accordance with the national guidelines outlined by the Korean Laboratory Animal Act ( No . KCDC-122-14-2A ) of the Korean Centers for Disease Control and Prevention ( KCDC ) ., The KCDC-Institutional Animal Care and Use Committee ( KCDC-IACUC ) \/ethics committee reviewed and approved the ESP preparation protocols ( approval identification number: KCDC-003-11 ) ., The ESPs from C . sinensis adult worms were prepared as described previously 41 ., Briefly , adult worms were recovered from the bile ducts of male New Zealand albino rabbits ( 12 weeks old ) orally infected with ~500 metacercariae 12 weeks earlier ., Worms were washed several times with cold phosphate-buffered saline ( PBS ) to remove any host contaminants ., Five fresh worms were cultured in 1 mL of prewarmed PBS containing a mixture of antibiotics and protease inhibitors ( Sigma-Aldrich ) for 3 hours at 37\u00b0C in a 5% CO2 environment ., Then the culture fluid was pooled , centrifuged , concentrated with a Centriprep YM-10 ( Merck Millipore , Billerica , MA ) membrane concentrator , and filtered through a sterile 0 . 
2-\u03bcm syringe membrane ., After measuring the ESP protein concentration , the aliquots were stored at \u221280\u00b0C until use ., The microfluidic device was prepared as described previously 14 ., Briefly , the microfluidic devices were produced by curing polydimethylsiloxane ( PDMS , Sylgard 184 , Dow Chemical , Midland , MI ) overnight on a micro-structure-patterned wafer at 80\u00b0C ., The device was punched to produce ports for the hydrogel and cell suspension injections ., After sterilization , the device and a glass coverslip ( 24 \u00d7 24 mm; Paul Marienfeld , Germany ) were permanently bonded to each other and the surfaces of the microchannels in the device were coated with poly-D-lysine by treatment with a 1 mg\/mL solution ., The devices were stored under sterile conditions until use ., The gel region of the microfluidic device was filled with an unpolymerized COL1 solution ( 2 . 0 mg\/mL , pH 7 . 4 ) and then placed in a 37\u00b0C humidified chamber to polymerize the hydrogel ., EGF-depleted H69 medium containing 1% FBS was injected into the medium channels to prevent shrinkage of the COL1 hydrogel , and the devices were stored at 37\u00b0C in a 5% CO2 incubator until cell seeding ., H69 cells ( 5 \u00d7 10^5 cells ) suspended in conditional medium ( FBS-free , EGF-depleted ) were loaded into one medium port ., After the medium channel was filled with the cell suspension by hydrostatic flow , the device was positioned vertically for 2 hours at 37\u00b0C in a 5% CO2 incubator to allow the cells to attach to the COL1 hydrogel wall by gravity ., One day after seeding with H69 cells , HuCCT1-GFP cells suspended in conditional medium at 10 \u00d7 10^5 cells\/mL were seeded into the cell channel in a manner identical to the H69 cells ., ESPs were diluted in conditional medium to a concentration of 4 \u03bcg\/mL and then added either to the cell channel ( direct application ) or to the medium channel ( gradient application ) ., The medium was replaced every day with 
fresh conditional medium supplemented with ESPs ( Fig 1B ) ., H69 cells cultured in a microfluidic device were washed twice with PBS and fixed with a 4% paraformaldehyde solution for 30 minutes ., The cells were treated with a 0 . 1% Triton X-100 solution for 10 minutes to permeabilize the cell membranes ., The cells were incubated with 1% bovine serum albumin and primary antibodies against Ki67 or Integrin \u03b16 ( 1:1000 dilution ) , followed by Alexa Fluor 488 secondary antibody ( 1:1000 dilution; Invitrogen ) ., After staining with 4 , 6-Diamidino-2-Phenylindole ( DAPI , 1:1000 dilution , Invitrogen ) and rhodamine phalloidin ( to stain F-actin , 1:200 dilution , Invitrogen ) , the cells were examined by a confocal laser-scanning microscope ( LSM700; Carl Zeiss , Jena , Germany ) and by a fluorescence microscope ( Axio Observer Z1; Carl Zeiss , Jena , Germany ) ., We used Silencer Select siRNAs ( Ambion ) against IL-6 and TGF-\u03b21 , and a scrambled oligonucleotide as a negative control , from Thermo Fisher Scientific ( Waltham , MA ) ., H69 or HuCCT1 cells were seeded on a 24-well culture plate and transiently transfected with each or both target-specific siRNAs using Lipofectamine RNAiMAX ( Invitrogen ) according to the manufacturer\u2019s protocols ., Each siRNA transfection was performed in quadruplicate ., After 24 hours , the transfection mixture on the cells was replaced with fresh culture medium ., At 60 hours after transfection , H69 cells were gradually depleted of FBS , followed by incubation in conditional medium supplemented with 800 ng\/mL ESPs for 48 hours ., The culture supernatants from H69 cells were collected and clarified by brief centrifugation ., Then , the 24 hour-ESP ( 800 ng\/mL ) -treated medium of HuCCT1 cell","headings":"Introduction, Results, Discussion, Materials and methods","abstract":"Clonorchis sinensis is a carcinogenic human liver fluke , prolonged infection with which provokes chronic inflammation , epithelial hyperplasia , periductal fibrosis 
, and even cholangiocarcinoma ( CCA ) ., These effects are driven by direct physical damage caused by the worms , as well as chemical irritation from their excretory-secretory products ( ESPs ) in the bile duct and surrounding liver tissues ., We investigated the C . sinensis ESP-mediated malignant features of CCA cells ( HuCCT1 ) in a three-dimensional microfluidic culture model that mimics an in vitro tumor microenvironment ., This system consisted of a type I collagen extracellular matrix , applied ESPs , GFP-labeled HuCCT1 cells and quiescent biliary ductal plates formed by normal cholangiocytes ( H69 cells ) ., HuCCT1 cells were attracted by a gradient of ESPs in a concentration-dependent manner and migrated in the direction of the ESPs ., Meanwhile , single cell invasion by HuCCT1 cells increased independently of the direction of the ESP gradient ., ESP treatment resulted in elevated secretion of interleukin-6 ( IL-6 ) and transforming growth factor-beta1 ( TGF-\u03b21 ) by H69 cells and a cadherin switch ( decrease in E-cadherin\/increase in N-cadherin expression ) in HuCCT1 cells , indicating an increase in epithelial-mesenchymal transition-like changes by HuCCT1 cells ., Our findings suggest that C . 
sinensis ESPs promote the progression of CCA in a tumor microenvironment via the interaction between normal cholangiocytes and CCA cells ., These observations broaden our understanding of the progression of CCA caused by liver fluke infection and suggest a new approach for the development of chemotherapeutics for this infectious cancer .","summary":"The oriental liver fluke , Clonorchis sinensis , is a biological carcinogen of humans and a cause of death among infectious cancer patients in China and Korea ., Its chronic infection promotes cholangiocarcinogenesis due to direct contact of host tissues with the worms and their excretory-secretory products ( ESPs ) ; however , the specific mechanisms underlying this pathology remain unclear ., To assess its contribution to the progression of cholangiocarcinoma ( CCA ) , we developed a 3-dimensional ( 3D ) in vitro culture model that consists of CCA cells ( HuCCT1 ) in direct contact with normal cholangiocytes ( H69 ) , which are subsequently exposed to C . sinensis ESPs; therefore , this model represents a C . sinensis-associated CCA microenvironment ., Co-cultured HuCCT1 cells exhibited increased motility in response to C . sinensis ESPs , suggesting that this model may recapitulate some aspects of tumor microenvironment complexity ., Proinflammatory cytokines such as IL-6 and TGF-\u03b21 secreted by H69 cells exhibited a crosstalk effect regarding the epithelial-mesenchymal transition of HuCCT1 cells , thus promoting an increase in the metastatic characteristics of CCA cells ., Our findings enable an understanding of the mechanisms underlying the etiology of C . 
sinensis-associated CCA , and , therefore , this approach will contribute to the development of new strategies for the reduction of its high mortality rate .","keywords":"biliary system, invertebrates, amorphous solids, medicine and health sciences, innate immune system, liver, immune physiology, cytokines, engineering and technology, helminths, gene regulation, immunology, carcinomas, cancers and neoplasms, gastrointestinal tumors, animals, trematodes, oncology, physiological processes, developmental biology, clonorchis sinensis, materials science, molecular development, adenocarcinomas, gels, microfluidics, fluidics, small interfering rnas, gene expression, flatworms, cholangiocarcinoma, immune system, bile ducts, biochemistry, rna, eukaryota, anatomy, nucleic acids, physiology, clonorchis, genetics, secretion, biology and life sciences, materials, physical sciences, non-coding rna, mixtures, organisms","toc":null} +{"Unnamed: 0":1652,"id":"journal.pcbi.1006657","year":2019,"title":"A data-driven interactome of synergistic genes improves network-based cancer outcome prediction","sections":"Metastases at distant sites ( e . g . in bone , lung , liver and brain ) is the major cause of death in breast cancer patients 1 ., However , it is currently difficult to assess tumor progression in these patients using common clinical variables ( e . g . tumor size , lymph-node status , etc . 
) 2 ., Therefore , for 80% of these patients , chemotherapy is prescribed 3 ., Meanwhile , randomized clinical trials showed that at least 40% of these patients survive without chemotherapy and thus unnecessarily suffer from the toxic side effects of this treatment 3 , 4 ., For this reason , substantial efforts have been made to derive molecular classifiers that can predict clinical outcome based on gene expression profiles obtained from the primary tumor at the time of diagnosis 5 , 6 ., An important shortcoming in molecular classification is that \u2018cross-study\u2019 generalization is often poor 7 ., This means that prediction performance decreases dramatically when a classifier trained on one patient cohort is applied to another one 8 ., Moreover , the gene signatures found by these classifiers vary greatly , often sharing only a few or no genes at all 9\u201311 ., This lack of consistency casts doubt on whether the identified signatures capture true \u2018driver\u2019 mechanisms of the disease or rather subsidiary \u2018passenger\u2019 effects 12 ., Several reasons for this lack of consistency have been proposed , including small sample size 11 , 13 , 14 , inherent measurement noise 15 and batch effects 16 , 17 ., Apart from these technical explanations , it is recognized that traditional models ignore the fact that genes are organized in pathways 18 ., One important cancer hallmark is that perturbation of these pathways may be caused by deregulation of disparate sets of genes which in turn complicates marker gene discovery 19 , 20 ., To alleviate these limitations , the classical models have been superseded by Network-based Outcome Predictors ( NOPs ) which incorporate gene interactions in the prediction model 21 ., NOPs have two fundamental components: aggregation and prediction ., In the aggregation step , genes that interact , belong to the same pathway or otherwise share functional relation are aggregated ( typically by averaging expressions ) into so-called 
\u201cmeta-genes\u201d 22 ., This step is guided by a supporting data source describing gene-gene interactions such as cellular pathway maps or protein-protein interaction networks ., In the consequent prediction step , meta-genes are selected and combined into a trained classifier , similar to a traditional classification approach ., Several NOPs have been reported to exhibit improved discriminative power , enhanced stability of the classification performance and signature , and better representation of underlying driving mechanisms of the disease 18 , 23\u201325 ., In recent years , a range of improvements to the original NOP formulation has been proposed ., In the prediction step , various linear and nonlinear classifiers have been evaluated 26 , 27 ., Problematically , the reported accuracies are often an overestimation as many studies neglected to use a cross-study evaluation scheme which more closely resembles the real-world application of these models 7 ., Also for the aggregation step , which is responsible for forming meta-genes from gene sets , several distinct approaches have been proposed such as clustering 23 and greedy expansion of seed genes into subnetworks 18 ., Moreover , in addition to simple averaging , alternative means by which genes can be aggregated , such as linear or nonlinear embeddings , have been proposed 17 , 28 ., Most recent work combines these steps into a unified model 8 , 29 ., Recent efforts that extend these concepts to sequencing data by exploiting the concept of cancer hallmark networks have also been proposed 30 ., Despite these efforts and initial positive findings , there is still much debate over the utility of NOPs compared to classical methods , with several studies showing no performance improvement 21 , 31 , 32 ., Perhaps even more striking is the finding that utilizing a permuted network 32 or aggregating random genes 10 performs on par with networks describing true biological relationships ., Several meta-analyses attempting to 
establish the utility of NOPs have appeared with contradicting conclusions ., Notably , Staiger et al . compared the performance of a nearest mean classifier 33 in this setting and concluded that network-derived meta-genes are not more predictive than individual genes 21 , 32 ., This is in contradiction to Roy et al . , who achieved improvements in outcome prediction when genes were ranked according to their t-test statistics compared to their PageRank property 34 in a PPI network 28 , 35 ., It is thus still an open question whether NOPs truly improve outcome prediction in terms of predictive performance , cross-study robustness or interpretability of the gene signatures ., A critical\u2014yet often neglected\u2014aspect in the successful application of NOPs is the contribution of the biological network ., In this regard , it should be recognized that many network links are unreliable 36 , 37 , missing 38 or redundant 39 and considerable efforts are being made to refine these networks 38 , 40\u201342 ., In addition , many links in these networks are experimentally obtained from model organisms and therefore may not be functional in human cells 43\u201345 ., Finally , most biological networks capture only a part of a cell\u2019s multifaceted system 46 ., This incomplete perspective may not be sufficient to link the wide range of aberrations that may occur in a complex and heterogeneous disease such as breast cancer 47 , 48 ., Taken together , these issues raise concerns regarding the extent to which the outcome predictors may benefit from inclusion of common biological networks in their models ., In this work , we propose to construct a network ab initio that is specifically designed to improve outcome prediction in terms of cross-study generalization and performance stability ., To achieve this , we will effectively turn the problem around: instead of using a given biological network , we aim to use the labelled gene expression datasets to identify the network of genes that 
truly improves outcome prediction (see Fig 1 for a schematic overview). Our approach relies on the identification of synergistic gene pairs, i.e. genes whose joint prediction power exceeds what is attainable by either gene individually 49. To identify these pairs, we employed grid computing to evaluate all 69 million pairwise combinations of genes. The resulting network, called SyNet, is specific to the dataset and phenotype under study and can be used to infer a NOP model with improved performance. To obtain SyNet, and to allow for rigorous cross-study validation, a dataset of substantial size is required. For this reason, we combined 14 publicly available datasets into a compendium encompassing 4129 survival-labeled samples. To the best of our knowledge, the data combined in this study represent the largest breast cancer gene expression compendium to date. Further, to ensure unbiased evaluation, sample assignments in the inner as well as the outer cross-validation folds are kept identical across all assessments throughout the paper. In the remainder of this paper, we demonstrate that integrating genes based on SyNet provides superior performance and stability of predictions when these models are tested on independent cohorts. In contrast to previous reports, in which shuffled versions of networks also performed well, we show that performance drops substantially when SyNet links are shuffled (while retaining the same set of genes), suggesting that SyNet connections are truly informative. We further evaluate the content and structure of SyNet by overlaying it with known gene sets and existing networks, revealing marked enrichment for known breast cancer prognostic markers. While the overlap with existing networks is highly significant, the majority of direct links in SyNet are absent from these networks, explaining the observed lack of performance when NOPs are guided by phenotype-unaware networks. Interestingly,
SyNet links can be reliably predicted from existing networks when more complex topological descriptors are employed. Taken together, our findings suggest that, compared to generic gene networks, phenotype-specific networks, which are derived directly from labeled data, can provide superior performance while at the same time revealing valuable insight into the etiology of breast cancer. We first evaluated NOP performance for three existing methods (Park, Chuang and Taylor) and the Group Lasso (GL) when supplied with a range of networks, including generic networks, tissue-specific networks and SyNet. As a baseline model, we used a Lasso classifier trained on all genes in our expression dataset (n = 11748) without network guidance. The Lasso exhibits superior performance among the many linear and non-linear classifiers evaluated on our expression dataset (see S3 for details). The AUC of the four NOPs, presented in Fig 2, clearly demonstrates that SyNet improves the performance of all NOPs, except for the Park method, for which it performs on par with the Correlation (Corr) network. Notably, SyNet is inferred using training samples only, which prevents “selection bias” in our assessments 50. Furthermore, comparison of the baseline model performance (i.e. Fig 2, rightmost bar) with the other NOPs supports previous findings that many existing NOPs do not outperform regular classifiers that do not use networks 8, 21, 32. The GL clearly outperforms all other methods, in particular when it exploits the information contained in SyNet. This corroborates our previous finding 8 that existing methods which construct meta-genes by averaging are suboptimal (see S1 for a more extensive analysis). The GL using the Corr network also outperforms the baseline model, albeit non-significantly (p ~ 0.6), which is in line with previous reports 23. It should be noted that across all these experiments an identical set of samples is used to train the models, so that any performance deviation must be due to differences in (i) the set of utilized genes or (ii) the integration of the genes into meta-genes. In the next two sections, we investigate these factors in more detail. Networks only include genes that are linked to at least one other gene. As a result, networks provide a way of ranking genes based on the number and weight of their connections. One explanation for why NOPs can outperform regular classifiers is that networks provide an a priori gene (feature) selection 32. To test this hypothesis and determine the feature selection capabilities of SyNet, we compare classification performances obtained using the baseline classifier (i.e. Lasso) trained on the genes enclosed in each network. While this classifier performs well compared to the other standard classifiers that we investigated (see S3 for details), it cannot exploit information contained in the links of a given network. Thus, any performance difference must be due to the genes in the network. The number of genes in each network under study is optimized independently by varying the threshold on the weighted edges in the network and removing unconnected genes (see section “Regular classifiers and Network based prediction models” for network size optimization details). The edge weight threshold and the Lasso regularization parameter were determined simultaneously using a grid-search cross-validation scheme (see S5 for details). Fig 3 provides the optimal performances for 12 distinct networks along with the number of genes used in the final model (i.e.
genes with non-zero Lasso coefficients). We also included the baseline model in which all genes (n = 11748) are used to train the Lasso classifier (rightmost bar). The results presented in Fig 3a demonstrate that SyNet is the only network that performs markedly better than the baseline model trained on all genes. Interestingly, we observe that SyNet is the top-performing network while utilizing a number of genes comparable to the other networks. The second-best network is the Corr network. We argue that the superior performance of SyNet over the Corr network stems from the disease specificity of the genes in SyNet, which helps the predictor to focus on the relevant genes only. It should be noted that the data on which SyNet and the Corr network are constructed are completely independent from the validation data on which the performance is based, owing to our multi-layer cross-validation scheme (see Methods and S5), which avoids selection bias 50. We conclude that dataset-specific networks, in particular SyNet, which also exploits label information, provide a meaningful feature selection that is beneficial for classification performance. Our results show that none of the tissue-specific networks outperform the baseline. Despite the modest performance, it is interesting to observe that the performance of these networks increases as more relevant tissues (e.g.
breast and lymph node networks) are utilized in the classification. Additionally, we observe that tissue-specific networks do not outperform the generic networks. This may be due to the fact that generic networks predominantly contain broadly expressed genes with fundamental roles in cell function, which may still be relevant to survival prediction. A similar observation was made for GWAS, where SNPs in these widely expressed genes can explain a substantial amount of missing heritability 51. In addition to classifier performance, an important motivation for employing NOPs is to identify stable gene signatures, that is, the same genes being selected irrespective of the study used to train the models. Gene signature stability is necessary to confirm that the identified genes are independent of dataset-specific variations and therefore are true biological drivers of the disease under study. To measure signature consistency, we assessed the overlap of selected genes across all repeats and folds using the Jaccard index. Fig 3b shows that a Lasso trained on genes preselected by SyNet identifies more similar genes across folds and studies than the other networks. Surprisingly, despite the fact that the expression data from which SyNet is inferred change in each classification fold, the signature stability for SyNet is markedly better than for generic or tissue-specific networks that use a fixed set of genes across folds. Therefore, our results demonstrate that the synergistic genes in SyNet truly aid the classifier in robustly selecting signatures across independent studies. The ultimate goal of employing NOPs, compared to classical models that do not use network information, is to improve prognosis prediction by harnessing the information contained in the links of the given network. Therefore, we next aimed to assess to what extent the connections between genes, as captured in SyNet and other networks, can help NOPs to improve
their performance beyond what is achievable using individual genes. As before, we utilized identical datasets (in terms of genes, training and test samples) in the inner and outer cross-validation loops to train all four NOPs as well as the baseline model, which uses a Lasso trained on all genes (n = 11748). Our results, presented in Fig 4a, clearly demonstrate that, compared to the other NOPs under study, the GL guided by SyNet achieves superior prognostic prediction for unseen patients selected from an independent cohort. To confirm that NOP performance using SyNet is the result of the network structure, we also applied the GL to a shuffled version of SyNet (Fig 4a). We observe a substantial deterioration of the AUC, supporting the conclusion that not only the genes but also the links contained in SyNet are important for achieving good prediction. Moreover, this observation rules out that the GL by itself provides enhanced performance compared to the standard Lasso. The result of a similar assessment for the Corr network is given in S12. Additionally, we found that SyNet remains predictive even when the dataset is downsampled to 25% of samples (see S13 for details). We also evaluated a recently developed set of subtype-specific networks for breast cancer 52 and found that SyNet markedly outperforms these networks in predictive performance (see S18 for details). We next assessed the performance gain of the network-guided model compared to a Lasso model that cannot exploit network information. To this end, the GL was trained on each network, whereas the Lasso was trained on the genes present in that network. Fig 4b shows the results of this analysis. We find that the largest gain in GL performance is achieved when using SyNet (Fig 4b, x-axis), indicating that the links between genes in SyNet truly aid classification performance beyond what is obtained from the feature selection capabilities of the Lasso.
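The network-shuffling control can be sketched as follows. This is a minimal stand-in assuming a simple endpoint-rewiring scheme (the paper's exact randomisation procedure is described in its Methods); the function name `shuffle_links` and the toy gene labels are illustrative only. The key property it preserves is the gene set: only the wiring changes.

```python
import random

def shuffle_links(edges, seed=0):
    """Rewire an undirected network while keeping the same genes: pool all
    link endpoints, shuffle them, and re-pair them at random. Self-loops
    and duplicate links are discarded. Illustrative scheme only, not
    necessarily the exact randomisation used in the paper."""
    rng = random.Random(seed)
    ends = sorted(gene for edge in edges for gene in edge)  # sorted for reproducibility
    rng.shuffle(ends)
    shuffled = set()
    for a, b in zip(ends[::2], ends[1::2]):
        if a != b:                               # drop self-loops
            shuffled.add(tuple(sorted((a, b))))  # undirected, deduplicated
    return shuffled

# Example: a toy 4-gene network
edges = {("g1", "g2"), ("g2", "g3"), ("g3", "g4")}
rewired = shuffle_links(edges)
```

Because only the wiring changes, any AUC drop observed after training the model on the rewired network can be attributed to the loss of the original link structure rather than to a different gene set.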
Fig 4c provides the Kaplan-Meier plot in which each patient is assigned to a good or poor prognostic class according to the frequency of the predicted prognosis across 10 repeats (ties are broken by random assignment to one of the classes), for the Lasso as well as the Group Lasso. The result of this analysis suggests that the superior performance of the GL compared to the Lasso stems mostly from the GL's ability to better discern patients with poor prognosis. An important property of an outcome predictor is to exhibit constant performance irrespective of the dataset used for training the model (i.e. performance stability). This is a highly desirable quality, as concerns have been raised regarding the highly variable performances of breast-cancer classifiers applied to different cohorts 7, 53. To measure performance stability, we calculated the standard deviation of the AUC for the Lasso and the GL. The y-axis in Fig 4b represents the average difference in standard deviation between the Lasso and the GL across all evaluated folds and repeats (14 folds and 10 repeats). Based on this figure, we conclude that a NOP model guided by SyNet not only provides superior overall performance, but also offers improved stability of the classification performance. Finally, we investigated the importance of hub genes in SyNet (genes with >4 neighbors) and observed that comparable performance can be obtained with a network consisting exclusively of hub genes, at the cost of reduced performance stability (see S14 for details). Moreover, we did not observe a performance gain for a model governed by combined links from multiple networks (either by intersection or union; see S15 for details). We further confirmed that the performance gain of the network-guided GL is preserved when networks are restricted to have an equal number of links (see S7 for details), or when links with lower confidence are included in the network (see S16 for details). We also considered the more
complex Sparse Group Lasso (SGL), which offers an additional level of regularization (see S1 Text for details). No substantial difference between GL and SGL performance was found (see S8 for details). Likewise, we did not observe substantial performance differences when the number of genes, the group size and the regularization parameters were simultaneously optimized in a grid search (see S9 for details). Together, these findings can be considered the first unbiased evidence of a true classification performance improvement, in terms of average AUC and classification stability, by a NOP. Many curated biological networks suffer from an intrinsic bias, since genes with well-known roles are the subject of more experiments and thus become more extensively and accurately annotated 54. Post-hoc interpretation of the features used by NOPs, often by means of an enrichment analysis, will therefore be affected by the same bias. SyNet does not suffer from such bias, as its inference is purely data driven. Moreover, since SyNet is built from gene pairs that contribute to the prediction of clinical outcome, we expect that the genes included in SyNet not only relate to breast cancer; they should play a role in determining how aggressively the tumor behaves, how advanced the disease is, or how well it responds to treatment. To investigate the relevance of the genes contained in SyNet to the development of breast cancer and, more importantly, to clinical outcome, we ranked all pairs according to their median fitness (Fij) across the 14 studies and selected the top 300 genes (encompassing 3544 links). This cutoff was frequently chosen by the GL as the optimal number of genes in SyNet (see section “SyNet improves NOP performance”). Fig 5 visualizes this network, revealing three main subnetworks and a few isolated gene pairs. We performed functional enrichment for all genes as well as for the subcomponents of the three large subnetworks in SyNet
using Ingenuity Pathway Analysis (IPA) 55. IPA reveals that, out of the 300 genes in SyNet, 287 genes have a known relation to cancer (2e-06