diff --git "a/laysummary/test_data.json" "b/laysummary/test_data.json" new file mode 100644--- /dev/null +++ "b/laysummary/test_data.json" @@ -0,0 +1,16 @@ +{"Unnamed: 0":2435,"id":"journal.pcbi.1002614","year":2012,"title":"Literature Based Drug Interaction Prediction with Clinical Assessment Using Electronic Medical Records: Novel Myopathy Associated Drug Interactions","sections":"Drug-drug interactions ( DDIs ) are a major cause of morbidity and mortality and lead to increased health care costs 1\u20133 ., DDIs are responsible for nearly 3% of all hospital admissions 4 and 4 . 8% of admissions in the elderly 1 ., And with new drugs entering the market at a rapid pace ( 35 novel drugs approved by the FDA in 2011 ) , identification of new clinically significant drug interactions is essential ., DDIs are also a common cause of medical errors , representing 3% to 5% of all inpatient medication errors 5 ., These numbers may actually underestimate the true public health burden of drug interactions as they reflect only well-established DDIs ., Several methodological approaches are currently used to identify and characterize new DDIs ., In vitro pharmacology experiments use intact cells ( e . g . hepatocytes ) , microsomal protein fractions , or recombinant systems to investigate drug interaction mechanisms ., The FDA provides comprehensive recommendations for in vitro study designs , including recommended probe substrates and inhibitors for various metabolism enzymes and transporters 6 ., The drug interaction mechanisms and parameters obtained from these in vitro experiments can be extrapolated to predict in vivo changes in drug exposure ., For example , a physiologically based pharmacokinetics model was developed to predict the clinical effect of mechanism based inhibition of CYP3A by clarithromycin from in vitro data 7 ., However , in vitro experiments alone often cannot determine whether a given drug interaction will affect drug efficacy or lead to a clinically significant adverse drug reaction ( ADR ) ., In vivo clinical pharmacology studies utilize either randomized or cross-over designs to evaluate the effect on an interaction on drug exposure ., Drug exposure change serves as a biomarker for the direct DDI effect , though drug exposure change may or may not lead to clinically significant change in efficacy or ADRs ., The FDA provides well-documented guidance for conducting in vivo clinical pharmacology DDI studies 6 ., If well-established probe substrates and inhibitors are used , involvement of specific drug metabolism or transport pathway can be demonstrated by in vivo clinical studies ., For example , using selective probe substrates of OATPs ( pravastatin ) and CYP3A ( midazolam ) and probe inhibitors of OATPs ( rifampicin ) and CYP3A ( itraconazole ) , it was shown that hepatic uptake via OATPs made the dominant contribution to the hepatic clearance of atorvastatin in an in vivo clinical PK study 8 ., However , due to overlap in substrate selectivity , an in vivo DDI study alone will often not provide mechanistic insight into the DDI ., Finally , in populo pharmacoepidemiology studies use a population-based approach to investigate the effect of a DDI on drug efficacy and ADRs ., For example , the interactions between warfarin and several antibiotics were evaluated for increased risk of gastrointestinal bleeding and hospitalization in a series of case-control and case-crossover studies using US Medicaid data 9 ., Indeed , epidemiological studies using large clinical datasets can 
identify potentially interacting drugs within a population , but these studies alone are insufficient to characterize pharmacologic mechanisms or patient-level physiologic effects ., The aforementioned in vitro , in vivo , and in populo research methods are complementary in characterizing new drug-drug interactions ., Yet these methods are all limited by their relatively small scale ., Such studies usually focus on a few drug pairs for one or a limited number of metabolizing enzymes or transporters at a time ., Performing large scale screening for novel drug interactions requires higher throughput strategies ., Literature mining and data mining have become powerful tools for knowledge discovery in biomedical informatics , and are particularly useful for hypothesis generation ., A recent notable example in clinical pharmacology is the successful detection of novel DDIs through mining of the FDA's Adverse Event Reporting System 10 ., In this study , pravastatin and paroxetine were found to have a synergistic effect on increasing blood glucose ., This finding was validated in three large electronic medical record ( EMR ) databases ., While a ground-breaking success , this approach provides little evidence regarding the mechanism of the interaction ., In this paper , we present a novel approach using literature mining for screening of potential DDIs based on mechanistic properties , followed by EMR-based validation to identify those interactions that are clinically significant ., We focus on clinically and statistically significant DDIs that increase the risk of myopathy ., Our initial drug dictionary consisted of 6937 drugs ., Of these , 1492 drugs were validated as FDA approved drugs ( Figure 1 ) ., Among these 1492 drugs , our text mining approach identified 232 drugs as either CYP substrates or inhibitors ( Table S1 ) ., Recall rate ( i . e . the proportion of true positives identified by the text mining method among all the true positives ) and accuracy ( i . e . the proportion of true positives among the text mined results ) were used to evaluate the text mining performance ., The recall rate of this text mining analysis was 0 . 97 , with the information retrieval ( IR ) step being rate-limiting .
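The two evaluation metrics are simple count ratios; a minimal Python sketch with hypothetical counts (not the study's data), noting that the paper's "accuracy" corresponds to what is usually called precision:

```python
# Minimal sketch of the two text mining evaluation metrics; the counts
# below are hypothetical placeholders, not the study's data.

def recall(true_positives_found: int, all_true_positives: int) -> float:
    # Proportion of all true positives that the text mining method retrieved.
    return true_positives_found / all_true_positives

def precision(true_positives_found: int, mined_total: int) -> float:
    # Proportion of true positives among the text-mined results
    # (referred to as "accuracy" in this paper).
    return true_positives_found / mined_total

print(recall(97, 100))    # e.g. 0.97, the reported recall of the IR step
print(precision(50, 50))  # e.g. 1.0 after manual curation of the IE step
```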
In the information extraction ( IE ) step , the two initial curators agreed on 78% of cases ., The third curator was able to establish DDI relevance and extract information in the 22% of cases which were in disagreement ., The third curator also confirmed 100% accuracy among 20% of randomly chosen abstracts that the first two curators had agreed upon ., Therefore , the accuracy of our text mining analysis reached 100% ., These drugs' metabolism and inhibition enzymes were experimentally determined by probe substrates and inhibitors recommended by the FDA Drug-Drug Interaction guidelines ., Their categorizations are reported in Table S1 ., Out of the 149 CYP substrates identified , 102 ( 68% ) were substrates of CYP3A4\/5 ., This was consistent with literature reports that about half of the drugs on the market which undergo metabolism are metabolized by CYP3A 11 ., A total of 59 drugs were found to undergo metabolism by more than one CYP enzyme ., We also identified 123 CYP inhibitors , with CYP3A4\/5 , CYP2D6 , CYP2C9 , CYP1A2 , and CYP2C19 having comparable numbers of inhibitors ( 48 , 39 , 39 , 39 , and 31 , respectively ) ., Fewer inhibitors were identified for other enzymes ., Fifty inhibitors were found to inhibit more than one enzyme ., Among 232 drugs with known metabolism and\/or inhibition enzyme information ( Figure 1 ) , 13 , 197 drug interaction pairs were predicted based on their pertinent CYP enzymes ( Figure 2 ) ., Among these 13 , 197 predicted DDIs , 3670 DDI pairs were prescribed as co-medications in actual patients within the Common Data Model ( CDM ) dataset ., In other words , these 3670 predicted DDI pairs may have potential real-world clinical implications ., Among those 3670 predicted DDI pairs from in vitro studies , text mining identified 196 pairs with published clinical drug-drug interaction study results ., These in vivo studies tested whether a substrate drug's exposure ( i . e . systemic drug concentration ) was increased when co-administered with an inhibitor ., The recall rate of this text mining analysis was 0 . 94 ., The accuracy of this text mining analysis reached 100% , after manual IE from two curators and validation from the third ., Among these 196 in vivo validated DDI pairs , 123 of them were found to have significant DDIs ( Figure 2 ) , i . e . drug exposure increased significantly ( P<0 . 05 ) , and it increased by more than 2-fold ., The remaining 73 pairs were considered not to be clinically significant DDIs ., In our CDM dataset , there were medication records on 817 , 059 patients ., Among these patients , 59 , 572 ( 7 . 2% ) experienced myopathy events ( Table 1 ) ., Two major subcategories of myopathy , myalgia and myositis\/muscle weakness , accounted for more than 95% of the cases ., There were 53 rhabdomyolysis cases ., In the cohort of individuals suffering a myopathy event , the average age was 40 . 2 years ( SD\\u200a=\\u200a23 years ) ; 59 . 1% were female , and the average medication frequency was 3 . 8 ( SD\\u200a=\\u200a2 . 5 ) ., However , 65 . 8% of the race data were missing ., In our initial data analysis , we found that females had a higher myopathy risk than males ( 8 . 6% vs 5 . 4% , p<2e-16 , Table 2 ) ; and each one year increase in age was associated with a 0 . 15% higher myopathy risk ( p<2e-16 ) ., These results were consistent with the literature 12 ., The 3670 DDI pairs identified in the CDM database were tested using the additive model , i . e . whether an inhibitor would increase the myopathy risk of the substrate compared to the substrate alone ., Both age and sex were adjusted for in the logistic regression ., The p-value threshold was chosen as 0 . 05\/3670\\u200a=\\u200a0 . 0000136 after Bonferroni correction , with OR greater than 1 .
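A minimal sketch of this additive test, assuming a patient-level table restricted to substrate users with hypothetical column names (myopathy, age, sex, inhibitor); the simulated data only demonstrate the mechanics, not the paper's exact model:

```python
# Sketch of the additive DDI test with hypothetical columns; among
# substrate users, does co-medication with the inhibitor raise myopathy
# risk, adjusting for age and sex?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "myopathy": rng.integers(0, 2, n),   # 0/1 outcome
    "age": rng.normal(40, 23, n),
    "sex": rng.integers(0, 2, n),        # 1 = female
    "inhibitor": rng.integers(0, 2, n),  # inhibitor co-medication flag
})

fit = smf.logit("myopathy ~ age + sex + inhibitor", data=df).fit(disp=False)
p_value = fit.pvalues["inhibitor"]
odds_ratio = np.exp(fit.params["inhibitor"])

# Bonferroni threshold over the 3670 predicted pairs, as in the paper.
significant = (p_value < 0.05 / 3670) and (odds_ratio > 1)
print(p_value, odds_ratio, significant)
```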
There were 124 and 287 significant DDI pairs for CYP2D6 and CYP3A4\/5 enzymes , respectively ( Figure 3 and Table S2 ) ., The other enzymes had fewer significant DDI pairs ., Pathway enrichment analysis suggested similar results , i . e . CYP2D6 and CYP3A4\/5 enzymes had more significant DDI pairs than the other enzymes ( p\\u200a=\\u200a8E-8 and 4E-2 , respectively ) ., Although this DDI analysis was confounded by the other co-medication variables , it was indeed a global description of DDI effects from various CYP enzymes ., This global analysis provided us with a picture of the metabolism enzymes that were most important in understanding the increased myopathy risk associated with DDIs ., In order to remove the effect of the myopathy risk of the inhibitor itself , a synergistic DDI test was conducted to determine whether substrate and inhibitor together have a higher risk than the combined additive risk when the substrate or inhibitor is taken alone ., Both age and sex were included as covariates ., DDI pairs were removed if either one of the drugs was prescribed to treat symptoms of myopathy ., We set the significance threshold as p\\u200a=\\u200a0 . 0000136 , to correct for the multiple primary hypotheses tested on the 3670 predicted DDI pairs .
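One common way to operationalize such a synergy test is an interaction term on top of the two main effects; whether this matches the paper's exact parameterization is not specified here. A sketch with the same hypothetical column idiom as above:

```python
# Sketch of a synergistic test: is the joint risk of substrate plus
# inhibitor larger than what the two main effects alone would predict?
# Columns and data are hypothetical, as in the previous sketch.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "myopathy": rng.integers(0, 2, n),
    "age": rng.normal(40, 23, n),
    "sex": rng.integers(0, 2, n),
    "substrate": rng.integers(0, 2, n),
    "inhibitor": rng.integers(0, 2, n),
})

# The substrate:inhibitor coefficient tests for risk beyond the separate
# contributions of each drug, adjusted for age and sex.
fit = smf.logit("myopathy ~ age + sex + substrate + inhibitor"
                " + substrate:inhibitor", data=df).fit(disp=False)
print(fit.pvalues["substrate:inhibitor"])
```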
Table 3 presents the five significant synergistic DDI pairs: ( loratadine , simvastatin ) , ( loratadine , alprazolam ) , ( loratadine , duloxetine ) , ( loratadine , ropinirole ) , and ( promethazine , tegaserod ) ., Their relative risks were ( 1 . 69 , 1 . 86 , 1 . 94 , 3 . 21 , 3 . 00 ) respectively , the p-values were ( 2 . 03E-07 , 2 . 44E-08 , 5 . 60E-07 , 2 . 60E-07 , 8 . 22E-07 ) respectively , and their associated enzymes were primarily CYP3A4\/5 and CYP2D6 ., Additional analyses of myopathy were performed for these five DDI pairs ., In the first myopathy analysis , the total number of medications ordered during the drug exposure window was added as a covariate in the logistic regression ., This variable was used as a surrogate marker for the comorbidities of a patient ., The average number of medications used by individuals during the drug exposure window was 3 . 6 with SD\\u200a=\\u200a2 . 4 ., Table 4 presents the five DDI effects on myopathy after adjusting for the total number of medications ., Compared to Table 3 , all the single drug myopathy risks and drug combination risks were reduced after adjusting for the number of co-medications ., The DDI evidence became even more significant ( p-values less than 3e-12 ) , and risk ratios became even larger , between 2 . 72 and 7 . 00 ., The medication frequency itself was also associated with increased myopathy risk ., The addition of one co-medication was associated with an increased myopathy risk between 0 . 6% and 0 . 9% in testing the 5 DDI pairs ., All p-values are less than 2e-16 ., In the second myopathy analysis , only the first myopathy events were considered , because co-medications administered after the first myopathy event but before the follow-up myopathy events were potential confounders ., In other words , it was difficult to determine whether the co-medication drug exposure resulted from the myopathy or caused the myopathy ., Table S3 presents the data analysis for the DDI pairs: ( loratadine , simvastatin ) , ( loratadine , alprazolam ) , ( loratadine , ropinirole ) , ( loratadine , duloxetine ) , and ( promethazine , tegaserod ) ., Their relative risks are ( 1 . 34 , 1 . 38 , 1 . 38 , 1 . 81 , 1 . 70 ) respectively , and the p-values are ( 3 . 20E-03 , 2 . 1E-05 , 9 . 4E-04 , 3 . 1E-03 , 2 . 3E-03 ) respectively ., This analysis based on the first myopathy event with these five selected DDI pairs confirmed the trend of our previous synergistic DDI analysis ., Unlike DDI signal detection from AERS by Dr . Altman's group 10 , we enriched our EMR signal detection by focusing on CYP-mediated DDIs that were mined and predicted from PubMed abstracts ., There are multiple recent publications on drug interaction text mining ., Two automatic literature mining systems were developed to predict drug interactions based on their associated metabolism enzymes 13 , 14 ., An evidential approach was developed to differentiate in vitro and in vivo DDI studies , curate drug metabolism and inhibition enzymes , and predict DDIs based on their pertinent enzymes 15 ., Our text mining approach took advantage of these methods , i . e . metabolism based DDI prediction , and evaluated the text mining performance more stringently ., The IR step of our method is an automatic algorithm with a high recall rate ( 0 . 97 ) , while the IE step is a manual curation step with high precision ( 100% ) ., In addition , we implemented CYP enzyme probe substrates and inhibitors from the FDA guidance into the literature mining method ., This strategy supplies information on the potential mechanism for the predicted DDIs ., Our current text mining method focuses on pharmacokinetic-based drug interaction literature , i . e . reports of substrate drug exposure changed by a drug interaction ., Text mining which focuses on pharmacodynamic ( PD ) DDI literature has been recently discussed 16 , 17 ., PD DDI literature reports drug efficacy or side-effect changes , but it usually does not report drug exposure change ., Among the 13 , 197 predicted DDIs from in vitro PK study literature mining , 3670 of them may have clinical relevance , i . e . they were taken as co-medications by at least some of the 2 . 2 million patients in our clinical dataset ., However , only 196 of them ( 5 . 3% ) have been tested in clinical pharmacokinetic DDI trials ., Among these 196 clinically tested DDIs , 123 of them ( 62 . 7% ) showed a significant substrate drug exposure increase when co-administered with the inhibitor .
This striking finding calls for further evaluation of those predicted DDIs that have not been subjected to rigorous study ., As a matter of fact , all five DDI pairs which showed an increased myopathy risk in our pharmaco-epidemiology study lack clinical pharmacokinetic studies ., The FDA labels of all 7 of the drugs which comprise the five significant DDI pairs report myopathy-related side effects ( Table S4 ) ., This evidence confirms the myopathy risk for each individual drug ., In order to understand the mechanisms of each interaction , we further explored the literature regarding those agents ., In Figure 4 and Table S5 , we integrated information on the metabolism and inhibition enzymes of those 7 drugs from a full-text based literature review of reported in vitro studies of the drugs ., Table 5 presents the DDI potency prediction for the five DDI pairs ., Loratadine ( substrate ) and simvastatin ( inhibitor ) were predicted to have a strong DDI through the CYP3A4\/5 enzyme ., Tegaserod ( substrate and inhibitor ) and promethazine ( substrate and inhibitor ) were predicted to have a strong DDI through the CYP2D6 enzyme ., Their interactions are mixed inhibition and auto-inhibition ., The other four drug pairs were predicted to have moderate DDIs: loratadine ( inhibitor ) and omeprazole ( substrate ) interact through both the CYP2C19 and CYP3A4\/5 enzymes; loratadine ( inhibitor ) and alprazolam ( substrate ) interact through CYP3A4\/5; loratadine ( substrate ) and duloxetine ( inhibitor ) interact through the CYP2D6 enzyme; and the loratadine ( inhibitor ) and ropinirole ( substrate ) interaction is through CYP3A4\/5 ., Two DDI data analysis strategies were implemented to identify drug-drug interactions associated with an increased risk for myopathy ., The first approach employed an additive model coupled with a CYP metabolism pathway enrichment analysis ., This strategy stems from the discovery-oriented nature of bioinformatics research , i . e . to search for commonality among many hypothesis tests .
The second strategy employed a synergistic model coupled with extensive confounder adjustment ., This strategy follows the more stringent pharmaco-epidemiology considerations , which heavily control for false positives ., Unlike the additive model , the synergistic model can adjust for the myopathy risk effect of an inhibitor in the presence of other potential confounders ., Therefore , the additive model would potentially identify more false positive DDIs ., However , the additive model is more powerful than the synergistic model in identifying true positive DDIs ., Many more DDIs were identified by the additive model based DDI analysis than by the synergistic strategy ., Because pathway enrichment analysis allows more flexibility toward false positive DDIs , the additive model identified CYP3A4\/5 and CYP2D6 as the enzymes enriched for significant DDI pairs ., Although the synergistic model DDI analysis only inferred five significant DDI pairs , upon additional literature review , it was found that these pairs also showed mechanistic involvement of the CYP2D6 and CYP3A4\/5 enzymes ., The consistency of the mechanistic interpretations of the two separate DDI analysis strategies delivers an encouraging message: the bioinformatics approach and the pharmaco-epidemiology approach are complementary and mutually supportive ., Our synergistic DDI test is a very stringent approach , compared to the additive approach used by other investigators 9 , 18 , 19 ., We recognize that our synergistic DDI test may exclude some true DDIs ., It assumes that all myopathy is the result of drug administration , and that patients who don't take the DDI drugs won't have myopathy ., However , there is a background rate of myopathy in patients that is not due to either of the two drugs in a specific DDI ., If the patients who don't take the drugs have a baseline risk of myopathy , the relative risk estimated through our synergistic DDI test will be smaller than the true relative risk ., In our follow-up sensitivity analysis , medication frequency was adjusted for in the DDI analysis ., This factor would also account for a portion of the baseline myopathy risk ., Another potential approach to estimate the baseline myopathy risk is to identify a control patient group that matches the demographics , co-morbidity , and co-medication distributions of the group exposed to the DDIs ., This approach deserves further investigation ., Like many pharmaco-epidemiology studies using observational data , our analysis of the DDI effect on myopathy has several limitations ., Creating an accurate phenotypic definition using billing codes may be unreliable , with both false-positives and false-negatives likely to occur ., Our dataset also lacked clinical notes from which more detailed symptom data could be extracted ., Further research including validation with manual chart review is necessary to establish optimal phenotypic definitions for myopathy , as well as more granular definitions for myotoxicity and rhabdomyolysis using a combination of ICD9 codes , lab tests , and clinical notes ., Another limitation of our analysis is that it is subject to several potential population biases introduced by the EMR database itself ., Our retrospective observational data do not allow for controlling many potential covariates that a traditional prospective study offers .
In particular , the race data are not complete in our database ., It is equally challenging to design a prospective study to validate our results from a pharmaco-epidemiology study ., Clinical pharmacokinetic studies or further in vitro metabolism\/inhibition studies of the selected DDI pairs found to increase myopathy may provide further validation of an interaction between the drugs ., We are also looking forward to validating our results in another large EMR database ., Our text mining and DDI prediction is CYP metabolism enzyme based ., Therefore , our interpretation of the five significant drug interactions focuses only on CYP drug-drug interaction mechanisms ., However , this does not preclude the involvement of other DDI mechanisms , such as drug transporter interactions or pharmacodynamic interactions ., In a recent GWAS , expression of the OATP1B1 transporter was shown to predict the myopathy risk associated with simvastatin 20 ., Therefore , it is possible that loratadine interacts with simvastatin through this or other transporter mechanisms ., Studies are currently underway to further characterize the mechanisms of the five identified DDIs ., The concomitant use of CYP3A metabolized statins ( atorvastatin , lovastatin and simvastatin ) with strong CYP3A inhibitors ( e . g . ketoconazole and itraconazole ) reportedly increases the risk of statin-induced myopathy ., In addition , case reports of increased myopathy in transplant recipients being treated with tacrolimus or cyclosporine 21 argue for the avoidance of this combination ., The interaction between statins and fibrates , specifically gemfibrozil , leading to increased risk of myopathy is well recognized 22 ., Gemfibrozil is a substrate of CYP3A but not a potent inhibitor ., Thus , it is likely that this interaction occurs through pharmacodynamic , not pharmacokinetic , based interactions ., Although these interactions are widely reported , we found no increased risk of myopathy with concomitant use of ketoconazole , itraconazole , tacrolimus , or gemfibrozil within the CDM database ., The myopathy risks of these DDIs are reported in Table 6 ., This finding is likely due to a limitation of our data analysis , in which we define concomitant drug administration by prescription orders that occur within a predefined timeframe ., As these drug interactions are well-known , it is likely that although the two drugs may have been ordered within the predetermined time window , the individual may have discontinued one medication before starting the second ., For some drugs that are used short-term , e . g . ketoconazole , it will be difficult to identify true concomitant use from prescription records ., As a matter of fact , among the statin DDI pairs in Table 6 , fewer than 110 patients took both drugs within the pre-defined one month interval in each pair ., This limited our power to detect significant DDIs to less than 15% , if we anticipate a 1 . 5-fold RR of DDI myopathy .
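A rough simulation sketch of this power statement; the baseline myopathy risk and group sizes below are assumptions chosen for illustration, not parameters reported by the paper:

```python
# Simulation-based power estimate: with ~110 co-exposed patients, how
# often does a two-proportion test detect a 1.5-fold relative risk?
# The 7% baseline risk is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(0)
baseline_risk, rr = 0.07, 1.5
n_exposed = n_unexposed = 110
n_sims = 2000
hits = 0
for _ in range(n_sims):
    a = rng.binomial(n_exposed, baseline_risk * rr)
    b = rng.binomial(n_unexposed, baseline_risk)
    # Two-proportion z-test at alpha = 0.05.
    p_pool = (a + b) / (n_exposed + n_unexposed)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_exposed + 1 / n_unexposed))
    if se > 0:
        z = (a / n_exposed - b / n_unexposed) / se
        hits += abs(z) > 1.96
print(hits / n_sims)  # estimated power; low, consistent with the <15% cited
```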
Given that the medication data in our CDM are relatively recent , between 2004 and 2009 , it is likely that clinicians were aware of potential interactions and thus suggested patients avoid co-administration of these interacting drugs ., As described in the introduction , an in vitro , an in vivo , or an in populo pharmacologic study alone cannot cover the whole spectrum of mechanistic and clinically significant DDI research ., These studies usually focus on a few drug pairs for one or a limited number of metabolizing enzymes or transporters at a time ., In this paper , we combined a literature discovery approach and a large EMR database validation method for novel DDI prediction and clinical significance assessment ., The scale of our research covered all FDA approved drugs ., The literature based discovery approach predicted new DDIs and their associated CYP-mediated metabolism enzymes ., The clinical significance of these interactions was then assessed in a large database of electronic medical records ., This translational bioinformatics approach successfully identified five DDI pairs associated with increased myopathy risk ., Compared to traditional in vitro , in vivo , and in populo DDI studies , our proposed translational bioinformatics approach covers a broader spectrum and identifies risk on a larger scale ., It certainly motivates more in vitro studies to investigate alternative DDI mechanisms and more clinical pharmacokinetic studies to investigate the clinical significance of these DDIs ., The Indiana Network for Patient Care ( INPC ) is a health information exchange data repository containing medical records on over 11 million patients throughout the state of Indiana ., The Common Data Model ( CDM ) is a derivation of the INPC containing coded prescription medications , diagnosis , and observation data on 2 . 2 million patients between 2004 and 2009 ., The CDM contains over 60 million drug dispensing events , 140 million patient diagnoses , and 360 million clinical observations such as laboratory values ., These data have been anonymized and architected specifically for research on adverse drug reactions through collaboration with the Observational Medical Outcomes Partnership project 23 ., This CDM model is a de-identified electronic medical record database ., All the research work has IRB approval ., Our drug dictionary consists of 6 , 837 drug names that include all brand\/generic\/drug group names ., They were primarily derived from DrugBank 24 ., We then excluded non-approved and experimental drugs , and focused only on FDA approved therapeutic agents , which left 1492 unique drug generic names for the mining purpose ( Figure 1 ) ., The INPC CDM data set has 54490 unique drug \u201cConcept IDs\u201d ., A Concept ID in the CDM typically maps to an RxNorm clinical drug ( e . g . , simvastatin 20 mg ) or ingredient ( simvastatin ) ., Some Concept IDs may contain multiple drug components ( e . g . , lisinopril\/hydrochlorothiazide ) ., Our drug dictionary was mapped to CDM Concept IDs using regular expression matching and manual review .
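A minimal sketch of such regular-expression mapping; the Concept IDs and descriptions below are made-up illustrations, not CDM content:

```python
# Sketch of regex-based mapping from dictionary drug names to Concept IDs;
# the IDs and descriptions are hypothetical examples.
import re

concept_ids = {
    1001: "simvastatin 20 mg oral tablet",
    1002: "loratadine 10 mg oral tablet",
    1003: "lisinopril/hydrochlorothiazide 20-25 mg tablet",
}

def map_drug(generic_name: str) -> list[int]:
    # Word-boundary match so e.g. "statin" alone would not hit "simvastatin".
    pattern = re.compile(rf"\b{re.escape(generic_name)}\b", re.IGNORECASE)
    return [cid for cid, desc in concept_ids.items() if pattern.search(desc)]

print(map_drug("simvastatin"))          # [1001]
print(map_drug("hydrochlorothiazide"))  # [1003], a multi-component Concept ID
```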
In total , 1293 unique drugs identified from DrugBank were mapped successfully , while 199 drugs could not be matched ., The unmatched drugs were categorized as follows: banned drugs , illicit drugs , organic compounds , herbicides\/insecticides , functional group derivatives , herbal extracts , DrugBank drugs not covered by the CDM , and literature-only drug names ., In our CDM dataset , 817059 patients had medication records available ., Literature mining was conducted on 10 CYP enzymes: ( CYP1A2 , CYP2A6 , CYP2B6 , CYP2C8 , CYP2C9 , CYP2C19 , CYP2D6 , CYP2E1 , CYP3A4\/CYP3A5 ) ( Figure 5 ) ., Please note that these cover all the major CYPs , but not all of the CYPs ., A probe substrate of enzyme E is defined as being selectively metabolized by enzyme E , while a probe inhibitor of enzyme E selectively inhibits enzyme E's metabolic activity ., CYP probe substrates and inhibitors for the DDI text mining approach were selected as those drugs well-established as probes or inhibitors by DDI researchers and defined in the FDA guidance 6 ., The in vitro CYP enzyme substrate and inhibitor text mining and the DDI prediction were divided into the following steps ., In vivo DDI text mining was conducted on those DDI pairs predicted from the in vitro DDI text mining ( Figure S1 ) ., It is broken down into the following steps ., The demographic variables age and sex were adjusted for in the DDI association analyses ., The total number of different medications ordered during the one month drug exposure window was used as a covariate in the logistic regression ., It serves as a surrogate for the patient's overall health status , and adjusts for myopathy effects from medications other than the hypothesized DDI drug pair ., It is recognized that an individual patient can experience multiple myopathy events ., Our drug-condition model considered two situations: all myopathy events and the first myopathy event ., The advantage of selecting the first myopathy event is that it is not confounded with other medications taken between the first and the follow-up myopathy events ., However , limiting the data to the first myopathy event reduces the sample size , and thus the power to identify a DDI ., DDI pairs in which at least one drug was prescribed to treat symptoms of myopathy ( e . g . narcotic and non-steroidal analgesics ) were excluded from the DDI tests ., However , the patients prescribed these drugs were kept in the data analysis .
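A small pandas sketch of how the exposure-window covariate described above could be computed; the table layout and column names are hypothetical, not the CDM schema:

```python
# Sketch of the co-medication count covariate: distinct drugs ordered
# within a one-month exposure window. Columns and data are hypothetical.
import pandas as pd

orders = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "concept_id": [1001, 1002, 1001, 1003, 1004],
    "order_date": pd.to_datetime(
        ["2005-01-03", "2005-01-20", "2005-03-01",
         "2006-06-01", "2006-06-10"]),
})

window_start = pd.Timestamp("2005-01-01")
window_end = window_start + pd.DateOffset(months=1)
in_window = orders[(orders.order_date >= window_start) &
                   (orders.order_date < window_end)]
# Number of distinct medications per patient inside the window.
med_count = in_window.groupby("patient_id")["concept_id"].nunique()
print(med_count)  # patient 1 -> 2 distinct drugs in the window
```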
","headings":"Introduction, Results, Discussion, Methods","abstract":"Drug-drug interactions ( DDIs ) are a common cause of adverse drug events ., In this paper , we combined a literature discovery approach with analysis of a large electronic medical record database to predict and evaluate novel DDIs ., We predicted an initial set of 13197 potential DDIs based on substrates and inhibitors of cytochrome P450 ( CYP ) metabolism enzymes identified from published in vitro pharmacology experiments ., Using a clinical repository of over 800 , 000 patients , we narrowed this theoretical set of DDIs to 3670 drug pairs actually taken by patients ., Finally , we sought to identify novel combinations that synergistically increased the risk of myopathy ., Five pairs were identified with their p-values less than 1E-06: loratadine and simvastatin ( relative risk or RR\\u200a=\\u200a1 . 69 ) ; loratadine and alprazolam ( RR\\u200a=\\u200a1 . 86 ) ; loratadine and duloxetine ( RR\\u200a=\\u200a1 . 94 ) ; loratadine and ropinirole ( RR\\u200a=\\u200a3 . 21 ) ; and promethazine and tegaserod ( RR\\u200a=\\u200a3 . 00 ) ., When taken together , each drug pair showed a significantly increased risk of myopathy when compared to the expected additive myopathy risk from taking either of the drugs alone ., Based on additional literature data on in vitro drug metabolism and inhibition potency , loratadine and simvastatin and tegaserod and promethazine were predicted to have a strong DDI through the CYP3A4 and CYP2D6 enzymes , respectively ., This new translational biomedical informatics approach supports not only detection of new clinically significant DDI signals , but also evaluation of their potential molecular mechanisms .","summary":"Drug-drug interactions are a common cause of adverse drug events ., In this paper , we developed an automated search algorithm which can predict new drug interactions based on published literature ., Using a large electronic medical record database , we then analyzed the correlation between concurrent use of these potentially interacting drugs and the incidence of myopathy as an adverse drug event ., Myopathy comprises a range of musculoskeletal conditions including muscle pain , weakness , and tissue breakdown ( rhabdomyolysis ) ., Our statistical analysis identified 5 drug interaction pairs: ( loratadine , simvastatin ) , ( loratadine , alprazolam ) , ( loratadine , duloxetine ) , ( loratadine , ropinirole ) , and ( promethazine , tegaserod ) ., When taken together , each drug pair showed a significantly increased risk of myopathy when compared to the expected additive myopathy risk from taking either of the drugs alone ., Further investigation suggests that two major drug metabolism proteins , CYP2D6 and CYP3A4 , are involved in these five drug pairs' interactions ., Overall , our method is robust in that it can incorporate all published literature , all FDA approved drugs , and very large clinical datasets to generate predictions of clinically significant interactions ., The interactions can then be further validated in future cell-based experiments and\/or clinical studies .","keywords":"medicine, clinical pharmacology, pharmacoepidemiology, drugs and devices, statistics, text mining, drug information, mathematics, pharmacology, biostatistics, information technology, drug metabolism, adverse reactions, pharmacokinetics, computer science, natural language processing, drug interactions, statistical methods","toc":null}
+{"Unnamed: 0":2026,"id":"journal.pgen.1000464","year":2009,"title":"Epistatic Module Detection for Case-Control Studies: A Bayesian Model with a Gibbs Sampling Strategy","sections":"With the development of modern human and medical genetics , it has been widely accepted that genetic variation plays an important role in the pathogenesis of genetically inherited diseases 1 ., The identification of causative genetic variants therefore becomes the primary step towards the understanding of the genetic principles underlying these diseases ., For Mendelian diseases in which an individual genetic variant in a single gene is both sufficient and necessary to cause a disease , classical statistical approaches such as linkage analysis 2\u20135 and association studies 6 , 7 have shown remarkable successes in the identification of causative genetic variants ., Nevertheless most common diseases are complex ones that are supposed to be caused by multiple genetic variants , their interactive effects , and\/or
their interactions with environmental factors 7 , 8 ., The detection of such interactive effects therefore plays a key role in the understanding of these diseases ., The interactive effects of multiple genetic variants underlying complex diseases are often referred to as epistasis or epistatic interactions ., Recent advances in biomedical studies have been confirming the contribution of epistasis to complex diseases ., For example , Tiret et al reported synergistic effects of polymorphisms in the angiotensin-converting enzyme and the angiotensin-II type 1 receptor gene on the risk of myocardial infarction 9 ., Ritchie et al identified the association of a high-order interaction among four polymorphisms in three estrogen-metabolism genes with breast cancer 10 ., Williams et al reported the influence of a two-locus interaction between polymorphisms in the angiotensin converting enzyme and the G protein-coupled receptor kinase on hypertension susceptibility 11 ., Tsai et al identified the association of a three-locus interaction among polymorphisms in renin-angiotensin system genes with atrial fibrillation 12 ., Cho et al reported the association of a two-locus interaction between polymorphisms in the uncoupling protein 2 gene and the peroxisome proliferator-activated receptor gamma gene with Type 2 diabetes mellitus 13 ., Martin et al reported the influence of a two-locus interaction between polymorphisms in KIR3DL1 and HLA-B on both AIDS progression and plasma HIV RNA 14 ., With these examples , epistasis between multiple genetic variants is now widely believed to be a causative pattern of human complex diseases ., In order to detect epistasis , a number of multi-locus approaches have been developed ., For example , Hoh et al proposed a trimming , weighting , and grouping approach that used the summation of statistics on the basis of single-locus marginal effects and the Hardy-Weinberg equilibrium ( HWE ) for hypothesis testing 15 ., Nelson et al proposed a combinatorial partitioning method ( CPM ) that exhaustively searched for a combinatory genotype group that had the most significant difference in the mean of the corresponding continuous phenotype 16 ., Culverhouse et al proposed a restricted partitioning method ( RPM ) which modified CPM by ignoring partitions that combined individual genotypes with very different mean trait values 17 ., Millstein et al proposed a focused interaction testing framework ( FITF ) in which a prescreening strategy was developed to reduce the number of tests 18 ., Chatterjee et al used Tukey's 1-degree-of-freedom model to detect interacting loci from different regions 19 ., Ritchie et al proposed a multifactor-dimensionality reduction ( MDR ) method in which an exhaustive search was performed to detect combinations of loci with the highest classification capability 10 ., Although these methods have shown success in association studies of small-scale candidate gene sets 10 , 15\u201319 , their effectiveness for large scale case-control data has not yet been validated ., Besides , most of the methods rely strongly on exhaustive search over combinations of multiple loci ., This search strategy , though feasible when the number of candidate genetic variants is small , can hardly be computationally practical for large scale or whole-genome association studies in which the number of candidate genetic variants is typically huge ., For example , a study on Age-related Macular Degeneration ( AMD ) has genotyped more than 100 thousand single nucleotide polymorphism ( SNP ) markers for 96 patients and 50 unaffected people 20 , and a recent genome-wide association study on Parkinson's disease has genotyped more than 400 thousand SNP markers for 270 patients and 271 unaffected people 21 , 22 .
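A back-of-envelope count makes the scale of the problem concrete; a tiny Python illustration of how many k-locus subsets an exhaustive search would have to score:

```python
# How many pairs and triples would an exhaustive multi-locus search
# need to evaluate at genome-wide SNP densities?
from math import comb

for n_snps in (100_000, 400_000):
    print(n_snps, comb(n_snps, 2), comb(n_snps, 3))
# 400,000 SNPs already yield ~8e10 pairs and ~1e16 triples.
```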
With such dense SNPs being genotyped , methods based on exhaustive search are computationally impractical due to the vast number of possible combinations of the SNP markers ., The main challenge for genome-wide association studies is therefore to design computational approaches that are capable of avoiding the \u201ccombinatorial explosion\u201d curse to identify epistatic interactions ., A recent breakthrough in genome-wide epistasis mapping is the introduction of the Bayesian epistasis association mapping ( BEAM ) method 23 that integrates a Bayesian model with the Metropolis-Hastings algorithm to infer the probability that each locus is associated with the susceptibility of a specified disease ., BEAM classifies SNP markers into three types: SNPs unassociated with the disease , SNPs contributing to the disease susceptibility independently , and SNPs influencing the disease risk jointly with each other ., However , the genetic models for complex diseases could be far more complicated than that proposed by BEAM ., For example , the disease-associated SNPs that jointly influence the disease risk may be further divided into subgroups , in which a SNP interacts with other SNPs in the same subgroup , but not with those in the other subgroups ., This situation could be very common in real data , making BEAM ineffective in the exploration of true interactive effects of multiple loci ., To overcome this limitation , in this paper , we give an explicit definition of \u201cepistasis\u201d and define \u201cepistatic modules\u201d as basic units of disease susceptibility loci ., On the basis of this notion , we put forward a Bayesian marker partition model to explain the observed case-control data and further generalize this model to account for the existence of linkage disequilibrium ( LD ) between genetic variants ., To facilitate the identification of epistatic modules , we develop a Gibbs sampling strategy with a reversible jump Markov chain Monte Carlo ( RJ-MCMC ) procedure to simulate the posterior distribution that genetic variants belong to the epistatic modules and further resort to hypothesis testing to screen out statistically significant modules ., In contrast to most of the existing methods that entirely or partially rely on exhaustive search over combinations of loci , the proposed approach , named epiMODE ( epistatic MOdule DEtection ) , directly identifies interactive loci ( epistatic modules ) without enumerating their combinations , thereby being capable of detecting interactive effects of multiple loci from a vast number of genotyped genetic variants ., We systematically compare the proposed approach with three existing methods on seven simulated disease models ., The results show the superior performance of our approach over the other methods ., We further apply the proposed approach to a genome-wide case-control data set for Age-related Macular Degeneration ( AMD ) that contains more than 100 thousand SNPs genotyped for 96 cases and 50 controls 20 and successfully identify two SNPs that are known to be associated with the disease ., Besides , the results also suggest that two other SNPs ( rs1394608 and rs3743175 ) may have interactive effects on the susceptibility of the disease ., We also apply the proposed approach to a genome-wide case-control data set for Parkinson's disease ( 400 thousand SNPs genotyped for 270 cases and 271 controls ) 21 , 22 and identify seven SNP markers that may be associated with the disease .
The concept of epistasis implies that the phenotypic effect of one locus is dependent on one or more other loci ., Nonetheless the definitions of epistasis in biology and statistics are not exactly consistent ., Even from the statistical perspective only , researchers have different understandings of epistasis 24 , 25 ., Considering these inconsistencies , it is necessary to first give a clear definition of epistasis , for the purpose of developing a computational method for identifying multiple loci that contribute to the disease susceptibility ., In this paper , a locus stands for a SNP ., A genotype stands for a set of two alleles ( one inherited from the father and the other from the mother ) at a locus and has three possible values: homozygosity of common alleles , homozygosity of minor alleles , and heterozygosity ., A combinatory genotype represents the genotype of a combination of multiple loci ., For a combination of t loci , the number of all possible combinatory genotypes is $3^t$ ., The penetrance of a combinatory genotype is the probability\/risk that an individual with this combinatory genotype is affected , given the combinatory genotype of the multiple loci ., We first assume that all loci are in linkage equilibrium , also known as independent , and then we generalize the definitions to the situation with linkage disequilibrium between multiple loci ., Let the set of all L loci under investigation be given , and let S be the set of all s disease susceptibility loci that determine the disease risk ., For any two subsets , S1 and S2 , of S ( $S_1 \subseteq S$ , $S_2 \subseteq S$ , and $S_1 \cap S_2 = \emptyset$ ) , their penetrance given the combinatory genotypes $g_{S_1}$ and $g_{S_2}$ , respectively , can be described as $f ( g_{S_1} , g_{S_2} )$ , where $f ( \cdot )$ represents the penetrance of a given combinatory genotype , $g$ a combinatory genotype of the multiple loci , and the function $f$ denotes how combinatory genotypes determine the disease penetrance ., For any given combinatory genotypes of S1 and S2 , if $f ( g_{S_1} , g_{S_2} ) = f_1 ( g_{S_1} ) \, f_2 ( g_{S_2} )$ is always true , the relationship between the two subsets of loci S1 and S2 is defined as \u201cindependently contributing\u201d to the disease ., Otherwise , the relationship between S1 and S2 is defined as \u201cepistasis . \u201d
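As a toy numeric illustration of this definition (under the multiplicative reading of "independently contributing" assumed in the reconstruction above, i.e. the penetrance table factorizes into per-subset terms), one can check whether a two-locus penetrance table is rank one:

```python
# Toy check of the epistasis definition on a two-locus penetrance table,
# assuming the multiplicative reading: f(g1, g2) = f1(g1) * f2(g2)
# for every genotype pair.
import numpy as np

# Rows: genotypes of S1; columns: genotypes of S2 (3 each).
f1 = np.array([0.01, 0.02, 0.04])
f2 = np.array([0.10, 0.20, 0.40])
independent = np.outer(f1, f2)   # factorizes: "independently contributing"

epistatic = independent.copy()
epistatic[2, 2] *= 5.0           # boost one joint genotype: epistasis

def factorizes(f: np.ndarray, tol: float = 1e-9) -> bool:
    # A nonnegative table factorizes into marginal terms iff it is rank 1.
    return np.linalg.matrix_rank(f, tol=tol) == 1

print(factorizes(independent), factorizes(epistatic))  # True False
```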
Particularly , the relationship between a set of loci and a null set is defined as epistasis ., A set of loci S1 is an \u201cepistatic module\u201d if and only if the relationship between S1 and its complement in S is \u201cindependently contributing\u201d for any given genotypes of S1 and its complement , and the relationship between any proper subset of S1 and its complement within S1 is epistasis ., Obviously , the set of disease susceptibility loci S consists of one or more epistatic modules ., We further verify that there is no overlap between any two epistatic modules , and epistatic modules are independent in both case and control populations ( Text S1 ) ., In genome-wide association studies where the SNPs are quite dense , it is common that a SNP may be in LD with other SNPs ., To account for this situation , we define a group of SNPs that are in LD with each other as an \u201cLD set\u201d and extend the above definition of epistatic modules by replacing individual loci with LD sets ., Note that with this extension , all properties of epistatic modules remain unchanged , as long as we treat an LD set as an individual locus in the previous derivation ., The mechanism by which a number of susceptibility SNPs contribute to the disease risk through epistatic modules is shown in Figure 1 ., The disease risk is determined by a number of epistatic modules , each of which contributes to the disease independently of the others ., An epistatic module is composed of one or more susceptibility SNPs , each of which may be in LD with some other SNPs , forming an LD set ., A disease susceptibility SNP , together with the SNPs that are in LD with it , relies on other disease susceptibility SNPs or LD sets in the same epistatic module to affect the disease susceptibility ., An epistatic module cannot be further divided into smaller epistatic modules; hence epistatic modules are the smallest genetic units that independently influence the disease risk ., Suppose that in a population-based case-control study , cases and controls are genotyped at L SNP markers ., The genotypes for cases and controls are represented as D and U , respectively , where $D_i$ and $U_j$ denote the genotypes of the i-th patient and the j-th unaffected individual at the L markers , respectively ., With the understanding of epistatic modules , the L markers can be partitioned into modules M0 , M1 , \u2026 , MS , with M0 containing markers unlinked to the disease and M1 to MS being epistatic modules ., Let $I_i$ ( $I_i \in \{ 0 , 1 , \dots , S \}$ ) be an indicator of the assignment of the i-th marker into one of the modules , and $I = ( I_1 , \dots , I_L )$ be a vector representing the assignments for all of the L markers ., Obviously , $I$ has $( S + 1 )^L$ possible values ., Let lm be the number of markers falling into the m-th module ( $0 \le m \le S$ ) ., We have that $\sum_{m=0}^{S} l_m = L$ ., Let Dm and Um be the genotypes of the sets of markers that belong to the m-th module in the case and the control populations , respectively ., With these concepts , the problem of finding markers that have epistatic interactions on the disease risk is equivalent to a problem of assigning the markers to epistatic modules ., Particularly , the assignment for a marker can be done by first calculating the probability of the observed data given a certain marker partition pattern and then obtaining the posterior probability that the marker belongs to each module using some sampling strategy ., For a clear presentation , we first derive a Bayesian model that assumes independence between SNPs
and then generalize the model to account for the existence of LD sets ., The module M0 consists of markers that are unlinked to the disease ., Therefore , markers in D0 ( the case population ) should follow the same distribution as those in U0 ( the control population ) ., Let $\theta_{ik}$ ( $k = 1 , 2 , 3$ ) be the probabilities of occurrence of the three possible genotypes for the i-th marker in M0 , and $\Theta_0$ be the vector that is composed of all probabilities of genotypes of the l0 markers belonging to M0 ., Let $n^D_{ik}$ and $n^U_{ik}$ be the numbers of individuals that have the k-th genotype at the i-th marker in the case and the control populations , respectively ., The joint distribution of the observed genotypes D0 and U0 , given the partition I and the parameters $\Theta_0$ , can then be written as

$$P ( D_0 , U_0 \mid I , \Theta_0 ) = \prod_{i \in M_0} \prod_{k=1}^{3} \theta_{ik}^{\, n^D_{ik} + n^U_{ik}} \quad ( 1 )$$

Following the Bayesian approach , we assume that every $\theta_i = ( \theta_{i1} , \theta_{i2} , \theta_{i3} )$ follows a Dirichlet distribution with hyper-parameters $\alpha_{ik}$ ., Integrating out $\Theta_0$ in Equation ( 1 ) , we obtain

$$P ( D_0 , U_0 \mid I ) = \prod_{i \in M_0} \frac{ \Gamma ( \sum_k \alpha_{ik} ) }{ \Gamma ( \sum_k ( \alpha_{ik} + n^D_{ik} + n^U_{ik} ) ) } \prod_{k=1}^{3} \frac{ \Gamma ( \alpha_{ik} + n^D_{ik} + n^U_{ik} ) }{ \Gamma ( \alpha_{ik} ) } \quad ( 2 )$$

where $\Gamma ( \cdot )$ is the Gamma function ., For an epistatic module Mm ( $1 \le m \le S$ ) containing lm SNPs , there are a total of $3^{l_m}$ combinatory genotypes ., Let $\theta^D_m$ and $\theta^U_m$ be the probabilities of occurrence of all combinatory genotypes in the case and the control populations , respectively ., Let $n^D_{mk}$ and $n^U_{mk}$ be the numbers of occurrences of the k-th combinatory genotype in the case and the control populations , respectively ., The distributions of Dm and Um , given the parameters $\theta^D_m$ and $\theta^U_m$ , can be written as $P ( D_m \mid \theta^D_m ) = \prod_k ( \theta^D_{mk} )^{\, n^D_{mk}}$ and $P ( U_m \mid \theta^U_m ) = \prod_k ( \theta^U_{mk} )^{\, n^U_{mk}}$ , respectively ., Assuming that $\theta^D_m$ and $\theta^U_m$ follow Dirichlet prior distributions with hyper-parameters $\beta^D_{mk}$ and $\beta^U_{mk}$ , respectively , we integrate them out and obtain

$$P ( D_m \mid I ) = \frac{ \Gamma ( \sum_k \beta^D_{mk} ) }{ \Gamma ( \sum_k ( \beta^D_{mk} + n^D_{mk} ) ) } \prod_{k=1}^{3^{l_m}} \frac{ \Gamma ( \beta^D_{mk} + n^D_{mk} ) }{ \Gamma ( \beta^D_{mk} ) } \quad ( 3 )$$

and

$$P ( U_m \mid I ) = \frac{ \Gamma ( \sum_k \beta^U_{mk} ) }{ \Gamma ( \sum_k ( \beta^U_{mk} + n^U_{mk} ) ) } \prod_{k=1}^{3^{l_m}} \frac{ \Gamma ( \beta^U_{mk} + n^U_{mk} ) }{ \Gamma ( \beta^U_{mk} ) } \quad ( 4 )$$

As the distributions of Dm and Um are independent , we have $P ( D_m , U_m \mid I ) = P ( D_m \mid I ) \, P ( U_m \mid I )$ ., Putting the above likelihood functions together , we have the posterior distribution of I , given the observed genotypes , as $P ( I \mid D , U ) \propto P ( I ) \, P ( D_0 , U_0 \mid I ) \prod_{m=1}^{S} P ( D_m \mid I ) \, P ( U_m \mid I )$ ., The prior distribution $P ( I )$ needs to be determined in advance ., For simplicity , we assume that the partitions of the loci are independent , and for each locus , without prior knowledge , the probability that it belongs to the m-th module is $p_m$ ( $0 \le m \le S$ and $\sum_m p_m = 1$ ) ., With these two assumptions , we have $P ( I ) = \prod_{i=1}^{L} p_{I_i}$ ., Note that when prior knowledge that can be used to infer the relationship between a locus and the disease risk is available , the corresponding $p_m$ could be updated accordingly ., We assume that all Dirichlet hyper-parameters are equal to 0 . 5 unless otherwise specified .
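Numerically, these marginal likelihoods are best computed with log-Gamma functions; a minimal sketch, assuming a symmetric Dirichlet prior of 0.5 as above (the counts are hypothetical):

```python
# Sketch of the Dirichlet-multinomial marginal likelihood obtained when
# the genotype probabilities are integrated out; counts are hypothetical.
import numpy as np
from scipy.special import gammaln

def log_marginal(counts: np.ndarray, alpha: float = 0.5) -> float:
    # log P(counts) under a symmetric Dirichlet(alpha) prior on the
    # genotype categories, with theta integrated out analytically.
    a = np.full_like(counts, alpha, dtype=float)
    return (gammaln(a.sum()) - gammaln(a.sum() + counts.sum())
            + np.sum(gammaln(a + counts) - gammaln(a)))

# One unassociated marker: pooled case+control genotype counts (3 cells).
print(log_marginal(np.array([50, 30, 20])))
# One two-SNP epistatic module in cases: 3**2 = 9 combinatory genotypes.
print(log_marginal(np.array([20, 5, 1, 8, 9, 2, 3, 0, 2])))
```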
We use a first-order Markov model to account for the situation in which a set of SNPs are in LD with a disease susceptibility SNP in an epistatic module , say , an LD set ., For a clear presentation , we refer to the disease susceptibility SNP as the core SNP and SNPs in LD with it as peripheral SNPs ., Given a core SNP , the likelihood of the genotypes of a peripheral SNP in the case population is $\prod_{j=1}^{3} \prod_{k=1}^{3} \theta_{jk}^{\, n_{jk}}$ , where $\theta_{jk}$ is the probability that the peripheral SNP has the k-th genotype conditional on the core SNP having the j-th genotype , and $n_{jk}$ is the number of cases for which the core and peripheral SNPs have the j-th and k-th genotypes , respectively ., Assuming Dirichlet priors with hyper-parameters $\gamma_{jk}$ for $\theta_{jk}$ , we integrate out $\theta_{jk}$ and obtain the posterior distribution of the genotypes of the peripheral SNP in the case population conditional on the core SNP ., Suppose that in a module with lm SNPs , there are $C_m$ core SNPs and $l_m - C_m$ peripheral SNPs ., Let $P_c$ be the set of peripheral SNPs that are in LD with the c-th core SNP ( $1 \le c \le C_m$ ) ., We have that the intersection of any two of these sets is empty , while the union of all these sets contains all peripheral SNPs ., The posterior distribution of the genotypes of the set of peripheral SNPs in the case population conditional on the c-th core SNP is given by Equation ( 5 ) , where $\gamma_{ijk}$ are Dirichlet hyper-parameters , and $n_{ijk}$ is the number of cases for which the c-th core SNP has the j-th genotype and the i-th peripheral SNP has the k-th genotype ., Putting Equations ( 3 ) and ( 5 ) together , the likelihood of the genotypes in the case population is the product of the core SNP term , $p_{core}$ , given by Equation ( 3 ) , and the peripheral SNP terms ., Similarly , by replacing the case population with the control population , the likelihood of the genotypes in the control population can be obtained , where the core SNP term is given by Equation ( 4 ) and the peripheral SNP term by Equation ( 6 ) , in which the counts are the numbers of controls for which the c-th core SNP has the j-th genotype and the i-th peripheral SNP has the k-th genotype ., Finally , the likelihood of observing both the case and the control populations is given by the product of the two , Equation ( 7 ) ., We also assume that all Dirichlet hyper-parameters are equal to 0 . 5 unless otherwise specified .
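Under this first-order Markov model the peripheral-SNP contribution factorizes over the three genotype rows of the core SNP; a sketch reusing the Dirichlet-multinomial helper from the previous block (the 3x3 count table is hypothetical):

```python
# The peripheral SNP's contribution is one Dirichlet-multinomial term per
# genotype row of the core SNP. Counts are hypothetical; log_marginal is
# the same helper as in the previous sketch.
import numpy as np
from scipy.special import gammaln

def log_marginal(counts, alpha=0.5):
    a = np.full_like(counts, alpha, dtype=float)
    return (gammaln(a.sum()) - gammaln(a.sum() + counts.sum())
            + np.sum(gammaln(a + counts) - gammaln(a)))

# Rows: core SNP genotype j; columns: peripheral SNP genotype k.
n_jk = np.array([[30, 5, 1],
                 [10, 20, 4],
                 [2, 6, 12]])
log_lik = sum(log_marginal(row) for row in n_jk)
print(log_lik)
```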
The Bayesian marker partition model described above assumes independence between SNPs that are unlinked to the disease ., Nevertheless the existence of LD may make the distributions of genotypes of these SNPs dependent ., In the model discussed above , there is no specific module for these linked disease-unassociated SNPs ., As a result , these SNPs could be partitioned into some epistatic modules and negatively affect the correct partition of these modules ., We therefore propose the use of LD modules to account for the existence of LD between disease-unassociated SNPs ., Although the distributions of genotypes for markers in LD are dependent in both the case and the control populations , as those for markers in epistatic modules are , the underlying principles of LD markers and epistatic modules are quite different ., For LD markers , the distributions of genotypes are almost the same for the case and the control populations , while for epistatic modules the distributions of genotypes are different between the case and the control populations ., In order to incorporate this understanding into the Bayesian partition model , we assume that other than the S epistatic modules , there further exist T LD modules , labeled $M_{S+1} , \dots , M_{S+T}$ , in each of which loci are in strong LD with each other ., We also use a first-order Markov model to account for LD between the SNPs in an LD module ., For an LD module $M_t$ ( $S < t \le S + T$ ) , we assume that there exists a core SNP c , and the distributions of genotypes of all other ( peripheral ) SNPs in this LD module depend on the genotype of this core SNP ., Let $P_c$ be the set of the peripheral SNPs that are in LD with the core SNP ., Using similar reasoning as for the epistatic modules , the core SNP term is derived in a similar way as Equation ( 2 ) , where $n^D_k$ and $n^U_k$ are the numbers of individuals that have the k-th genotype at the core SNP in the case and the control populations , respectively ., The peripheral SNP term is derived in a similar way as Equation ( 6 ) , where $\gamma_{ijk}$ are Dirichlet hyper-parameters , and $n^D_{ijk}$ and $n^U_{ijk}$ are the numbers of individuals for which the core SNP has the j-th genotype and the i-th peripheral SNP has the k-th genotype in the case and the control populations , respectively ., We also assume that all hyper-parameters are equal to 0 . 5 unless otherwise specified ., With LD modules being incorporated , the posterior distribution for the generalized indicator vector under the generalized Bayesian model follows analogously ., The posterior distribution of the partition I given by the above Bayesian partition model suggests the following Gibbs sampler , Equation ( 8 ) : each indicator $I_i$ is drawn from its conditional distribution given all other indicators , $P ( I_i = m \mid I_{-i} , D , U ) \propto p_m \, P ( D , U \mid I_i = m , I_{-i} )$ ., In order to calculate this sampler in an efficient way , we compute the conditional likelihood for each candidate module m and then normalize over m ., With this sampler , a Gibbs sampling algorithm can be performed as follows .
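A highly simplified sketch of one such Gibbs update; the likelihood function below is a stand-in placeholder, not the paper's Equation (8), which conditions on the current module contents and structures:

```python
# Simplified sketch of the Gibbs update for one marker's module indicator.
# module_log_lik stands in for the Dirichlet-multinomial likelihood; the
# real computation depends on the current module assignments and structures.
import numpy as np

rng = np.random.default_rng(0)

def gibbs_update(i, indicators, n_modules, module_log_lik):
    # Score assigning marker i to each module m, holding all other
    # indicators fixed, then sample from the normalized distribution.
    logp = np.array([module_log_lik(i, m, indicators)
                     for m in range(n_modules)])
    p = np.exp(logp - logp.max())  # stabilize before normalizing
    p /= p.sum()
    indicators[i] = rng.choice(n_modules, p=p)

# Toy run: 10 markers, 3 modules, a dummy likelihood favoring module 0.
dummy = lambda i, m, ind: 0.0 if m == 0 else -1.0
ind = np.zeros(10, dtype=int)
for sweep in range(5):
    for i in range(len(ind)):
        gibbs_update(i, ind, 3, dummy)
print(ind)
```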
In order to calculate the Gibbs sampler , i . e . , Equation ( 8 ) , we need to partition the SNPs in epistatic and LD modules into core SNPs and peripheral SNPs , that is , to obtain structures of the modules ., Besides , the numbers of modules ( S and T ) are also unknown ., We will address these two questions in the following two sections ., Given a set of SNPs in an epistatic module , we need to partition the SNPs into non-overlapping LD sets ., For each LD set , we need to assign a core SNP ., The partition of LD sets , together with the assignment of a core SNP for each LD set , is referred to as the structure of an epistatic module ., A na\u00efve method for obtaining the structure of a module is to exhaustively search all possible structures of the module and then select the one with the maximum likelihood ., Specifically , for an epistatic module Mm ( $1 \le m \le S$ ) containing lm SNPs , there are $2^{l_m} - 1$ ways of selecting the core SNPs , corresponding to the different ways of selecting non-empty subsets from the lm SNPs ., Furthermore , in the case that the number of core SNPs is c , the number of ways of associating the remaining $l_m - c$ peripheral SNPs with the core SNPs is $c^{\, l_m - c}$ , since each peripheral SNP can be assigned to one of the c core SNPs , and the assignments are mutually independent ., Obviously , the number of all possible structures of an epistatic module grows rapidly , making the exhaustive search strategy practical only when the module contains a small number of SNPs ., We therefore propose the following sampling approach to search for a reasonable module structure when the exhaustive search strategy is hard to apply ., For an epistatic module with l SNPs , of which c are core SNPs and the rest are peripheral ones , we index the core SNPs by numbers from 1 to c , and we index the peripheral SNPs by numbers from c+1 to l ., We further introduce an indicator vector representing the status of all SNPs in the module: the indicator of the i-th SNP marks it either as a core SNP or as a peripheral SNP of the k-th core SNP ., Consider a peripheral SNP indexed by i ., The posterior distribution of its indicator , given the rest of the indicators and the observations , is proportional to a likelihood function that can be calculated in a similar way as Equation ( 7 ) ., Assuming equal prior probabilities for all possible structures of the module , the above posterior distribution suggests the Gibbs sampler of Equation ( 9 ) for the peripheral SNP ., Consider a core SNP indexed by i ., There are two situations: ( 1 ) the core SNP has some peripheral SNPs , and ( 2 ) the core SNP has no peripheral SNPs ., In the former case , we need to fix the indicator ., In the latter case , a Gibbs sampler can be obtained as Equation ( 10 ) , where we exclude the situation in which the core SNP becomes its own peripheral SNP ., The above Gibbs samplers suggest a natural sampling strategy ., To further reduce the computational burden , we propose the following forward and backward strategies that are very economical in terms of computation time ., In the forward strategy , we consider three situations of adding a SNP into an existing epistatic module ., First , the SNP is itself a core SNP , and there are no other SNPs in LD with it ., Second , the SNP is in LD with an existing core SNP , and this core SNP remains unchanged ., Third , the SNP is in LD with an existing core SNP , but this core SNP needs to be updated to be the added SNP ., To deal with the first case , we try to create a new LD set to include the new SNP as the core SNP in constant time complexity .
To deal with the second case , we try to add the new SNP as a peripheral SNP to every existing LD set , in linear time complexity , proportional to the number of existing LD sets ., To deal with the third case , we try to add the new SNP as the core SNP and downgrade the previous core SNP to a peripheral SNP for every existing LD set , also in linear time complexity , proportional to the number of existing LD sets ., Finally , we compare the likelihood values of the structures resulting from the above efforts and select the structure with the highest likelihood as the new module structure ( a code sketch of this forward move is given at the end of this section ) ., In the backward strategy , we also consider three situations when removing a SNP from an existing epistatic module ., First , the SNP is in LD with a core SNP ., Second , the SNP is itself a core SNP with no other SNPs in LD with it ., Third , the SNP is a core SNP with some other SNPs in LD with it ., The first and second cases can be dealt with in constant time complexity ., In the third case , the new core SNP can be found by exhaustive search in linear time complexity , proportional to the number of SNPs in LD with the removed SNP ., By comparing the likelihood values of these three cases , we can obtain a new structure for the module ., The exhaustive search strategy can provide optimal module structures , but its computation time is acceptable only when a module contains a small number of SNPs ., The sampling strategy takes uncertainty in the partitioning process into consideration and can alleviate the computational burden when a module contains a large number of SNPs ., The forward and the backward strategies can greatly reduce the computational burden and offer sub-optimal module structures ., To achieve a reasonable trade-off between the computational burden and the optimality of module structures , we also propose a hybrid strategy in which we mainly perform the forward and the backward strategies and periodically apply the exhaustive search or the sampling methods ., In our experience , the hybrid strategy is much faster than the exhaustive search and the sampling methods and yields similar results to the other two methods in most cases ., Therefore , we suggest the use of the hybrid strategy ., As for epistatic modules , we also need to assign a core SNP for each LD module ., However , obtaining structures for LD modules is much simpler , because an LD module has only one core SNP , and thus the number of possible structures for an LD module is equal to the number of SNPs in the module ., In the exhaustive search strategy , we can search for the core SNP in linear time complexity , proportional to the number of SNPs in the module ., In the forward strategy , we consider the situation of adding a SNP into an LD module , and determine the structure by comparing the likelihood values of two cases: ( 1 ) the added SNP is a peripheral SNP , and ( 2 ) the added SNP is the core SNP ., This can be done in constant time complexity ., In the backward strategy , we consider the situation of removing a SNP from the module ., If the removed SNP is not the core SNP , we simply remove it ., In the case that the deleted SNP is the core SNP , we select a new core SNP from the previous peripheral SNPs by exhaustive search , which can be done in linear time complexity , proportional to the number of SNPs remaining in the module ., Since the exhaustive search strategy is straightforward and already computationally economical ( linear complexity ) , we simply apply the exhaustive search strategy to obtain structures for LD modules .
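The forward move referenced above can be sketched as follows; the dictionary layout of a module and the scoring function log_lik are hypothetical stand-ins, and the sketch only illustrates the constant-time and linear-time enumeration of the three cases.

```python
def forward_add_snp(module, new_snp, log_lik):
    """Try the three insertion moves described above and keep the best one.

    `module` maps each core SNP to the list of its peripheral SNPs, and
    `log_lik` scores a whole module structure; both are hypothetical
    stand-ins for the bookkeeping used in the actual implementation.
    """
    candidates = []
    # Case 1: the new SNP opens its own LD set as a core SNP (constant time).
    cand = {**module, new_snp: []}
    candidates.append((log_lik(cand), cand))
    for core, peripherals in module.items():
        # Case 2: attach the new SNP as a peripheral of an existing core SNP.
        cand = {**module, core: peripherals + [new_snp]}
        candidates.append((log_lik(cand), cand))
        # Case 3: the new SNP becomes the core; the old core is downgraded.
        cand = {k: v for k, v in module.items() if k != core}
        cand[new_snp] = peripherals + [core]
        candidates.append((log_lik(cand), cand))
    # The loop is linear in the number of existing LD sets, as stated above.
    return max(candidates, key=lambda c: c[0])[1]
```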
With the module structures obtained , we are now able to calculate the Gibbs sampler defined by Equation ( 8 ) ., In the Gibbs sampling strategy for marker partitioning , we assume that the numbers of epistatic modules ( S ) and LD modules ( T ) are already known ., Nevertheless , the values of S and T are usually unknown in real applications ., To address the uncertainty in S and T , we adopt a reversible jump Markov chain Monte Carlo ( RJ-MCMC ) procedure 26 ., After the RJ-MCMC sampling procedure is repeated a sufficient number of times , the Markov chains for S and T reach their stationary distributions ., The RJ-MCMC procedure samples the posterior distributions of the numbers of epistatic and LD modules , while the Gibbs sampling algorithm gives us the posterior probability that a locus belongs to a module and enables us to sample the indicators from their conditional distributions in a sequential way ., Starting from an initial ( random ) assignment of the indicators , the Gibbs sampling procedure simulates a Markov chain whose stationary distribution follows the distribution of the indicator vector ., When the Markov chain reaches its stationary distribution after a number of burn-in iterations , we record candidate epistatic modules and their posterior probabilities ., The posterior probability of an epistatic module represents the strength of the evidence that the module is associated with the disease and can thus be directly used to make statistical inference ., For example , biologists can select epistatic modules with top posterior probabilities for further functional analysis or biological experiments ., Nevertheless , the statistical significance of epistatic modules might be more desirable to geneticists ., We therefore provide in the following sections a permutation test method and a \u201cselection-testing-correction\u201d approach for assessing the statistical significance of candidate epistatic modules ., The URL for the software presented herein is as follows: http:\/\/bioinfo.au.tsinghua.edu.cn\/epiMODE ., In order to verify the capability of the proposed approach in the detection of epistatic interactions in real genome-wide association studies , we apply epiMODE to an Age-related Macular Degeneration ( AMD ) data set 20 , which contains 103 , 611 SNPs genotyped in 96 cases and 50 controls ., The authors of the original paper reported that two SNPs , rs380390 and rs1329428 , were believed to be significantly associated with AMD ., Our method successfully identifies both of these SNPs through the identification of an epistatic module that includes them ( two more SNPs are also identified in the same epistatic module , and the posterior probability of the module is above 0 . 9 , see Figures 6 and 7 ) ., The nominal p-values for rs380390 and rs1329428 are 1 . 75\u00d710\u22126 and 3 . 64\u00d710\u22126 , respectively , according to the Chi-squared test with two degrees of freedom ., Our method also identifies two novel SNPs , rs1394608 and rs3743175 , by detecting an epistatic module that includes both loci ( two more SNPs in LD with them are also identified in the same epistatic module , and the posterior probability of the module is greater than 0 . 9 , see Figures 6 and 7 ) ., The nominal p-values for these two SNPs are 8 . 81\u00d710\u22125 and 1 . 
76\u00d710\u22123 , respectively , according to the Chi-squared test with two degrees of freedom ., Note that the p-value for the combination of rs1394608 and rs3743175 is 7 . 39\u00d710\u22127 , while the p-value for the combination of rs380390 and rs1329428 is only 1 . 84\u00d710\u22125 , according to the Chi-squared test with eight degrees of freedom ., The distributions of the combination of rs1394608 and rs3743175 in cases and controls are shown in Figures 8A and 8B , respectively ., According to Chi-squared tests with four degrees of freedom , these two SNPs are independent in controls ( p-value = 7 . 50\u00d710\u22121 ) and dependent in cases ( p-value = 3 . 27\u00d710\u22123 ) ., We also infer the genotype frequencies of the combination of these two SNPs according to their distributions in controls","headings":"Introduction, Materials and Methods, Results, Discussion","abstract":"The detection of epistatic interactive effects of multiple genetic variants on the susceptibility of human complex diseases is a great challenge in genome-wide association studies ( GWAS ) ., Although methods have been proposed to identify such interactions , the lack of an explicit definition of epistatic effects , together with computational difficulties , makes the development of new methods indispensable ., In this paper , we introduce epistatic modules to describe epistatic interactive effects of multiple loci on diseases ., On the basis of this notion , we put forward a Bayesian marker partition model to explain observed case-control data , and we develop a Gibbs sampling strategy to facilitate the detection of epistatic modules ., Comparisons of the proposed approach with three existing methods on seven simulated disease models demonstrate the superior performance of our approach ., When applied to a genome-wide case-control data set for Age-related Macular Degeneration ( AMD ) , the proposed approach successfully identifies two known susceptible loci and suggests that a combination of two other loci\u2014one in the gene SGCD and the other in SCAPER\u2014is associated with the disease ., Further functional analysis supports the speculation that the interaction of these two genetic variants may be responsible for the susceptibility of AMD ., When applied to a genome-wide case-control data set for Parkinsons disease , the proposed method identifies seven suspicious loci that may contribute independently to the disease .","summary":"Although genome-wide association studies ( GWAS ) have been quite popular due to recent advances in low-cost genotyping techniques , most of the reported studies only analyze single-locus effects because traditional multi-locus methods are not computationally practical in the detection of epistatic interactive effects of multiple loci ., Here , on the basis of a rigorous definition of epistatic modules that describe interactive effects of multiple loci , we take advantage of a Bayesian model with a properly designed Gibbs sampling strategy to facilitate the detection of such modules ., We confirm via extensive simulation studies that the proposed method , named epiMODE , is not only feasible in detecting multi-locus effects but also more powerful than three representative methods on seven disease models ., We apply the proposed method to an Age-related Macular Degeneration ( AMD ) data set and discover that a combination of two loci\u2014one in the gene SGCD and the other in SCAPER\u2014might be associated with AMD ., Considering its advantages , we 
suggest that the proposed method be applied to more GWAS data for the detection of multi-locus interactive effects .","keywords":"computational biology\/population genetics, genetics and genomics\/complex traits, genetics and genomics\/disease models, genetics and genomics\/genetics of disease, mathematics\/statistics, genetics and genomics\/population genetics","toc":null} +{"Unnamed: 0":1638,"id":"journal.pcbi.1006623","year":2018,"title":"Rosetta FunFolDes \u2013 A general framework for the computational design of functional proteins","sections":"Proteins are one of the main functional building blocks of the cell ., The ability to create novel proteins outside of the natural realm has opened the path towards innovative achievements , such as new pathways 1 , cellular functions 2 , and therapeutic leads 3\u20135 ., Computational protein design is the rational and structure-based approach to solve the inverse folding problem , i . e . the search for the best putative sequence capable of fitting and stabilizing a protein\u2019s three-dimensional conformation 6 ., As such , a great deal of effort has been placed into understanding the rules of protein folding and stability 7 , 8 and its relation to the appropriate sequence space 9 ., Computational protein design approaches focus on exploring two interconnected landscapes related to sampling of the conformational and sequence spaces ., Fixed backbone approaches use static protein backbone conformations , which greatly constrain the sequence space explored by the computational algorithm 9 ., Following the same principles of naturally occurring homologs , which often exhibit confined structural diversity , flexible backbone approaches enhance the sequence diversity , adding the challenge of identifying energetically favorable sequence variants that are correctly coupled to structural perturbations 10 ., Another variation for computational design approaches is de novo design , in which protein backbones are assembled in silico , followed by sequence optimization to fold into a pre-defined three-dimensional conformation without being constrained by previous sequence information 11\u201313 ., This approach tests our understanding of the rules governing the structure of different protein folds ., The failures and successes of this approach confirm and correct the principles used for the protein design process 7 , 8 ., One of the main aims of computational protein design is the rational design of functional proteins capable of carrying existing or novel functions into new structural contexts 14 ., Broadly , there are three main approaches for the design of functional proteins: redesigning of pre-existing functions , grafting of functional sites onto heterologous proteins , and designing of novel functions not found in the protein repertoire ., The redesign of a pre-existing function to alter its catalytic activity 15 or improve its binding target recognition 16 can be considered the most conservative approach ., It is typically accomplished by point mutations around the functional area of interest and tends to have little impact on global structure of the designed protein ., On the other extreme , the design of fully novel functions has most noticeably been achieved by applying chemical principles that tested our fundamental knowledge of enzyme catalysis 17 , 18 ., Between these two approaches resides protein grafting ., This method aims to repurpose natural folds as carriers for exogenous known functions ., It relies on the strong 
structure-function relationship present in proteins to endow a heterologous protein with an exogenous function by means of transferring a structural motif that performs such a function 3\u20135 , 19\u201322 ., At the biochemical level , grafting approaches have been used to design high-affinity protein-protein interactions , by stabilizing binding motifs ( e . g . flexible peptides ) and thereby removing the entropic cost of binding 21 , and also by extending the binding interfaces to allow for additional energetically favorable interactions ., The extended interfaces also provide opportunities to tune the specificity of the designed proteins 21 ., On the practical side , some of the most notable applications of protein grafting thus far have been the design of novel viral inhibitors 21 , 23 and of epitope-focused immunogens for vaccine design 3\u20135 ., Following this strategy , one can easily imagine applications to functionalize protein-based biomaterials 24 or to design novel biosensors 25 ., The importance of robust grafting approaches to functionalize heterologous proteins is related to the fact that the proteins that naturally perform these functions may lack the best biochemical properties in terms of size , affinity , solubility , immunogenicity and other application-specific factors ., Thus far , the most successful grafting approaches are highly dependent on structural similarity between the functional motif and the insertion region in the protein scaffold ., When the functional motif and the insertion region are identical in backbone conformation , the motif transfer can be performed by side-chain grafting , i . e . mutating the target residues into those of the functional motif 3 , 5 ., In much more challenging scenarios , full backbone grafting may be used in conjunction with directed evolution 19 ., Nevertheless , motif transfer is limited to very similar structural regions , which greatly constrains the subset of putative scaffolds that can be used for this purpose ., The lack of compatibility between the putative scaffolds and the functional sites has been referred to as a \u201cdesignability\u201d problem 26 , 27 , i . e . the likelihood that a protein backbone can host and stabilize a structural motif ., The designability problem becomes more obvious as the structural complexity of the functional motif grows , drastically limiting the types of functional motifs that can be transferred ., Previously , we demonstrated the possibility of expanding protein grafting to scaffolds with segments that have low structural similarity to the motif ., To accomplish that task , we developed the prototype protocol Rosetta Fold From Loops ( FFL ) 4 , 21 ., The distinctive feature of our protocol is the coupling of the folding and design stages to bias the sampling towards structural conformations and sequences that stabilize the grafted functional motif ., In the past , FFL was used to obtain designs that were functional ( synthetic immunogens 4 and protein-based inhibitors 21 ) and for which the experimentally determined crystal structures closely resembled the computational models ., However , the structures of the functional sites were structurally very close to the insertion segments of the hosting scaffolds ., The architecture of FFL was intrinsically limited in the types of constraints available and restricted to the grafting of linear , single-segment functional motifs ., Here , we present a complete re-implementation of FFL with enhanced functionalities , a simplified user interface and complete
integration with other Rosetta protocols ., We have called this new , more generalist protocol Rosetta Functional Folding and Design ( FunFolDes ) ., We benchmarked FunFolDes extensively , unveiling important technical details to better exploit and expand the capabilities of the protocol ., Furthermore , we challenged FunFolDes with two design tasks of transplanting viral epitopes to heterologous scaffolds , and by doing so probed the applicability of the protocol ., The design tasks were centered on using distant structural templates as hosting scaffolds and on functionalizing a de novo designed protein \u2014 FunFolDes succeeded in both challenges ., These results are encouraging and provide a solid basis for the broad applicability of FunFolDes as a strategy for the robust computational design of functionalized proteins ., The original prototype of the Rosetta Fold From Loops ( FFL ) protocol was successfully used to transplant the structural motif of the Respiratory Syncytial Virus protein F ( RSVF ) site II neutralizing epitope into a protein scaffold in the context of a vaccine design application 4 ., FFL enabled the insertion and conformational stabilization of the structural motif into a defined protein topology by using Rosetta\u2019s fragment insertion machinery to fold an extended polypeptide chain into the desired topology 28 , which was then sequence-designed ., Information from the scaffold structure was used to guide the folding , ensuring an overall similar topology while allowing for the conformational changes needed to stabilize the inserted structural motif ., The final implementation of FunFolDes is schematically represented in Fig 1 , and fully described in Materials and Methods ., Our upgrades to FFL focused on three main aims: I ) improve the applicability of the system to handle more complex structural motifs ( i . e . 
multiple discontinuous backbone segments ) ; II ) enhance the design of functional proteins by including binding partners in the simulations; III ) increase the control over each stage of the simulation , improving the usability for non-experts ., These three aims were achieved through the implementation of five core technical improvements described below ., Insertion of multi-segment functional sites ., Most functional sites in proteins entail , at the structural level , multiple discontinuous segments , as is the case for protein-protein interfaces , enzyme active-sites , and others 29 , 30 ., FunFolDes handles functional sites with any number of discontinuous segments , ensuring the native orientation of each of the segments ., These new features expand the types of structural motifs that can be handled by FunFolDes , widening the applicability of the computational protocol ., Structural folding and sequence design in the presence of a binding target ., Many of the functional roles of proteins in cells require physical interaction with other proteins , nucleic acids , or metabolites 31 ., The inclusion of the binder has two main advantages: I ) the explicit representation of functional constraints biases the designed protein towards a functional sequence space , resolving putative clashes derived from the template scaffold; II ) it facilitates the design of new additional contact residues ( outside of the motif ) that may afford enhanced affinity and\/or specificity ., Region-specific structural constraints ., FunFolDes can collect anything from full-template to region-specific constraints , allowing greater levels of flexibility in areas of the scaffold that can be critical for function ( e . g . segments close to the interface of a target protein ) ., The distance constraints used in the protocol are soft constraints , with score penalties applied when the deviation exceeds the defined standard deviation at the upper and lower bounds ., Furthermore , FunFolDes is no longer limited to atom-pair distance constraints 32 and can incorporate other types of kinematic constraints , such as angle and dihedral constraints 33 , which have been used extensively to design beta-rich topologies 8 ., On-the-fly fragment picking ., Classically , fragment libraries are generated through sequence-based predictions of secondary structure and dihedral angles that rely on external computational methods 34 ., We leveraged internal functionalities in Rosetta so that FunFolDes can assemble fragment sets on-the-fly ., Using this feature , we can assemble fragment sets based on the structure of the input scaffold ., Sequence-based fragments remain an option; however , this feature removes the need for secondary applications , boosting the usability of FunFolDes ., Lastly , the on-the-fly fragment picking enables the development of protocols with mutable fragment sets along the procedure ., Compatibility with other Rosetta modules ., Finally , FunFolDes is compatible with Rosetta\u2019s modular xml-interface\u2014Rosetta Scripts ( RS ) 35 ., This enables customization of the FunFolDes protocol and , more importantly , cross-talk with other protocols and filters available through the RS interface ., We devised two benchmark scenarios to test the performance of FunFolDes ., The first aimed to capture conformational changes in small protein domains caused by sequence insertions or deletions , and the second assessed the protocol\u2019s performance in folding and designing a binder in the presence of the binding target ., Typical protein design
benchmarks are assembled by stripping native side chains from known protein structures and evaluating the sequence recovery of the design algorithm 9 ., The main design aim of FunFolDes is to insert structural motifs into protein folds while allowing flexibility across the overall structure ., This conformational freedom allows the full protein scaffold to adapt and stabilize the functional motif\u2019s conformation ., This is a main distinctive point from other approaches to design functional proteins that rely on a mostly rigid scaffold 2 , 3 , 11 , 19 , 30 , 36 ., For many modeling problems , such as protein structure prediction , protein-protein and protein-ligand docking , and protein design , standardized benchmark datasets are available 37 or easily accessible ., Devising a benchmark for designed proteins with propagating conformational changes across the structure is challenging , as we are assessing both structural accuracy as well as sequence recovery of the protocol ., To address this problem , we analyzed structural domains found repeatedly in natural proteins and clustered them according to their definition in the CATH database 38 ., As a result , we selected a set of 14 benchmark targets labeled T01 through T14 ( Fig 2A ) ., A detailed description on the construction of the benchmark can be found in the Materials and Methods section ., Briefly , for the benchmark we selected proteins with less than 100 residues , where each test case was composed of two proteins of the same CATH domain cluster ., One of the proteins is the template , and serves as a structural representative of the CATH domain ., The second protein , dubbed target , contains structural insertions or deletions ( motif region ) , to which a structural change in a different segment of the protein could be attributed ( query region ) ., The motif and query regions for all the targets are shown in Fig 2A and quantified by the percentage of overall secondary structure in S1A Fig . 
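Both structural-recovery metrics used in the evaluation that follows reduce to a simple computation; below is a self-contained Python sketch with toy coordinates, where the index range of the query region is hypothetical.

```python
import numpy as np

def rmsd(a, b):
    """RMSD between two (N, 3) coordinate arrays, assumed to be already
    superimposed and residue-matched."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

rng = np.random.default_rng(0)
target = rng.normal(size=(90, 3))                      # toy CA trace of the target
decoy = target + rng.normal(scale=0.5, size=(90, 3))   # toy designed decoy
query = slice(40, 60)                                  # hypothetical query region

print(rmsd(decoy, target))                 # (a) global RMSD of the full decoy
print(rmsd(decoy[query], target[query]))   # (b) local RMSD of the query region
```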
To a great extent , these structural changes due to natural sequence insertions and deletions are analogous to those occurring in the design scenarios for which FunFolDes was conceived ., Using FunFolDes , we folded and designed the target proteins while maintaining the motif segment structurally fixed , mimicking a structural motif insertion ., Distance constraints between residues were extracted from the template in the regions of shared structural elements of the template and the target , and were used to guide the folding simulations ., To check whether FunFolDes enhances sequence and structural sampling , we compared the simulations to constrained ab initio ( cst-ab initio ) simulations 33 ., As Rosetta conformational sampling is highly dependent upon the fragment set 39 , in this benchmark we also tested the influence of structure- and sequence-based fragments ., The performance of the two protocols was analyzed regarding global and local recovery of both structure and sequence ., Structural recovery was assessed through two main metrics:, ( a ) global RMSD of the full decoys against the target and, ( b ) local RMSD of the query region ., When evaluating the distributions for global RMSD in the designed ensembles , FunFolDes outperformed cst-ab initio by consistently producing populations of decoys with lower mean RMSD ( mostly found below 5 \u00c5 ) , a result observed in all 14 targets ( Fig 2B , S1B Fig ) ., This result is especially reassuring considering that FunFolDes simulations contain more structural information of the target topology than the cst-ab initio simulations ., The local RMSDs of the query unconstrained regions presented less clear results across the benchmark ( S1B Fig ) ., In 13 targets , FunFolDes outperformed cst-ab initio , showing lower mean RMSDs but in some targets with minor differences ., When comparing fragment sets ( structure- vs sequence-based ) , both achieved similar mean RMSDs in the decoy populations; nonetheless , the structure-based fragments more often reached the lowest RMSDs for overall and query RMSDs ( Fig 2B , S1 Fig ) ., This is consistent with what would be expected from the structural information content within each fragment set ., When paired with the technical simplicity of use , time-saving and enhanced sampling of the desired topology , the structure-based fragments are an added value for FunFolDes ., We also quantified sequence recovery , both in terms of sequence identity and similarity according to the BLOSUM62 matrix 40 ( Fig 3A ) ., In all targets , FunFolDes showed superior recoveries than cst-ab initio , and at the levels of other design protocols using Rosetta 10 ( Fig 3A ) ., This type of metrics has been shown to be highly dependent on the exact backbone conformation used as input 9 , 10 ., Given that FunFolDes is exploring larger conformational spaces , as a proxy for the quality of the sequences generated , we used the target\u2019s Hidden Markov Models ( HMM ) 41 and quantified the designed sequences that were identified as part of the target\u2019s CATH superfamily according to its HMM definition ( Fig 3B ) ., FunFolDes decoy populations systematically outperformed those from cst-ab initio ( Fig 3B ) ., The performance of the two fragment sets shows no significant differences ., In summary , this benchmark highlights the ability of FunFolDes to generate close-to-native scaffold proteins to stabilize inserted structural motifs ., FunFolDes aims to refit protein scaffolds towards the structural requirements of a 
functional motif ., It is thus critical , to explore within certain topological boundaries , structural variations around the original templates ., This benchmark points to several variables in the protocol that resulted in enhanced structural and sequence sampling ., The computational design of proteins that can bind with high affinity and specificity to targets of interest remains a largely unsolved problem 42 ., Within the FunFolDes conceptual approach of coupling folding with sequence design , we sought to add the structure of the binding target ( Fig 1 ) to attempt to bias sampling towards functional structural and sequence spaces ., Previously , we used FFL to design a new binder ( BINDI ) to BHRF1 ( Fig 4A ) , an Epstein-Barr virus protein with anti-apoptotic properties directly linked to the tumorigenic activity of EBV 21 ., FFL designs bound to BHRF1 with a dissociation constant ( KD ) of 58\u201360 nM , and after affinity maturation reached a KD of 220\u00b150 pM ., BINDI was designed in the absence of the target and then docked to BHRF1 through the known interaction motif ., A striking observation from the overall approach was that the FFL stage was highly inefficient , generating a large fraction of backbone conformations incompatible with the binding mode of the complex ., To test whether the presence of the target could improve structural and sequence sampling , we leveraged the structural and sequence information available for the BINDI-BHRF1 system and benchmarked FunFolDes for this design problem ., As described by Procko and colleagues , when comparing the topological template provided to FFL and the BINDI crystal structure , the last helix of the bundle ( helix 3 ) was shifted relative to the template ensuring structural compatibility between BINDI and BHRF1 ( Fig 4B ) ., We used this case study to assess the capabilities of FunFolDes to sample closer conformations to those observed in the BINDI-BHRF1 crystal structure ., In addition , we used the saturation mutagenesis data generated for BINDI 21 to evaluate the sequence space sampled by FunFolDes ., A detailed description of this benchmark can be found in the Materials and Methods section ., Briefly , we performed four different FunFolDes simulations: I ) binding target absent ( no_target ) ; II ) binding target present with no conformational freedom ( static ) ; III ) binding target present with side-chain repacking ( pack ) ; IV ) binding target present with side-chain repacking plus minimization and backbone minimization ( packmin ) ., no_target simulations generated a low number of conformations compatible with the target ( <10% of the total generated designs ) ( S2A Fig ) ., Upon global minimization more than 60% ( S2A Fig ) of the decoys were compatible with the binding target , at the cost of considerable structural drifts for both binder ( mean RMSD 3 . 3 \u00c5 ) and target ( mean RMSD 7 . 
7 \u00c5 ) ( Fig 4C ) ., These structural drifts reflect the energy optimization requirements by the relaxation algorithms but are deemed biologically irrelevant due to the profound structural reconfigurations ., In contrast , simulations performed in the presence of the target clearly biased the sampling to more productive conformational spaces ., RMSD drifts upon minimization were less than 1 \u00c5 for both designs and binding target ( Fig 4C ) ., Global structural alignments of the designs fail to emphasize the differences of the helical arrangements ( S2B Fig ) ., Thus , we aligned all the designs on the conserved binding motif ( Fig 4A ) and measured the RMSD over the three helices that compose the fold ., FunFolDes simulations in the presence of the target sampled a mean RMSD of 3 \u00c5 ( lowest \u2248 2 \u00c5 ) compared to the BINDI structure ( Fig 4D ) , with the closest designs at approximately 2 \u00c5 , while the no_target simulation showed a mean RMSD of 4 . 5 \u00c5 ( lowest \u2248 2 . 5 \u00c5 ) ., While we acknowledge that these structural differences are modest , the data suggests that they can be important to sample conformations and sequences competent for binding ., We also analyzed Rosetta energy distributions of designs in the unbound state for the different simulations ., We observed noticeable differences for the designs generated in the absence ( no_target ) and the presence of the binding target , -320 and -280 Rosetta Energy Units ( REUs ) , respectively ( Fig 4E ) ., This difference is significant , particularly for a small protein ( 116 residues ) ., We also observed considerable differences for the binding energies ( \u0394\u0394G ) of the no_target and the bound simulations with mean \u0394\u0394Gs of -10 and -50 REUs , respectively ( Fig 4E ) ., The energy metrics provide interesting insights regarding the design of functional proteins ., Although the sequence and structure optimization for the designs in the absence of the target reached lower energies , these designs are structurally incompatible with the binding target and , even after refinement , their functional potential ( as assessed by the \u0394\u0394G ) is not nearly as favorable as those performed in the presence of the binding target ( Fig 4F ) ., These data suggest that , in many cases , to optimize function it may be necessary to sacrifice the overall computed energy of the protein , a common proxy to the experimental thermodynamic stability of the protein 43 ., The existence of stability-function tradeoffs has been the subject of many experimental studies 44 , 45 , however , it remains a much less explored strategy in computational design , where it may also be necessary to design proteins with lower stability to ensure that the functional requirements can be accommodated ., This observation provides a compelling argument to perform biased simulations in the presence of the binding target , which can be broadly defined as a \u201cfunctional constraint\u201d ., To evaluate sequence sampling quality , we compared the computationally designed sequences to a saturation mutagenesis dataset available for BINDI 21 ., The details of the dataset and scoring scheme can be found in the methods and S2 Fig . 
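As a concrete reading of the scoring scheme summarized next, the following sketch sums per-position mutagenesis scores for a designed sequence; the container layout mut_scores is an assumption for illustration, not the paper's actual data format.

```python
def score_design(design_seq, bindi_seq, mut_scores):
    """Sum the per-position mutagenesis scores of a designed sequence.

    mut_scores[(pos, aa)] is assumed to hold a positive value for mutations
    that improved binding to BHRF1, a negative value for deleterious ones
    and 0 for neutral ones, so the BINDI sequence itself scores 0.
    """
    return sum(mut_scores.get((pos, aa), 0.0)
               for pos, (aa, wt) in enumerate(zip(design_seq, bindi_seq))
               if aa != wt)
```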
Briefly , point mutations beneficial to the binding affinity to BHRF1 have a positive score , deleterious mutations have a negative score , and neutral mutations score 0 ., Such a scoring scheme will yield a score of 0 for the BINDI sequence ., Designs performed in the presence of the binding target obtained higher mean scores than the no_target designs ( Fig 4G ) ., The pack simulation showed the highest distribution mean , with one design scoring better than the BINDI sequence ., In some key positions at the protein-protein interface , the pack designs clearly outperformed those generated by the no_target simulation when quantified by a per-position score ( Fig 4H ) , meaning that amino acids productive for binding interactions were sampled more often ., This benchmark provides an example of the benefits of using a \u201cfunctional constraint\u201d ( the binding target ) to improve the quality of the sequences obtained by computational design ., Overall , the BINDI benchmark provided important insights regarding the best FunFolDes protocol to improve the design of functional proteins ., To further test FunFolDes\u2019s design capabilities , we sought to transplant a contiguous viral epitope that is recognized by a monoclonal antibody with high affinity ( Fig 5A ) ., For this design , we used the RSVF site II epitope ( PDB ID: 3IXT 46 ) as the functional motif ., This epitope adopts a helix-loop-helix conformation recognized by the antibody motavizumab ( mota ) 46 ., In previous work , we designed proteins bearing this epitope , but we started from a structurally similar template , where the RMSD between the epitope and the scaffold segment was approximately 1 \u00c5 over the helical residues ., Here , we sought to challenge FunFolDes by using a distant structural template , where the local RMSD between the epitope and the segment onto which it was transplanted was higher than 2 \u00c5 ., We used MASTER 47 to perform the structural search ( detailed description in Materials and Methods ) and selected as template scaffold the structure of the A6 protein of the Antennal Chemosensory system from the moth Mamestra brassicae ( PDB ID: 1KX8 48 ) ( Fig 5A ) ., The backbone RMSD between the conformation of the epitope and the insertion region in 1kx8 is 2 . 
37 \u00c5 ( Fig 5B ) ., The A6 protein is involved in chemical communication and has been shown to bind to fatty-acid molecules with hydrophobic alkyl chains composed of 12\u201318 carbons ., Two prominent features are noticeable in the structure: two disulfide bonds ( Fig 5A ) and a considerable void volume in the protein core ( S3 Fig ) , thought to be the binding site for fatty acids ., These features emphasize that the initial design template is likely not a very stable protein ., In the design process , we performed two stages of FunFolDes simulations to obtain a proper insertion of the motif in the topology ( Fig 5C ) ., A detailed description of the workflow and metrics used for selection ( S3 Fig ) can be found in the Materials and Methods ., A striking feature of our designs , when compared to the starting template , is that they had a much lower void volume , showing that FunFolDes generated structures and sequences that yielded well-packed structures ( S3 Fig ) ., We started by experimentally testing seven designs ., Those that expressed in bacteria were further characterized using size exclusion chromatography coupled to multi-angle light scattering ( SEC-MALS ) to determine the solution oligomerization state ., To assess their folding and thermal stability ( Tm ) we used Circular Dichroism ( CD ) spectroscopy , and finally , to assess their functional properties , we used surface plasmon resonance ( SPR ) to determine binding dissociation constants ( KDs ) to the mota antibody ., Out of the seven designs , six were purified and characterized further ., The majority of the designs were monomers in solution and showed CD spectra typical of helical proteins ., Regarding thermal stability , we obtained designs that were not very stable and did not unfold cooperatively ( 1kx8_02 ) ; however , we also obtained very stable designs that did not fully unfold under high temperatures ( 1kx8_07 ) ( S4 Fig ) ., The determined binding affinities to mota ranged from 34 to 208 nM , which was an encouraging result ., Nevertheless , compared to the peptide epitope ( KD = 20 nM ) and other designs previously published ( KD = 20 pM ) 4 , there was room for improvement ., Therefore , we generated a second round of designs to attempt to improve stability and binding affinities ., Driven by the observation that the native fold has two disulfide bonds , in the second round we tested eight designed variants with different disulfide bonds and , if necessary , additional mutations to accommodate them ., The disulfide-bonded positions were selected according to the spatial orientation of residues in the designed models , with most of the disulfide bonds being placed at locations distal from the epitope ( >20 \u00c5 ) ., All eight designs were soluble after purification and two were monomeric: 1kx8_d2 and 1kx8_3_d1 , showing CD spectra typical of helical proteins ( Fig 5D ) with melting temperatures ( Tms ) of 43 and 48\u00b0C ( Fig 5E ) , respectively ., Remarkably , 1kx8_d2 showed a KD of 1 . 
14 nM ( Fig 5F ) , an improvement of approximately 30-fold compared to the best variants of the first round ., 1kx8_d2 binds to mota with approximately 20-fold higher affinity than the peptide-epitope ( KD \u2248 20 nM ) , and 50-fold lower compared to previously designed scaffolds ( KD = 20 pM ) 4 ., This difference in binding is likely reflective of how challenging it can be to accomplish the repurposing of protein structures with distant structural similarity ., Post-design analyses were performed to compare the sequence and structure of the best design model with the initial template ., The global RMSD between the two structures is 2 . 25 \u00c5 ., Much of the structural variability arises from the inserted motif , while the surrounding segments adopt a configuration similar to the original template scaffold ., The sequence identity of 1kx8_d2 as compared to the native protein is approximately 13% ., The sequence conservation per-position ( Fig 5G ) was evaluated through the BLOSUM62 matrix , where positive scores are attributed to the original residue or favorable substitutions and negative if unfavorable ., Overall , 38 . 5% of the residues in 1kx8_d2 scored positively , and 61 . 4% of the residues had a score equal or lower than 0 ., This is particularly interesting , from the perspective that several residues , unfavorable according to BLOSUM62 , yielded well folded and functional proteins ., To further substantiate our experimental results , we performed structure prediction simulations of the designed sequences , where we observed that 1kx8_d2 presents a higher folding propensity than the WT protein ( S5A Fig ) ., To evaluate if the predicted models presented the correct epitope conformation , we performed docking simulations and observed that they obtained lower binding energies than the native peptide-antibody complex , within similar RMSD fluctuations ( S5A Fig ) ., The successful design of this protein is a relevant demonstration of the broad usability of FunFolDes and the overall strategy of designing functional proteins by coupled folding and design to incorporate functional motifs in unrelated protein scaffolds ., Advances in computational design methodologies have achieved remarkable results in the design of de novo protein sequences and structures 7 , 8 , 11 ., However , the majority of the designed proteins are \u201cfunctionless\u201d and were designed to test the performance of computational algorithms for structural accuracy ., Here , we sought to use one of the hallmark proteins from de novo design efforts\u2013TOP7 13 ( Fig 6A ) \u2013and functionalize it using FunFolDes ., The functional site selected to insert into TOP7 was a different viral epitope from RSVF , site IV , which is recognized by the 101F antibody 49 ., When bound to the 101F antibody , site IV adopts a \u03b2-strand-like conformation ( Fig 6B ) , which in terms of secondary structure content is compatible with one of the edge strands of the TOP7 topology ( Fig 6C ) ., Despite the secondary structure similarity , the RMSD of the site IV backbone in comparison with TOP7 is 2 . 
1 \u00c5 over 7 residues , and the antibody orientation in this particular alignment reveals steric clashes with TOP7 ., Therefore , this design challenge is yet another prototypical application for FunFolDes , and we followed two distinct design routes: I ) a conservative approach where we fixed the amino-acid identities of roughly half of the core of TOP7 and allowed mutations mostly on the contacting shell of the e","headings":"Introduction, Results, Discussion, Materials and methods, Availability","abstract":"The robust computational design of functional proteins has the potential to deeply impact translational research and broaden our understanding of the determinants of protein function and stability ., The low success rates of computational design protocols and the extensive in vitro optimization often required , highlight the challenge of designing proteins that perform essential biochemical functions , such as binding or catalysis ., One of the most simplistic approaches for the design of function is to adopt functional motifs in naturally occurring proteins and transplant them to computationally designed proteins ., The structural complexity of the functional motif largely determines how readily one can find host protein structures that are \u201cdesignable\u201d , meaning that are likely to present the functional motif in the desired conformation ., One promising route to enhance the \u201cdesignability\u201d of protein structures is to allow backbone flexibility ., Here , we present a computational approach that couples conformational folding with sequence design to embed functional motifs into heterologous proteins\u2014Rosetta Functional Folding and Design ( FunFolDes ) ., We performed extensive computational benchmarks , where we observed that the enforcement of functional requirements resulted in designs distant from the global energetic minimum of the protein ., An observation consistent with several experimental studies that have revealed function-stability tradeoffs ., To test the design capabilities of FunFolDes we transplanted two viral epitopes into distant structural templates including one de novo \u201cfunctionless\u201d fold , which represent two typical challenges where the designability problem arises ., The designed proteins were experimentally characterized showing high binding affinities to monoclonal antibodies , making them valuable candidates for vaccine design endeavors ., Overall , we present an accessible strategy to repurpose old protein folds for new functions ., This may lead to important improvements on the computational design of proteins , with structurally complex functional sites , that can perform elaborate biochemical functions related to binding and catalysis .","summary":"The ability to use computational tools to manipulate the structure and function of proteins has the potential to impact many facets of fundamental and translational science ., Due to our limited understanding of the principles that govern protein function and structure , the computational design of functional proteins remains challenging ., We developed a computational protocol ( Rosetta FunFolDes ) to facilitate the insertion of functional motifs into heterologous proteins ., We performed extensive in silico benchmarks , and found that when the design of function is required the global energy minima may not be the optimal solution , in line with previously reported experimental studies ., Further , we used FunFolDes to design two novel functional proteins , displaying two 
viral epitopes that can be of interest for vaccine development ., The designed proteins were experimentally characterized , showing that functionalization was successfully achieved ., These results highlight the capability of FunFolDes to address common challenges in the design of functional proteins , in particular the reduced structural compatibility between functional sites and host scaffolds , effectively enabling the repurposing of old protein folds for new functions ., Overall , FunFolDes provides new means to accomplish the challenging task of functionalizing computationally designed proteins .","keywords":"crystal structure, markov models, engineering and technology, synthetic biology, condensed matter physics, synthetic bioengineering, mathematics, protein structure, sequence motif analysis, macromolecular design, crystallography, research and analysis methods, bioengineering, sequence analysis, solid state physics, bioinformatics, proteins, hidden markov models, biological databases, molecular biology, probability theory, physics, protein structure comparison, biochemistry, biochemical simulations, sequence databases, database and informatics methods, biology and life sciences, physical sciences, computational biology, macromolecular engineering, macromolecular structure analysis","toc":null}
{"Unnamed: 0":1078,"id":"journal.pcbi.1005893","year":2017,"title":"Non-linear auto-regressive models for cross-frequency coupling in neural time series","sections":"We note y the signal containing the high-frequency activity , and x the signal with the slow-frequency oscillations , also called the exogenous driver ., When a signal x results from a band-pass filtering step , we note the central frequency of the filter fx and the bandwidth \u0394fx ., The value of the signal x at time t is denoted x ( t ) ., To estimate PAC , the typical pipeline reported in the literature consists of four main processing steps ., The Modulation Index ( MI ) described in the pioneering work of 6 is the mean over time of the composite signal z = ay exp ( i\u03d5x ) ., The stronger the coupling between \u03d5x and ay , the more the MI deviates from zero ., This index has been further improved by Ozkurt et al . with a simple normalization 26 ., Another approach 23 , 44 has been to partition [ 0 , 2\u03c0 ) into smaller intervals , to get the time points t when \u03d5x ( t ) is within each interval , and to compute the mean of ay ( t ) on these time points ., PAC was then quantified by looking at how much the distribution of ay differs from uniformity with respect to \u03d5x ., For instance , a simple height ratio 44 , or a Kullback-Leibler divergence as proposed by Tort et al . 23 , can be computed between the estimated distribution and the uniform distribution ., Alternatively , it was proposed in 11 to use a direct correlation between x and ay ., As this method yielded artificially weaker coupling values when the maximum of the amplitude ay was not exactly on the peaks or troughs of x , it was later extended to generalized linear models ( GLM ) using both cos ( \u03d5x ) and sin ( \u03d5x ) by Penny et al . 22 ., Other approaches employed a measure of coherence 45 or the phase-locking value 46 ., These last three approaches offer metrics that are independent of the phase at which the maximum amplitude occurs ., The methods of Tort et al . 23 , Ozkurt et al . 26 , and Penny et al . 22 will be considered for comparison in our experiments .
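For illustration, a minimal Python implementation of this family of modulation indices could look as follows, assuming both inputs have already been band-pass filtered; the normalization follows the simple scheme attributed above to Ozkurt et al., and the usage at the end is purely synthetic.

```python
import numpy as np
from scipy.signal import hilbert

def modulation_index(x_slow, y_fast):
    """Canolty-style MI, |mean of ay * exp(i * phix)|, with a simple
    normalization by the root-mean-square of the amplitude envelope."""
    phi_x = np.angle(hilbert(x_slow))   # phase of the slow oscillation
    a_y = np.abs(hilbert(y_fast))       # amplitude envelope of the fast one
    z = a_y * np.exp(1j * phi_x)
    return np.abs(z.mean()) / np.sqrt(np.mean(a_y ** 2))

# Toy coupled signal: the fast amplitude follows the slow oscillation.
t = np.arange(0, 10.0, 1e-3)
slow = np.cos(2 * np.pi * 3 * t)
fast = (1.0 + 0.7 * slow) * np.cos(2 * np.pi * 50 * t)
print(modulation_index(slow, fast))
```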
As one can see , there is a long list of methods to quantify CFC in neural time series ., Yet , these approaches suffer from a number of limitations that can significantly affect the outcomes and interpretations of neuroscientific findings ., For example , in typical PAC analysis , a systematic bias arises when one constructs the so-called comodulogram ., A comodulogram is obtained by evaluating the chosen metric over a grid of frequencies fx and fy ., This bias emerges from the choice of the bandpass filter , which involves the critical choice of the bandwidth \u0394fy ., It has been reported several times that , to observe any amplitude modulation , the bandwidth of the fast oscillation \u0394fy has to be at least twice as high as the frequency of the slow oscillation fx: \u0394fy > 2fx 27 , 47 ., As a comodulogram uses different values of fy , many studies have used a variable bandwidth , obtained by taking a fixed number of cycles in the filters ., The bandwidth is thus proportional to the center frequency: \u0394fy \u221d fy ., This choice leads to a systematic bias , as it hides any possible coupling below the diagonal fy = 2fx\/\u03b1 , where \u03b1 = \u0394fy\/fy is the proportionality factor ., Other studies have used a constant bandwidth \u0394fy; yet this also biases the results towards low driver frequencies fx , considering that it hides any coupling with fx > \u0394fy\/2 ., A proper way to build a comodulogram would be to take a variable bandwidth \u0394fy \u221d fx , with \u0394fy > 2fx , as sketched below ., However , this is not common practice , as it is computationally very demanding: it requires bandpass filtering y again for each value of fx .
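The sketch below makes that bandwidth rule explicit: for each driver frequency fx, a bandwidth proportional to fx is chosen (the factor 2.5 is an assumption, any value above 2 would satisfy the constraint), which also shows why y must be re-filtered for every fx.

```python
import numpy as np

fx_grid = np.linspace(1.0, 10.0, 10)      # slow (driver) frequencies, in Hz
fy_grid = np.linspace(20.0, 150.0, 27)    # fast frequencies, in Hz

bands = {}
for fx in fx_grid:
    delta_fy = 2.5 * fx                   # variable bandwidth, > 2 * fx
    bands[fx] = [(fy - delta_fy / 2, fy + delta_fy / 2) for fy in fy_grid]
    # y would then be band-pass filtered once per (fx, fy) pair, which is
    # exactly the computational cost the text warns about.
```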
Another common issue arises with the use of the Hilbert transform to estimate the amplitude and the phase of real-valued signals ., Such estimations rely on the hypothesis that the signals x and y are narrow-band , i . e . almost sinusoidal ., However , numerous studies have used this technique on very wide-band signals such as the entire gamma band ( 80-150 Hz ) 6 ( see other examples in 28 ) ., The narrow-band assumption is debatable for high-frequency activity and , consequently , using the Hilbert transform may yield non-meaningful amplitude estimations , and potentially poor estimations of PAC 27 , 28 ., Note also that , in this context , wavelet-based filtering is equivalent to the Hilbert transform 48 , 49 , and therefore does not provide a more valid alternative ., Besides these issues of filtering and inappropriate use of the Hilbert transform , Hyafil 29 also warned that certain choices of bandwidth \u0394fy might mistake phase-frequency coupling for PAC , or create spurious amplitude-amplitude coupling; see also the more recent work in 24 for discussion and more practical recommendations for PAC analysis ., Here we advocate that the DAR models detailed in the next sections address a number of the limitations just mentioned ., They use neither a bandpass filter nor a Hilbert transform on the high frequencies y ., They introduce a measure of goodness of fit , through the use of a probabilistic signal model whose quality can be assessed by evaluating the likelihood of the data under the model ., In practice , the likelihood quantifies how much variance of the signal can be explained by the model , and is similar to the R2 coefficient in generalized linear models ( GLM ) ., To the best of our knowledge , the only related model-based approach to measure PAC used GLM 22 ., With GLM , however , the modeling is done independently on each signal yf , which is the band-pass filtered version of y around frequency f ., For each of these frequencies f , a different model is fitted ., By doing so , a GLM approach cannot model the wide-band signal y , as it is limited to multiple estimations performed frequency bin by frequency bin ., This largely limits the use of the likelihood to compare models or parameters ., On the contrary , we propose to model y globally , without filtering it into different frequency bands ., To conclude this section , and to position this work in the broader context of modeling approaches for neuroscience data , we would like to stress that our proposed method can be considered as an encoding model for CFC , as opposed to a decoding model 50\u201352 ., Indeed , our model reports how much empirical data can be explained and , by doing so , enables us to test neuroscience hypotheses in a principled manner 51 ., The literature on the use of non-linear auto-regressive ( AR ) models is quite large and covers fields such as audio signal processing and econometrics ., For instance , AR models with conditional heteroskedasticity ( ARCH 53 , GARCH 54 ) are extremely popular in econometrics , where they are used to model signals whose overall amplitude varies as a function of time ., Here , however , in the context of CFC and PAC , one would like to model variations in the spectrum itself , such as shifts in peak frequencies ( a . k . a . frequency modulations ) or changes in amplitude only within certain frequency bands ( a . k . a . amplitude modulations ) ., To achieve this , one idea is to define a linear AR model whose coefficients are a function of time and change slowly , depending on a non-linear function of the signal , as illustrated by the toy sketch below .
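A toy caricature of this idea follows: the coefficients of a stable AR(2) process are modulated by a slow driver, so the instantaneous spectrum of y follows x. This only illustrates the principle and is not the paper's exact DAR parametrization, which expresses each coefficient as a polynomial in the driver.

```python
import numpy as np

def simulate_dar(x_driver, base_coefs, gain, noise_std=0.05, seed=0):
    """Toy driven AR process: the AR coefficients drift with the slow
    driver x(t), producing slow frequency/amplitude modulations in y."""
    rng = np.random.default_rng(seed)
    p = len(base_coefs)
    y = np.zeros(len(x_driver))
    for t in range(p, len(x_driver)):
        coefs = base_coefs * (1.0 + gain * x_driver[t])   # driver-dependent
        y[t] = coefs @ y[t - p:t][::-1] + rng.normal(scale=noise_std)
    return y

t = np.arange(0, 5.0, 1e-3)
x = np.cos(2 * np.pi * 2 * t)                             # 2 Hz driver
y = simulate_dar(x, base_coefs=np.array([1.6, -0.9]), gain=0.05)
```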
The first models based on this idea are SETAR models 41 , which switch between several AR models depending on the amplitude of the signal with respect to some thresholds ., To get a smoother transition between regimes , SETAR models have inspired other models like EXPAR 55 or STAR 42 , in which the AR coefficients change continuously depending on a non-linear function of the past of the signal ., These models share the same underlying motivation as the DAR models described below but , crucially , DAR models can be designed and parametrized to capture PAC phenomena independently of the phase of the driving signal at which the high frequency content is the strongest ., In other words , DAR models work equally well whether the high frequency peaks occur in the troughs , the rising phase , the decreasing phase or the peaks of the low frequency driving signal ., Moreover , as DAR models do not require inferring the driving behavior from the signal itself and instead rely on prior knowledge of the slow oscillation , the inference is significantly faster and more robust ., In this section , we present the outcome of using the model selection procedure to estimate the best filtering frequency fx and bandwidth \u0394fx to extract the driver x ., We first describe the outcome on simulated signals ( ground truth ) and then on empirical datasets ., Given that DAR models are parametric with a limited number of parameters to estimate , fewer time samples may be needed to estimate PAC as compared to non-parametric methods ., We tested this assumption using simulated signals of varying duration ., We computed their comodulograms ( as in Fig 7 ) and selected the frequencies of maximum coupling ., For each duration , we simulated 200 signals , and plotted the 2D histogram showing the fraction of times each frequency pair corresponded to a maximum ., We then compared the same four methods: DAR models with ( p , m ) = ( 10 , 1 ) , the GLM-based model 22 , and two non-parametric methods 23 , 26 ., The results shown in Fig 9 indicate that parametric approaches provided a more robust estimation of PAC frequencies with short signals ( T = 2 sec ) than non-parametric methods ., The robustness to small sample size is a key feature of parametric models , as it significantly improves PAC analysis during shorter experiments ., When undertaking a PAC analysis across time using a sliding time window , parametric models should therefore provide more robust PAC estimates ., Note that the specific time values in these simulations should not be taken as general guidelines , as they depend on simulation parameters such as the signal-to-noise ratio ., However , across all tests , parametric methods consistently provided more accurate results than non-parametric ones ., One can note that in DAR models , the driver contains not only the phase of the slow oscillation , but also its amplitude ., As the driver is not a perfect sinusoid , its amplitude fluctuates with time ., On the contrary , most PAC metrics discard the amplitude fluctuations of the slow oscillation and only consider its phase ., To evaluate these two options , we compared two drivers in DAR models: the original ( complex ) driver x ( t ) , and the normalized driver x\u02dc ( t ) = x ( t ) \/ | x ( t ) | ., This normalized driver only contains the phase information , as in most traditional PAC metrics .
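The two driver variants can be obtained in a couple of lines; in the sketch below the slow signal is synthetic and stands in for the band-passed driver, and both variants would then be fitted with the model and compared through the cross-validated likelihood described next.

```python
import numpy as np
from scipy.signal import hilbert

# Toy slow oscillation with a fluctuating amplitude (already narrow-band,
# standing in for the band-passed driver).
t = np.arange(0, 2.0, 1e-3)
x = (1.0 + 0.3 * np.cos(2 * np.pi * 0.5 * t)) * np.cos(2 * np.pi * 4 * t)

x_complex = hilbert(x)                        # complex driver: phase and amplitude
x_normalized = x_complex / np.abs(x_complex)  # normalized driver: phase only
```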
Using cross-validation , we compared the log-likelihood of four fitted models , and found a difference always in favor of the non-normalized driver x ( t ) , as can be seen in S3 Fig . This result shows that the coupling phenomenon is associated with amplitude fluctuations , a kind of phase\/amplitude-amplitude coupling , as was previously observed in 25 ., Indeed , the GLM parametric method 22 was improved when taking into account the amplitude of the slow oscillation ., Here , we use our generative model framework to provide an easy comparison tool through the likelihood , to validate this neuroscientific insight from the signals ., In this section , we report the results of the directionality estimates using both simulations and neurophysiological signals ., It is noteworthy that in DAR models , we arbitrarily call the slow oscillation the driver , although the model makes no assumption on the directionality of the coupling ., Cross-frequency coupling ( CFC ) , and phase-amplitude coupling ( PAC ) more specifically , have been proposed to play a fundamental role in neural processes ranging from the encoding , maintenance and retrieval of information 3\u20135 , 8 , 17 , 81 , 82 , to large-scale communication across neural ensembles 7 , 19 , 83 , 84 ., While a steady increase in observations of PAC in neural data has been seen , how to best detect and quantify such phenomena remains difficult to settle ., We argue that a method using DAR models , as described here , is rich enough to capture the time-varying statistics of brain signals , in addition to providing efficient inference algorithms ., These non-linear statistical models are probabilistic , allowing the estimation of their goodness of fit to the data , and allowing for an easy and fully controlled comparison across models and parameters ., In other words , they offer a unique principled data-driven model selection approach , an estimation strategy of phase\/amplitude-amplitude coupling based on the approximation of the actual signals , a better temporal resolution of dynamic PAC and the estimation of coupling directionality ., One of the main features of PAC estimation through our method is the ability to compare models or parameters on non-synthetic data ., By contrast , traditional PAC metrics cannot be compared on non-synthetic data , and two different choices of parameters can lead to different interpretations ., There is no legitimate way to decide which parameters should be used with empirical data using traditional metrics ., The likelihood of the DAR model , which can be estimated on left-out data , offers a rigorous solution to this problem ., We presented results here on both simulated signals and empirical neurophysiological signals ., The simulations gave us an illustration of the phenomenon we want to model , and helped us understand how to visualize a fitted DAR model ., They also served a validation purpose for the bandwidth selection approach that we performed on real data ., Using the data-driven parameter selection on non-synthetic signals , we showed how to choose sensible parameters for the filtering of the slow oscillation ., All empirical signals are different , and it was for example reported in the neuroscience literature that peak frequencies vary between individuals 85 and that this should not be overlooked in the analysis of the data ., The parameter selection based on fitted DAR models makes it possible to fit parameters on individual datasets ., Our results also shed light on the asymmetrical and wide-band properties of the slow oscillation , which could denote crucial features involved in cognition 32 .
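The two driver choices compared above can be sketched as follows; this is a schematic illustration only (function and parameter names are hypothetical, and the paper's actual pipeline selects fx and \u0394fx by cross-validated likelihood rather than fixing them):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def extract_drivers(sig, fs, fx=3.0, dfx=2.0):
    """Band-pass the raw signal around fx (Hz) with bandwidth dfx, then
    build the analytic (complex-valued) driver via the Hilbert transform."""
    nyq = fs / 2.0
    b, a = butter(2, [(fx - dfx / 2) / nyq, (fx + dfx / 2) / nyq],
                  btype='band')
    x = hilbert(filtfilt(b, a, sig))  # complex driver x(t): phase + amplitude
    x_norm = x / np.abs(x)            # normalized driver x(t)/|x(t)|: phase only
    return x, x_norm
```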
The second novelty of our method lies in considering the amplitude fluctuations of the slow oscillation in the PAC measure and not only its phase ., Using the rodent and human data , we showed that the instantaneous amplitude of the slow oscillation influences the coupling in PAC , as was previously suggested in 25 ., The amplitude information should therefore not be discarded as is done by existing PAC metrics ., For instance , the measure of alpha\/gamma coupling reported during rest 86 , 87 should incorporate alpha fluctuations when studied in the context of visual tasks 88 , as an increase of alpha power is often concomitant with a decrease of gamma power 89 ., The comparison between DAR models considering or not these low-frequency power fluctuations would reveal the nature of the coupling: purely phase-amplitude , or rather phase\/amplitude-amplitude ., In Tort et al . 14 , both theta power changes and modulation of theta\/gamma PAC were reported in rats having to make a left or right decision to find a reward in a maze ., The use of our method could decipher whether the changes in coupling were related to the changes in power , informing on the underlying mechanisms of decision-making ., Moreover , as our method models the entire spectrum simultaneously , a phase-frequency coupling could potentially be captured in our models ., Therefore , our method is not limited to purely phase-amplitude coupling , and extends the traditional CFC analysis ., Furthermore , in those types of experiments , changes in PAC can be very fast depending on the cognitive state of the subject ., Therefore , the need for dynamic PAC estimates is growing 14 ., We showed with simulations that DAR models are more robust than non-parametric methods when estimating PAC on short time samples ., This robustness is critical for time-limited experiments and also when analyzing PAC across time in a fine manner , typically when dynamic processes are at play ., Last but not least , likelihood comparison can also be used to estimate the delay between the coupled components , which would give new insights into highly debated questions on the role of oscillations in neuronal communication 90 , 91 ., For example , a delay close to zero could suggest that the low and high frequency components of the coupling might be generated in the same area , whereas a large delay would suggest they might come from different areas ., As an alternative interpretation , the two components may come from the same area , but the coupling mechanism itself might be lagged ., In this case , a negative delay would suggest that the low frequency oscillation is driven by the high frequency oscillations , whereas a positive delay would suggest that the low frequency oscillation drives the high frequency amplitude modulation ., In any case , this type of analysis will provide valuable information to guide further experimental questions ., A recent concern in PAC analysis is that all PAC metrics may detect a coupling even though the signal is not composed of two cross-frequency coupled oscillators 30 , 92\u201395 ., It may happen for instance with sharp slow oscillations , described in human intracranial recordings 68 ., Sharp edges are known not to be well described by a Fourier analysis , which decomposes the signal into a linear combination of sinusoids ., Indeed , such sharp slow oscillations create artificial high frequency activity at each sharp edge , and these high frequencies are thus artificially coupled with the slow oscillations ., This false positive detection is commonly referred to as \u201cspurious\u201d coupling 96 .
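A spurious-coupling dataset of this kind is easy to reproduce; the sketch below loosely follows the spike-train construction used in the simulation described next (the exact stimulus from 94 may differ) and reuses the assumed pactools Comodulogram API from the earlier example:

```python
import numpy as np
from pactools import Comodulogram

fs = 500.
t = np.arange(int(20 * fs)) / fs
# A 10 Hz train of brief, sharp pulses: no nested oscillations are present,
# but the sharp edges spread energy into high frequencies at a fixed phase.
spikes = (np.mod(t, 0.1) < 0.004).astype(float)
signal = spikes + 0.05 * np.random.RandomState(0).randn(t.size)

est = Comodulogram(fs=fs, low_fq_range=np.linspace(5., 15., 30),
                   low_fq_width=2., method='duprelatour', progress_bar=False)
est.fit(signal)
est.plot()  # apparent PAC around 10 Hz, i.e. "spurious" coupling
```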
Fig 12 shows a comodulogram computed on a simulated spurious PAC dataset , using a spike train at 10 Hz , as described in 94 ., The figure shows that all four methods , including the proposed one , detect some significant PAC , even though there are no nested oscillations in the signal ., Even though our method does not use filtering in the high frequencies , it does not solve this issue and is affected in the same way as other traditional PAC metrics ., Indeed , our work sheds light on the wide-band property of the slow oscillations , but DAR models cannot cope with full-band slow oscillations , which contain strong harmonic components in the high frequencies ., However , we consider that such \u201cspurious\u201d PAC can also be a relevant feature of a signal , as stated in 68 ., In that study , the authors show that abnormal beta oscillations ( 13-30 Hz ) in the basal ganglia and motor cortex underlie some \u201cspurious\u201d PAC , but are actually a strong feature associated with Parkinson\u2019s disease ., A robust way to disentangle the different mechanisms that lead to similar PAC results remains to be developed ., The method we presented in this paper uses univariate signals obtained invasively in rodents or humans ., As a lot of neurophysiological research uses non-invasive MEG or EEG recordings containing multiple channels , a multivariate analysis could be of high interest ., One way to use data from multiple channels is to estimate a single signal using a spatial filter such as in 97 ., Such a method is therefore complementary to univariate PAC metrics like ours , which can be applied to the output of the spatial filter ., The method from 97 builds spatial filters that maximize the difference between , say , high-frequency activity that appears during peaks of a low-frequency oscillation versus high-frequency activity that is unrelated to the low-frequency oscillation ., Again , from the signal obtained with the spatial filter , it is straightforward to apply most PAC metrics , including our method ., Neurophysiological signals have statistical properties that make them a challenge from a signal processing perspective ., They contain non-linearities and non-stationarities , they are noisy , and they can be long , hence posing important computational challenges ., Our method based on DAR models offers novel and more robust possibilities to analyse neurophysiological signals , paving the way for new insights on how our brain functions via spectral interactions using local or distant coupling mechanisms ., In line with the open science philosophy of this journal , our method is fully available as an open source package that comes with documentation , tests , and examples: https:\/\/pactools . github . io .","headings":"Introduction, Results, Discussion","abstract":"We address the issue of reliably detecting and quantifying cross-frequency coupling ( CFC ) in neural time series ., Based on non-linear auto-regressive models , the proposed method provides a generative and parametric model of the time-varying spectral content of the signals ., As this method models the entire spectrum simultaneously , it avoids the pitfalls related to incorrect filtering or the use of the Hilbert transform on wide-band signals ., As the model is probabilistic , it also provides a score of the model \u201cgoodness of fit\u201d via the likelihood , enabling easy and legitimate model selection and parameter comparison; this data-driven feature is unique to our model-based approach ., Using three datasets obtained with invasive neurophysiological recordings in humans and rodents , we demonstrate that these models are able to replicate previous results obtained with other metrics , but also reveal new insights such as the influence of the amplitude of the slow oscillation ., Using simulations , we demonstrate that our parametric method can reveal neural couplings with shorter signals than non-parametric methods ., We also show how the likelihood can be used to find optimal filtering parameters , suggesting new properties of the spectrum of the driving signal , but also to estimate the optimal delay between the coupled signals , enabling a directionality estimation in the coupling .","summary":"Neural oscillations synchronize information across brain areas at various anatomical and temporal scales ., Of particular relevance , slow fluctuations of brain activity have been shown to affect high frequency neural activity , by regulating the excitability level of neural populations ., Such cross-frequency-coupling can take several forms ., In the most frequently observed type , the power of high frequency activity is time-locked to a specific phase of slow frequency oscillations , yielding phase-amplitude-coupling ( PAC ) ., Even when readily observed in neural recordings , such non-linear coupling is particularly challenging to formally characterize ., Typically , neuroscientists use band-pass filtering and Hilbert transforms with ad-hoc correlations ., Here , we explicitly address current limitations and propose an alternative probabilistic signal modeling approach , for which statistical inference is fast and well-posed ., To statistically model PAC , we propose to use non-linear auto-regressive models which estimate the spectral modulation of a signal conditionally to a driving signal ., This conditional spectral analysis enables easy model selection and clear hypothesis-testing by using the likelihood of a given model ., We demonstrate the advantage of the model-based approach on three datasets acquired in rats and in humans ., We further provide novel neuroscientific insights on previously reported PAC phenomena , capturing two mechanisms in PAC: influence of amplitude and directionality estimation .","keywords":"acoustics, medicine and health sciences, sine waves, engineering and technology, signal processing, vertebrates, electrophysiology, neuroscience, animals, mammals, signal filtering, bandwidth (signal processing), research and analysis methods, mathematical functions, mathematical and statistical techniques, physics, rodents, bandpass filters, eukaryota, speech signal processing, physiology, biology and life sciences, physical sciences, amniotes, neurophysiology, organisms, acoustic signals","toc":null}
+{"Unnamed: 0":334,"id":"journal.pcbi.1006711","year":2019,"title":"A Gestalt inference model for auditory scene segregation","sections":"We live in busy environments , and our surrounds continuously flood our sensory system with complex information that needs to be analyzed in order to make sense of the world around us ., This process , labeled scene analysis , is common across all sensory modalities including vision , audition and olfaction 1 ., It refers to the ability of humans , animals and machines alike to parse the mixture of cues impinging on our senses , organize them into meaningful groups and map them onto relevant foreground and background objects ., Our brain relies on innate dispositions that aid this process and help guide the organization of patterns into perceived objects 2 ., These dispositions , referred to as Gestalt principles , inform our current understanding of the perceptual organization of scenes 3 , 4 ., In most theoretical accounts , the role of Gestalt principles in parsing a scene can be conceptualized in two stages: segregation ( or analysis ) and grouping ( or fusion ) 5 ., In the first stage , the sensory mixture is decomposed into feature elements , believed to be the building blocks of the scene ., These features reflect the physical nature of sources in the scene , the state and structure of the environment itself , as well as perceptual mappings of these attributes as viewed by the sensory system ., These features vary in complexity along a continuum from basic attributes ( e . g . edges or frequency components ) to more complex characteristics of the scene ( e . g . shapes or timbral profiles ) ., The ubiquitous nature of these profiles often conceals the multiplexed structures that underlie this analysis of scene features in the brain ., In most computational accounts , this segregation stage is modeled using feature analyses which map the sensory signal into its building blocks ranging from simple components ( e . g . frequency channels ) to dimensionally-complex kernels 6 , 7 ., Processing the distinctive features of a scene is generally followed by a fusion stage which integrates the state and behavior of the scene\u2019s building blocks using grouping mechanisms that reflect the local and global distribution and dynamics of the features ., This stage employs \u2018rules\u2019 that guide how grouped elements give rise to perceptually coherent structures forming objects or streams 2 , 8 , 9 ., In many mathematical models , these grouping cues are often leveraged in back-end classifiers that are tuned to capture patterns and relationships within specific object classes ( e . g . 
speech , music , faces , etc ) 10\u201313 ., In doing so , these models effectively capture the inter-dependencies between object attributes and learn their mapping onto an integrated representational space 14\u201316 ., Ultimately , success in tackling scene analysis depends on two key components 17:, ( i ) obtaining a rich and robust feature representation that can capture object specific details present in the scene;, ( ii ) grouping the feature elements such that their spatial and temporal associations match the dynamics of objects within the scene ., Vision models have been very successful in mining these two aspects of scene analysis ., Intricate hierarchical systems have leveraged inherent structure in static and dynamic images to extract increasingly elaborate features from a scene that are then used to segment it , interpret its objects or track them over time 18\u201320 ., Data-driven approaches have shown that high dimensional feature spaces are very effective in extracting meaningful semantics from arbitrary natural images 20\u201322; while hand-engineered features like scale-invariant feature transform ( SIFT ) 23 , histogram of oriented gradients ( HOG ) 24 , and Bag-of-visual-words descriptors 25 among others have also enjoyed a great deal of success in tackling computer vision problems like image classification and object detection ., Recent advances in deep layered architectures have resulted in a flurry of rich representational spaces showing selectivity to contours , corners , angles and surface boundaries in images 26\u201329 ., The deep nature of these architectures has also led to a natural evolution from low-level features to more complex , higher-level embeddings that capture scene semantics or syntax 30 , 31 ., In audition , computational approaches to tackle auditory scene organization have mostly taken advantage of physiological and perceptual underpinnings of sound processing 17 ., A large body of work has built on knowledge of the auditory pathway , particularly the peripheral system , to build sophisticated analysis models of auditory scenes ., These systems extract relevant cues from a scene , such as its spectral content , spatial structure as well as temporal dynamics; hence allowing sound events with uncorrelated acoustic behavior to occupy different subspaces in the analysis stage ., These models are quite effective in replicating perceptual results of stream segregation , especially using simple tone and noise stimuli 32\u201337 ., Some models also extend beyond early acoustic features to examine feature binding mechanisms that can be used as an effective strategy in segregating a wide range of stimuli , from simple tone sequences to spectro-temporally complex sounds like speech and music 38\u201340 ., In most approaches , however , the models are built around hand-crafted feature representations , hence limiting their scope to specific mappings of the acoustic space ., With the emergence of deep belief architectures , recent efforts have started learning rich feature spaces from natural soundscapes in a data driven fashion , and subsequently using these spaces in domains like music genre classification , phoneme classification and speaker identification 41\u201344 ., Applications of deep learning have also successfully tackled the problem of speech separation even with monaural inputs by learning embeddings of a speaker\u2019s time-frequency dynamics against other speakers 45 , 46 ., The current study also leverages neural network theory to \u2018learn\u2019 Gestalt principles directly from sound ., The work examines what kinds of cues one can infer from natural sounds; how well these learned cues reflect the known Gestalt components of auditory streams; and how effective these cues are in explaining the perceptual organization of auditory scenes with varying degrees of complexity ., The model is devised as a hierarchical structure that generally follows the two-stage pipeline of analysis then fusion , in line with prototypical scene analysis theories 5 ., This system analyzes the incoming acoustic signal with a multitude of granularities , hence allowing both local and global acoustic attributes to emerge ., The short-term analysis performs a local tiling of the spectro-temporal space; hence inferring simultaneous grouping cues 47\u201349 ., A longer-range analysis extends the segregation stage to examine temporal dependencies across acoustic attributes over different time scales; hence exploring emergence of sequential grouping cues 50\u201354 ., Finally , a fusion stage binds the cues together based on how strongly they correlate with each other across multiple time scales ., This integration is achieved using Hebbian learning , which reinforces activity across coherent channels and suppresses activity across incoherent ones 55\u201357 ., Apart from the basic layout and choice of analysis window sizes , the network is trained in an unsupervised fashion on a rich sound dataset including speech and nature sounds , hence offering a general inference architecture of auditory Gestalt cues that are common across many sound environments ., The overall system is tested with a wide range of stimuli where we can quantify the role of each and every component of the network in driving stream segregation processes ., We also contrast the system performance with a set of control experiments where different components of the model are deliberately switched on\/off in order to examine their impact on the organization of different acoustic scenes ., These control experiments aim not only to dissect the role of various system components , but also to shed light on how necessary and\/or sufficient different grouping cues are to anchor the analysis of different stimuli structures and sound types ., The paper first presents an in-depth description of the proposed architecture , followed by an analysis of the emergent properties of the trained network and their potential neural correlates in the auditory pathway ., The experimental results outline how the network replicates human psychoacoustic behavior in stream segregation and speech intelligibility paradigms ., Finally , we present control experiments that dissect the network architecture and examine the contribution of its components ., We discuss the implications of this network in shedding light on ties between observed perceptual performance in various complex auditory scenes and the neural underpinnings of this behavior as implemented in networks of neurons along the auditory pathway ., A number of Gestalt principles have been posited as indispensable anchors used by the brain to guide the segregation of auditory scenes into perceptually meaningful objects 8 , 47 , 58 ., These comprise a wide variety of cues; for instance harmonicity , which couples harmonically-related frequency channels together , common fate , which favors sound elements that co-vary in amplitude , and common onsets , which groups components that share a similar starting time and , to a lesser degree , a common ending time ., Most of these cues are thought to be innate in our auditory system , and evidence for their role is found across many species 59\u201363 ., These processes likely take advantage of statistical regularities of sounds in natural environments and reflect the physical constraints of sound generation and propagation ( e . g . two sound sources rarely start at exactly the same time; periodic vibrations induce resonant modes at integer multiples of the fundamental frequency ) ., Here , we examine whether a statistical inference model can learn these cues directly from natural sounds; and if so , how effective these learned cues are relative to existing hand-tailored segregation systems ., The proposed model is designed as a hierarchical system that explicitly mimics an \u2018analysis-then-fusion\u2019 processing pipeline ., The analysis stage is itself laid out in two stages ., First , an analysis of local spectrotemporal cues aims to learn simultaneous Gestalt cues believed to operate over short time scales in order to locally segregate sound elements ., Second , an analysis of more global cues operates over longer time-scales and aims to learn sequential Gestalt cues that enable tracking the dynamics of elements from the first stage at a temporal or melodic level 8 ., Following these stages is a fusion step that combines segregated elements that constitute different auditory objects , using principles of temporal coherence 39 , 64 , 65 ., The Gestalt analysis stages are learned directly from natural sounds in a generative fashion , allowing each component of the model to represent natural sounds from its own vantage point following principles of stochastic neural networks , as detailed next ., The fusion stage merely organizes or fuses these learned patterns following the concept of temporal coherence , as also detailed later ., Fig 1 depicts a schematic of the overall model ., It takes as input the acoustic waveform of an auditory scene u ( t ) and maps it onto a time-frequency representation , using a biomimetic peripheral model from Yang et al .
66 ., Briefly , this transformation analyzes the acoustic signal u ( t ) using a bank of logarithmically-spaced cochlear filters whose outputs are further sharpened via a first order derivative along the frequency axis , followed by half wave rectification and short term integration over 10ms frames ( see Methods for details ) ., This filterbank analysis results in an auditory spectrogram represented by S ( t , f ) ., The following stage ( called L 1 ) is structured as a two-layer sparse Restricted Boltzmann Machine ( sparse RBM ) with a fully connected visible and hidden layer 67 ., It takes as input 3 consecutive frames of the spectrogram and learns a probability distribution over the set of these short tokens ., RBMs are powerful stochastic neural networks that are conceptually similar to autoencoders but can infer statistical distributions over their input set 68 ., An RBM layer is chosen for this stage in order to explore the space of local spectrotemporal tokens and learn latent cues that represent statistical structures in natural sounds over short time scales ., The visible layer units {xk} are real-valued and characterized by a Gaussian distribution fitted over the input spectrogram S ( t , f ) ; while hidden units {hk} are sampled from a Bernoulli distribution , for k = 1 , 2 , \u2026 , K where K is the number of nodes in each layer ., The network is parameterized by \u0398 = {W , A , B} where W represents the interconnected weights between visible and hidden units , and A ( B ) represents the visible ( hidden ) bias , respectively ., The network is trained using a Contrastive Divergence ( CD ) algorithm with the objective of minimizing the reconstruction error between x and x\u0302 = hW + A 69 ., By learning the regularities in local spectrotemporal tokens of natural sounds , the connection weights W effectively span an array of latent cues that reflect the structure of soundscapes ., Our hypothesis is that these latent factors represent the so-called simultaneous cues used as Gestalt principles for sound analysis ., After training , connection weights are transformed into a 2D filter F ( t , f ) , akin to the spectro-temporal receptive fields derived from neural activity of biological neurons in the auditory system 70 ., These learned filters are then applied in a convolutional fashion over the incoming spectrogram S ( t , f ) to derive the outputs of layer L 1 nodes ., These responses are further subjected to a neural adaptation stage , which imposes a dynamic regulation of the response of each filter , hence suppressing units with weak activation ( see Methods for details ) ., L 1 responses are then processed by the next layer in the model , which completes the analysis stage to infer possible sequential cues that extend over longer time constants ., This second layer L 2 is devised as an array of conditional RBMs ( cRBMs ) , which are extended versions of RBMs designed to model temporal dependencies 71 ., Similar to an RBM , a cRBM consists of a visible layer with units {xk} , assumed to arise from a Gaussian distribution fitted over the input , and a hidden layer with {hk} units sampled from a Bernoulli distribution ., Unlike an RBM , a cRBM acts as a dynamical system operating over an entire input history \u03c4 , taking as input occurrences at times {t , t \u2212 1 , \u2026 , t \u2212 \u03c4} in order to capture dynamics in the input space over context \u03c4 ., In the current model , we explore sequential cues over a range of temporal contexts and construct an array of parallel cRBM networks over multiple histories ranging in temporal resolutions from \u03c4 \u223c ( 30\u2013600 ms ) .
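The Gaussian-Bernoulli RBM update used for L 1 above can be sketched in a few lines of numpy; this is a minimal CD-1 illustration assuming standardized inputs and unit-variance visible units, and it omits the paper's layer sizes, sparsity penalty and training schedule:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(X, W, A, B, lr=1e-3):
    """One Contrastive Divergence (CD-1) update for a Gaussian-Bernoulli RBM.
    X: (n_samples, n_visible) spectrogram tokens; W: (n_visible, n_hidden)
    weights; A, B: visible and hidden biases."""
    p_h = sigmoid(X @ W + B)                      # positive phase
    h = (rng.random(p_h.shape) < p_h).astype(float)
    X_rec = h @ W.T + A                           # Gaussian reconstruction
    p_h_rec = sigmoid(X_rec @ W + B)              # negative phase
    W += lr * (X.T @ p_h - X_rec.T @ p_h_rec) / len(X)
    A += lr * (X - X_rec).mean(axis=0)
    B += lr * (p_h - p_h_rec).mean(axis=0)
    return ((X - X_rec) ** 2).mean()              # reconstruction error

# Toy usage: tokens of 3 spectrogram frames x 128 channels, flattened.
n_vis, n_hid = 3 * 128, 100
W = 0.01 * rng.standard_normal((n_vis, n_hid))
A, B = np.zeros(n_vis), np.zeros(n_hid)
tokens = rng.standard_normal((256, n_vis))        # placeholder training data
for epoch in range(10):
    err = cd1_step(tokens, W, A, B)
```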
L 2 is parameterized by \u0398 = {W , A\u03c4 , B\u03c4 , C\u03c4 , D\u03c4} where W represents the interconnected weights between visible and hidden units and captures the interactions across input features over an extended temporal history \u03c4 , A\u03c4 and B\u03c4 represent the visible and hidden biases , respectively , while C\u03c4 and D\u03c4 quantify autoregressive weights between past inputs and the current input ( or current hidden unit , respectively ) ., Just like the localized layer L 1 , the contextual layer L 2 is trained in a generative fashion using contrastive divergence ( CD ) in order to best capture the dynamics in natural sounds , using the same dataset of realistic sounds spanning speech , music and natural sounds ., Here again , our hypothesis is that the stochastic cRBM learns latent parameters \u0398 that reflect the sequential cues underlying dynamics of natural sounds over a wide range of temporal contexts ., Once trained , the model parameters are applied to incoming L 1 filter responses in a linear fashion , yielding a multi-resolution output which is then passed over to the next stage in the hierarchy ( see Methods for details ) ., The next layer in the hierarchy focuses on a fusion operation to facilitate the grouping of perceptually-coherent objects ., This binding stage explores co-activations across all L 2 channels within a given context \u03c4 and binds together the units that exhibit strong temporal coherence 64 , 72 ., The \u2018temporal coherence\u2019 theory posits that the emergence of perceptual representations of auditory objects depends upon strong coherence across cues emanating from the same object and weaker co-activation across cues from competing objects ., This coherence is not an instantaneous correlation but one that is accumulated over longer time scales , commensurate with the contextual windows explored in the L 2 layer ., We implement this concept in a biologically-plausible fashion via mechanisms of Hebbian learning , which posits that when two neurons fire together , their synaptic connection gets stronger 73 ., Effectively , Hebbian interactions operate by reinforcing activity across coherent channels , hence grouping them into putative objects , and inhibiting activity across incoherent channels 74 ., We implement a synaptic interaction across output channels from layer L 2 by introducing a coherence synaptic weight matrix V .
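A minimal sketch of such a coherence-weight update follows (hypothetical variable names and constants; the paper's exact update rule and its application are given in the following text and in Methods):

```python
import numpy as np

def update_coherence(V, r, eta=0.01, decay=0.005):
    """Hebbian update of the coherence matrix V from the instantaneous
    vector r of L2 channel responses at time t (hypothetical rule)."""
    V += eta * np.outer(r, r)    # co-active channels i, j reinforce V[i, j]
    V *= (1.0 - decay)           # weakly correlated pairs decay toward zero
    np.fill_diagonal(V, 0.0)     # no self-connections
    return V

def apply_coherence(V, r):
    """Modulate each channel by its coherence-weighted ensemble input,
    boosting coherent groups and suppressing incoherent ones."""
    return r * (1.0 + V @ r)
```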
If two units i and j are co-activated at a given time t , their corresponding synaptic connection Vij is reinforced over time ., If the correlation between their activity is weak , the corresponding synaptic weight Vij is reduced accordingly ., These synaptic weights are applied to the output of each channel in a dynamic fashion , hence modulating the activity across an entire ensemble of neurons within each context in layer L 2 ., The net effect gives rise to perceptually coherent groups that represent auditory objects in a scene ., A final read-out stage is then appended to the model to extract responses to different stimuli and test the degree of segregation of different objects , as viewed by the model outputs ( see Methods for details ) ., In order to examine the emergent sensitivity of learned layers in the network , we derive the tuning characteristics of individual nodes or neurons and explore their filtering properties in the modulation domain 75 , 76 ., Modulation tuning reflects the stimulus cues that best drive individual nodes in the model , both in terms of temporal variations and dynamics ( i . e . temporal modulations or rates ) and in terms of spectral span and bandwidth ( i . e . spectral modulations or scales ) ., This approach follows common empirical techniques used in electrophysiology and psychophysics to probe the tuning of a system to specific acoustic cues ., It is specifically used to characterize spectro-temporal receptive fields ( STRFs ) , which offer 2-dimensional profiles of filtering characteristics of neurons 70 ., First , we employ a classic transfer function method using probe stimuli in order to derive the tuning of both L 1 and L 2 layers of the network 77\u201379 ., We present modulated noise signals ( called ripples ) as input to the model with varying spectro-temporal modulation parameters ( Fig 2E ) and characterize the fidelity of the ripple encoding at various stages of the network as the ripple modulation parameters are varied 80 ., Each ripple is constructed as a broadband noise signal whose envelope is modulated both in time and frequency , with temporal modulation parameter \u03c9 ( in Hz ) and spectral modulation parameter \u03a9 ( in cyc\/oct ) ( see Methods for details ) ., By sweeping through a range of ripple parameters , we compute a normalized modulation transfer function ( MTF ) from the response of layers L 1 and L 2 , which quantifies the synchronized response of each layer to the corresponding dynamics in the ripple stimulus ( see Methods for details ) ., L 3 is not a trained layer and hence is not subject to this analysis ., Fig 2A and 2B depict the MTF derived from both L 1 and L 2 ., The functions highlight that both layers exhibit a general low-pass behavior along both temporal and spectral modulations ., As expected , layer L 1 is trained over shorter time-scales and does exhibit faster temporal dynamics along the rate axis , while the contextual layer L 2 is mostly tuned to slower dynamics < 30 Hz with a slightly tighter spectral selectivity mostly concentrated below 1 cycle\/oct ., This outcome is very reminiscent of similar transfer functions obtained from neurophysiological data showing contrasting tuning characterizations in the midbrain , auditory thalamus and auditory cortex 81\u201383 , whereby the selectivity of individual neurons along the mammalian auditory hierarchy evolves from faster to slower temporal dynamics and from more refined to broader spectral spans along frequency .
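A ripple probe of the kind described above can be synthesized as follows; this is a generic moving-ripple construction under assumed defaults (tone density, modulation depth and frequency span are illustrative, and the paper's exact stimulus parameters are given in Methods):

```python
import numpy as np

def ripple(omega=4.0, Omega=1.0, duration=1.0, fs=16000,
           f0=180.0, n_oct=5.3, n_tones=128, depth=0.9, seed=0):
    """Moving ripple: a broadband tone complex whose envelope drifts at
    rate omega (Hz) with spectral density Omega (cyc/oct)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration * fs)) / fs            # time axis (s)
    x = np.arange(n_tones) * n_oct / n_tones          # tone positions (oct)
    freqs = f0 * 2.0 ** x                             # log-spaced carriers
    phases = rng.uniform(0.0, 2 * np.pi, n_tones)
    env = 1.0 + depth * np.sin(2 * np.pi * (omega * t[:, None] + Omega * x))
    sig = (env * np.sin(2 * np.pi * freqs * t[:, None] + phases)).sum(axis=1)
    return sig / np.abs(sig).max()
```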
We further examine the selectivity of individual neurons and compare emergent tuning characteristics common across nodes in the network by employing an agglomerative clustering algorithm ( see Methods for details ) ., This approach clusters nodes exhibiting similar tuning profiles into common groups , hence providing insight into the underlying acoustic cues being processed by each cluster ., Fig 2C and 2D show contour plots from the resulting clusters overlaid on the MTF profiles for layers L 1 and L 2 ., The array of clusters indicates that neurons in each of these layers do indeed exhibit a wide variety of selectivity to spectral and temporal dynamics in the input signal ., We specifically note a cluster of L 1 neurons that is more sensitive to fast transients or \u2018onsets\u2019 ., This group is labeled \u2018O\u2019 in Fig 2C ., An example time-frequency profile F ( t , f ) of a neuron in the \u2018O\u2019 cluster is shown in Fig 2F ( upper-right ) ., We also note a spectrally-structured cluster ( labeled \u2018H\u2019 ) centered around spectral modulations \u2208 1-2 cyc\/oct corresponding to harmonic peaks present in natural sounds ., An example neuron from this cluster is shown in Fig 2F ( upper-left ) and highlights the selectivity to specific frequency bands in the input spectrogram ., The clustering procedure also reveals the presence of oriented spectro-temporally selective clusters , likely tuned to detect frequency-modulated sweeps in the signal over different spectrotemporal scales; as well as other clusters with special selectivity to spectral or temporal features ., Fig 2F ( lower panels ) shows an example of two L 1 neurons with different temporal dynamics , contrasting a slow neuron \u2018S\u2019 and a fast neuron \u2018F\u2019 ., We test the model\u2019s behavior with a variety of acoustic scenes ranging from classic streaming paradigms using simple tones to experiments using speech signals ., Crucially , all experiments are tested on the same model ( after all layers have been trained ) , without any adjustment to model parameters ., The stimulus parameters are carefully chosen to closely replicate previously published human perceptual experiments , hence allowing a direct comparison between the model and human perception ., All stream segregation results are shown in Fig 3 , organized in three columns: the stimulus on the left , a replica of human perception of the same stimulus reproduced from the corresponding publication in the center , and the model performance on the right ., As outlined earlier , Fig 3 contrasts the model\u2019s performance against reported human perceptual results in a range of stream segregation experiments ., Next , we reexamine our initial hypotheses; namely that the model is able to infer simultaneous and sequential grouping cues by learning statistical regularities in natural soundscapes ., The experimental results shown in the previous section suggest that simultaneous cues ( tonotopic organization , AM rate , harmonicity , temporal synchrony , etc ) , sequential cues and grouping mechanisms play an important role in streaming paradigms ., In order to shed light on their individual contributions , we run a series of control experiments where we examine how the model malfunctions when certain components of the system are disrupted individually ., The analysis of control experiments quantifies the complementarity of rich feature representation and grouping mechanisms in driving scene segregation ., The proposed architecture faithfully replicates human psychoacoustic behavior on streaming paradigms over a wide range of stimuli ranging from simple tones to speech utterances , as demonstrated in Fig 3 ., In the case of the two-tone streaming paradigm shown in Fig 3A , the network exhibits stream segregation when the two alternating tones are widely separated along the tonotopic frequency axis ., This behavior is consistent with well-established psychophysical and physiological findings of stream segregation induced by differences in tonotopic cues 129\u2013132; and relies heavily on the activation of different groups of neurons with distinct frequency selectivities as captured in L 1 ., In the absence of temporal correlation between these two groups , the temporal coherence layer , aided by the adaptation mechanism , suppresses the anti-correlated groups of units , hence inducing stream segregation in the final stage of the network ., However , when \u0394F is small enough , there is a high degree of overlap , resulting in a single-stream percept ., This segregation\/integration effect is strongly maintained regardless of a number of manipulations to the model architecture ., The key components crucial to the organization of tone sequences are the presence of tonotopic or frequency selectivity combined with temporal integration that examines activity across neural channels at relatively longer time-scales ., This observation is very much in line with the spatio-temporal view of auditory stream segregation , which requires neural channels to be widely separated in addition to temporal asynchrony across these channels 133 ., The interaction of spectral and temporal dynamics during the organization of tone sequences supports the view of stream segregation as a dynamic process ., The buildup effect reported in the current model ( Fig 3B ) is in line with established psychoacoustic behaviors 90 , 134\u2013136 and suggests that segregation of two streams is not instantaneous , but strengthens over time and can lead to segregation when the frequency difference ( \u0394F ) is large enough ., The current model highlights that this effect is in fact reflecting the competition across neural channels as viewed by the temporal coherence layer ., The binding of correlated groups of neurons strengthens over time , while the anti-correlated units are suppressed in the same process ., Interaction across multiple features is also noted in other simulations that pit harmonicity , onsets and temporal dynamics against each other ( Fig 3C , 3D and 3E ) ., Simulations using complex tones directly examine the role of localized spectro-temporal tuning in L 1 as an encoding of simultaneous cues such as harmonicity , onset and fast amplitude modulations among others ., Sequential cues emergent in L 2 are crucial in tracking the activity emerging in the localized layer over longer amplitude modulations , which are then fused together in the last L 3 layer ., Through this rich selectivity learned directly from natural sounds , the network offers a wide span of selectivity across the spectrotemporal space ., This tuning proves effective in tackling complex auditory scenes composed of speech with various interferers ., In line with human perceptual data , the model shows that speech tokens are harder to identify in the presence of utterances from the same corpus than in babble and cafe noise as the signal-to-noise ratio gets smaller ., The model highlights that this variable response is largely caused by the dominance of neural activity from the interfering set relative to the target ., The distinct activation between target and interferer is further blurred in the absence of slow sequential cues which integrate information about the speech utterance beyond just the target number\/color ., As shown in the control experiments , a network that lacks slow sequential cues is further impaired in making a judgment about the identity of the target token , likely due to an enhanced confusion between its representation and that of the interferer ., Once this activity reaches the temporal coherence layer , the weakly responsive neurons get suppressed , hence resulting in the actual number\/color token getting wrongly identified as the one in the interfering utterance ., Overall , the proposed model highlights three key results:, ( i ) Using the right configuration , we are able to infer a wide range of Gestalt cues directly from natural sounds ., The proposed RBM architecture offers a cooperative and nonlinear integration of these cues to result in a multiplexed representation of auditory scenes across various granularities in time and frequency ., By using an unsupervised learning approach , the network is not being optimized for a specific application; rather , it is reflecting the inherent variety of local and global dynamics present in natural sounds ., Possibly , an even deeper neural architecture extending beyond just a few layers could extend the rich feature analysis and fill in the spectrum from local to global , hence adding a more refined mapping along with the nonlinear integration naturally offered by the RBM architecture ., ( ii ) Grouping acoustic features effectively takes an outlook across all active nodes that allows the model to piece together the parts of each auditory object ., This process plays two key roles: a grouping role , by putting together pieces of a sound object ( effectively integrating together pitch , timbre , rhythm and possibly space information that reflect a common object ) ; and an elimination role , by suppressing channels that are irrelevant to the emergence of the foreground object , hence enhancing the signal-to-noise ratio in the network ., Temporal coherence is one such fusion mechanism that has been garnering stronger neural and perceptual evidence 39 , 65 , 125 , 126 ., The current work employs Hebbian learning , a simple biological mechanism that affords such fusion over appropriately chosen time-scales ., ( iii ) Auditory scene segregation is a balancing act between proper feature analysis and fusion mechanisms that give rise to auditory object representations ., While both stages are necessary , neither one is sufficient ., The proposed model offers a unified platform that integrates together these different mechanisms and strategies ., It also bridges the existing physiological theories of scene organization with perceptual accounts of auditory scene analysis ., The proposed model is structured along four key stages: initial data pre-processing by transforming the acoustic signal to a time-frequency representation , a local analysis over short time-scales , a global analysis over an array of longer time-scales , then a fusion stage using temporal coherence ., A final readout of the network activity is implemented to extract information from specific streaming experiments to probe segregation of individual streams in the input scene ., Details of each component of the model are outlined next: The acoustic signal is first analyzed through a model of peripheral processing in the mammalian auditory system , following the model by Yang et al . 66 ., Briefly , it transforms the acoustic stimulus sampled at 8 kHz into a joint time-frequency representation referred to as an auditory spectrogram ., The stage starts with a bank of 128 asymmetric constant-Q filters equally-spaced on a logarithmic axis over 5 . 3 octaves spanning the range 180 Hz to 4000 Hz ( QERB \u2248 4 ) 137 ., By its very nature , the peripheral model uses a non-parametric set of cochlear filters that are fixed over a span of 5 . 3 octaves ( see 66 for details ) ., In the current model , we cap the sampling rate at 8 kHz in order to provide ample coverage over lower frequency regions ., After cochlear filtering , the outputs undergo spectral sharpening via a first order derivative along frequency , followed by half-wa","headings":"Introduction, Results, Discussion, Materials and methods","abstract":"Our current understanding of how the brain segregates auditory scenes into meaningful objects is in line with a Gestaltism framework ., These Gestalt principles suggest a theory of how different attributes of the soundscape are extracted then bound together into separate groups that reflect different objects or streams present in the scene ., These cues are thought to reflect the underlying statistical structure of natural sounds in a similar way that statistics of natural images are closely linked to the principles that guide figure-ground segregation and object segmentation in vision ., In the present study , we leverage inference in stochastic neural networks to learn emergent grouping cues directly from natural soundscapes including speech , music and sounds in nature ., The model learns a hierarchy of local and global spectro-temporal attributes reminiscent of simultaneous and sequential Gestalt cues that underlie the organization of auditory scenes ., These mappings operate at multiple time scales to analyze an incoming complex scene and are then fused using a Hebbian network that binds together coherent features into perceptually-segregated auditory objects ., The proposed architecture successfully emulates a wide range of well-established auditory scene segregation phenomena and quantifies the complementary role of segregation and binding cues in driving auditory scene segregation .","summary":"In everyday life , our brain is able to effortlessly make sense of the cacophony of sounds that constantly enter our ears and organize them into meaningful sound objects ., In this work , we use an architecture based on stochastic neural networks to \u2018learn\u2019 from natural sounds which cues are crucial to the process of auditory scene organization ., The computational model delivers a hierarchical architecture that mimics multistage processing in the biological auditory system ., It learns a rich hierarchy of spectral and temporal features that allow the decomposition of an auditory scene into informative components ., These features are then grouped together into coherent objects based on Hebbian learning principles ., Though trained on unrelated datasets of natural sounds , the model is able to replicate human perception of auditory scenes in a wide variety of soundscapes ranging from simple tone sequences to complex speech-in-noise scenes .","keywords":"acoustics, linguistics, neural networks, engineering and technology, signal processing, social sciences, neuroscience, neuronal tuning, computer and information sciences, animal cells, speech, physics, cellular neuroscience, speech signal processing, cell biology, neurons, biology and life sciences, cellular types,
physical sciences, cognitive science, modulation, acoustic signals","toc":null} +{"Unnamed: 0":1079,"id":"journal.pcbi.1006286","year":2019,"title":"Computational translation of genomic responses from experimental model systems to humans","sections":"Generalization of insights from disease model systems to the human in vivo context remains a persistent challenge in biomedical science ., The association of molecular features with a phenotype in a model system often does not hold true in the corresponding human indication , due to some combination of the fidelity of the experimental system to human in vivo biology and the inherent complexity of human disorders 1\u20137 ., Though it is now routine to collect clinical samples from patients and associate molecular features with clinical phenotypes , there are discrepancies between the phenotypes measurable in patients and those investigable by use of model systems ., Outside of a clinical trial , novel perturbations to the disease system cannot be directly investigated in the patient in vivo context , whereas model systems can be used to study the impact of innumerable perturbations to the disease system and to associate molecular features with these responses ., As a consequence of this discrepancy , murine and other model systems of disease are likely to remain an important part of biomedical research ., Therefore , methods for improving generalizability of mouse-derived molecular signatures to human in vivo contexts are needed for more impactful translational research ., The utility of mouse models for studying inflammatory pathologies was recently assessed by a pair of studies examining the correspondence between gene expression in murine models of inflammatory pathologies and human contexts 1 , 2 ., In these studies , mouse molecular and phenotype data were matched to human in vivo molecular and phenotype data , enabling direct comparison of genomic responses between mice and humans ., These studies analyzed the same datasets and came to conflicting conclusions about the relevance of mouse models for inflammatory disease research , with Seok et al . concluding that mouse models poorly mimic human pathologies and Takao et al . concluding that mouse models usefully mimic human pathologies 1 , 2 ., A key methodological difference between the two studies was that Takao et al . 
examined genes significantly changed in both contexts 1 , 2 ., However , in prospective translational studies , the corresponding mouse and human in vivo datasets and perturbations are rarely available , making accurate pre-selection of genes that change in both human and mouse contexts unlikely ., Therefore , prospective studies will often need to proceed on the basis of molecular changes in the model system alone ., The aim of our study is to develop a machine learning approach to address the challenge of prospective inference of human biology from model systems ., Here , we consider a machine learning approach successful if it correctly predicts a higher proportion of human differentially expressed genes ( DEGs ) and enriched signaling pathways than implicated by the corresponding mouse model ., The essence of our approach is to apply a machine learning classifier to assign predicted phenotypes , derived from a mouse dataset , to molecular datasets of disease-context human samples and to infer human DEGs and enriched pathways downstream of the machine learning model using these inferred phenotypes ., We assessed our approach by testing it on the datasets from the Seok and Takao studies , where mouse phenotypes and gene expression data were matched to patient clinical phenotypes and gene expression data 1 , 2 , 8\u201320 ., While mouse experiments alone failed to capture a large portion of human in vivo biology , using these datasets to train computational models produced more precise and comprehensive predictions of human in vivo biology ., In particular , semi-supervised training of a neural network identified significantly more human in vivo DEGs and pathways than mouse models alone or other machine learning approaches examined here ., We identify aspects of model system study design that influence the performance of our neural network and show that the added benefit of our method is driven by recovery of biological processes not present in the mouse disease models ., Our results suggest that computational generalization of insights from mouse model systems better predicts human in vivo disease biology and that such approaches may facilitate more clinically impactful translation of model system insights ., We assembled a cohort of mouse-to-human translation case studies from the datasets analyzed in Seok et al . and Takao et al . ( Table 1 ) 1 , 2 ., We defined case studies as all pairs of mouse ( training dataset ) and human ( test dataset ) datasets for the same disease condition ., By constructing case studies in this manner , multiple mouse strains and experimental protocols could be compared to different presentations of that same disease in independent human cohorts ., The final cohort consisted of 36 mouse-to-human translation case studies in which mouse-to-human biological correspondence and machine learning translation approaches could be assessed ( Table 2 ) ., Baseline correspondence between each mouse model and human dataset was assessed by differential expression analysis and Gene Ontology ( GO ) pathway enrichment analysis of differentially expressed , homologous mouse and human transcripts ., We computed the precision and recall of the DEGs and pathways with respect to correspondence between mouse and human datasets and summarized these quantities using two F-scores ., The F-score gave equal weighting to the correctness of the DEG and pathway predictions ( precision ) and to how comprehensive the predictions were ( recall ) relative to the human-derived associations ., The F-scores of the machine learning model predictions were calculated by comparing the algorithm-predicted human DEGs and pathways to those derived using the true human phenotypes ., The mouse-predicted DEGs and enriched pathways constituted the baseline performance against which our machine-learning approaches were compared ., We implemented supervised and semi-supervised versions of k-nearest neighbors ( KNN ) , support vector machine ( SVM ) , random forest ( RF ) , and neural network ( NN ) algorithms using Lasso or elastic net ( EN ) regularization as a feature selection method ., By exploring a range of machine learning models with different model structure and varying the regularization parameter \u03b1 , we were able to assess the effect of model structure and feature selection stringency on performance ., In supervised models , a machine learning classifier was trained on the mouse dataset and applied to the human test dataset to infer predicted phenotypes , from which we inferred human DEGs and enriched pathways ., In semi-supervised models , a supervised classifier was initially trained on the mouse data alone to predict the human samples ., Following this first step , the predicted human samples with the highest classification confidence were selected to create an augmented mouse-human training set ( Fig 1 ) ., Retraining with the predicted human samples allowed us to humanize the new classifier using unsupervised information from the human test dataset ., The new classifier was then used to reclassify the human samples ., This procedure of retraining , prediction , merging predicted human samples with the training set , and lowering the confidence threshold at each iteration terminated when the lowered confidence threshold resulted in merging all human samples with the training set ., The phenotypes associated with the human samples at this step were taken as the final semi-supervised model prediction , from which predicted human DEGs and enriched pathways were inferred ., Model DEG and pathway F-scores were computed by comparing the algorithm-predicted DEGs and pathways , using computationally inferred human phenotypes on the human test data , to those identified when using the true phenotypes on the human test data ., We compared the performance of 1 , 728 machine learning classifiers to the mouse-predicted DEG and pathway associations .
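The semi-supervised procedure described above is a self-training loop; the following is a minimal sketch under simplifying assumptions (a single MLP classifier with hypothetical parameters, no Lasso/elastic-net feature selection, generic class labels), not the study's exact implementation:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def self_train(X_mouse, y_mouse, X_human, start_conf=0.95, step=0.05):
    """Iteratively augment the mouse training set with confidently
    classified human samples, lowering the threshold each pass."""
    y_human = np.full(len(X_human), -1)            # -1 = not yet merged
    conf_threshold = start_conf
    while (y_human == -1).any():
        keep = y_human != -1
        X_train = np.vstack([X_mouse, X_human[keep]])
        y_train = np.concatenate([y_mouse, y_human[keep]])
        clf = MLPClassifier(max_iter=1000, random_state=0)
        clf.fit(X_train, y_train)
        proba = clf.predict_proba(X_human)
        conf = proba.max(axis=1)
        pred = clf.classes_[proba.argmax(axis=1)]
        y_human[conf >= conf_threshold] = pred[conf >= conf_threshold]
        conf_threshold -= step                     # relax threshold each pass
    return y_human  # inferred phenotypes for downstream DEG/pathway analysis
```

With the true human phenotypes held out, the returned labels would then feed the same differential expression and GO enrichment pipeline used for the baseline comparison.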
We compared the performance of 1 , 728 machine learning classifiers to the mouse-predicted DEG and pathway associations ., Classifier performance was summarized by the area under the receiver operating characteristic curve ( AUC ) for the accuracy of the predicted human phenotypes and by the F-score of the predicted human DEGs and pathways ., A generalized linear model ( GLM ) was trained to assess the impact of Lasso\/EN regularization \u03b1 values and the type of machine learning classifier on the AUC and DEG F-score performance metrics ., Neither the value of \u03b1 ( p = 0 . 374 ) nor the type of machine learning approach ( p = 0 . 874 ) significantly impacted the AUC ( S1 Table ) ., However , both \u03b1 ( p = 0 . 0000215 ) and the type of machine learning method ( p = 0 . 000902 ) significantly impacted the F-score ( S2 Table ) ., The significance of the regularization parameter and classifier type for the F-score , and not the AUC , suggests that though each model had comparable accuracy , the biological relevance of the predicted phenotypes was significantly influenced by feature selection stringency and machine learning model structure ., Since the F-score directly measured the biological relevance of the predictions made by a particular algorithm , we focused on it as the relevant performance metric , emphasizing biological insight over mere numerical predictive capacity ., We computed the 95% confidence intervals of the F-scores for each machine learning approach and mouse model across all case studies and regularization parameters ( Fig 2A ) ., The overall performance of mouse-derived DEGs for predicting human DEGs was low ( F-score 95% CI 0 . 082 , 0 . 158 ) , and though many models significantly outperformed the mouse , the F-scores were still somewhat low , indicating an imbalance between precision and recall in some case studies ., We investigated the role that the experimental design of the mouse cohorts may have played in this imbalance using a GLM and found that smaller sample sizes and larger class imbalances in the mouse datasets resulted in significantly lower model F-scores ( S3 Table ) ., Though most machine learning models balanced precision and recall , we noted a cluster of models with precision < 0 . 2 and recall > 0 . 3 ( S1 Fig ) ., All of these could be attributed to case studies in which human dataset GSE9960 was the test dataset ( Table 2 ) ., Here , the mouse training datasets consisted of mouse leukocytes , and the poor performance of the models suggests that mouse leukocytes are not reflective of human peripheral blood mononuclear cell ( PBMC ) biology ., We retained case studies with GSE9960 to examine whether our models could add translational value despite this inter-tissue mouse and human discrepancy ., The semi-supervised NN ( ssNN ) , semi-supervised RF ( ssRF ) , KNN , SVM , and RF outperformed the mouse model , with similar behavior found for the precision and recall ( Fig 2A , S4 and S5 Tables ) ., We found that ssNN F-scores were significantly higher than those of all other models , indicating it was the most successful model ( 95% CI 0 . 253 , 0 . 342 , p < 0 . 05 ) .
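Both the DEG and pathway F-scores reduce to set-overlap precision and recall against the associations derived from the true human phenotypes; a small helper with hypothetical gene sets makes the metric concrete.

    def f_score(predicted, truth):
        """F1 score between two sets of DEGs (or enriched pathways)."""
        predicted, truth = set(predicted), set(truth)
        tp = len(predicted & truth)            # true positives
        if tp == 0:
            return 0.0
        precision = tp / len(predicted)        # correctness of the predictions
        recall = tp / len(truth)               # comprehensiveness of the predictions
        return 2 * precision * recall / (precision + recall)

    # e.g., model-inferred DEGs versus DEGs derived from the true human phenotypes
    print(f_score({"LCN2", "ARG1", "IL6"}, {"LCN2", "ARG1", "TNF", "CXCL8"}))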
Finally , we examined the performance of the ssNN across all case studies for each setting of the regularization parameter and found that Lasso regularization ( \u03b1 = 1 . 0 ) had the highest F-score across all case studies ( median F-score = 0 . 281 ) ( S6 Table ) ., Based upon the GLM , the F-scores , and the performance at each value of \u03b1 , we concluded that the ssNN with Lasso regularization was the most broadly effective approach for prediction of human DEGs ., Having identified the ssNN as the most broadly effective model , we examined the genes selected in the semi-supervised training procedure ( Fig 2B and 2C , S7 Table ) ., Most of the genes selected by the ssNN were not concordantly differentially expressed in mouse and human contexts ( Fig 2B ) ., The genes most frequently included in the ssNN models tended either to have strong differential regulation in the human context alone ( e . g . LCN2 ) or to be among those genes that exhibit concordant differential expression in both mouse and human contexts ( e . g . ARG1 ) ( Fig 2C ) ., Recall that the semi-supervised training procedure begins with a model and features informed only by the mouse training dataset , as demonstrated by the cluster of genes exhibiting large mouse fold changes ., That these genes have correspondingly small human fold changes suggests that the neural network is responsive to the addition of predicted human samples in the training procedure and is able to prioritize those genes that are relevant to the human context and ignore those relevant only in the mouse context ( Fig 2B and 2C ) ., We next compared the DEGs and pathways predicted by the ssNN and mouse models in each case study ( Fig 2D , S8 Table ) ., In most cases , the mouse pathway F-score was higher than the DEG F-score , indicating that the mouse models considered here are more predictive of human pathway function than of individual differential expression events ( Fig 2D ) ., The correspondence between the enriched pathways identified by mouse models and human in vivo contexts was relatively consistent across disease indications ( Fig 2D ) , suggesting that mouse models of inflammatory pathologies recapitulate similar proportions of human in vivo molecular biology across indications , independent of disease etiology complexity .
Notable exceptions to this pattern of mouse-human pathway correspondence were the endotoxemia and cecal ligation and puncture ( CLP ) mouse models , none of which had any corresponding human DEGs at permissive statistical thresholds ( WMW p < 0 . 05 , FDR q < 0 . 25 ) ( Fig 2D ) ., Nevertheless , in 9 of 14 endotoxemia or CLP mouse cases , the ssNN characterized a large proportion of human sepsis biology despite being trained on these nonrepresentative mouse models ( Fig 2D ) ., Similarly , in 5 of 6 cases where the human PBMC dataset was the test dataset and mouse leukocyte gene expression was the training set , the ssNN equaled or surpassed the mouse ., These results indicate that the semi-supervised approach provided substantial benefit when mouse models , such as CLP-driven sepsis and LPS-stimulated endotoxemia , did not recapitulate molecular features of human disease biology ., In total , the ssNN predicted an equal or greater proportion of human enriched pathways in 29 of 36 case studies ( Fig 2D ) ., In the other cases , the mouse models of Streptococcus pneumoniae serotype 2 ( SPS2 ) - and Staphylococcus aureus ( SA ) -driven sepsis outperformed the ssNN in particular human cohorts ., A single human sepsis dataset , GSE13015 , in which many of the patients had other infections , was implicated in 3 of these 7 case studies 9 ., This suggests that the C57 strain mouse with SA- or SPS2-driven sepsis is an unusually good direct model for human sepsis with other infectious complications ., The ssNN may have failed to outperform the mouse in these cases due to the heterogeneity of infections in the human cohort , an interpretation supported by the fact that the ssNN outperforms the combined mouse cohort by a wide margin when the A\/J and C57 mouse models are combined into a single training cohort ( Fig 2D ) ., Therefore , when predicting biological associations in a heterogeneous human cohort , the ssNN performs better when trained on a heterogeneous mouse cohort ., This diversity of sepsis mouse models in our cohort made it possible to assess the correspondence of different protocols for generating sepsis mouse models to the human disease context ., While CLP mouse models failed to identify any DEGs , the SPS2 and SA sepsis mouse models were both partially predictive of DEGs and pathways in human sepsis cohorts ., The SA mouse sepsis cohort comprised two mouse strains , the highly susceptible A\/J strain and the somewhat resistant C57BL\/6J strain 8 ., We were therefore able to compare four cohorts of sepsis models ( SPS2-C57BL\/6J , SA-A\/J , SA-C57BL\/6J , and SA-mixed ( A\/J and C57BL\/6J ) ) in order to identify the most representative mouse models of clinical sepsis ., Since pathway predictions had a greater correspondence to human sepsis than DEGs alone , we compared the pathway associations derived from each sepsis mouse model to one another to identify common and distinguishing features of each model ( Fig 3A ) ., In total , 442 pathways and processes were enriched across all human sepsis cohorts , and multiple mouse sepsis models correctly predicted subsets of these pathways ., All mouse models and strains correctly identified a set of 112 pathways , including signaling by FGFR1 , FGFR2 , FGFR3 , and FGFR4 , and MAPK1 signaling ( S9 Table ) ., This pathway signature of human sepsis appears to be highly reproducible in multiple mouse sepsis models , rendering it a stable signature for assessing therapeutic interventions and benchmarking mouse sepsis models against human data .
Examining mouse sepsis model F-scores by component precision and recall revealed that while aggregating predictions across multiple mouse models improves the coverage of human sepsis pathways predicted , it simultaneously degrades the precision of these pathway signatures and ultimately accounts for only half of the totality of human in vivo sepsis signaling ( Fig 3B ) ., This contrasts with our finding that increasing the heterogeneity of the mouse cohort improved the predictive power of the ssNN , suggesting that a heterogeneous mouse cohort contains latent features that the ssNN detects and incorporates into its predictions of human in vivo pathways ., Therefore , a key limitation of these sepsis mouse models appears to be that they lack depth of correspondence between their biological functions and the processes of human in vivo sepsis , and the ssNN is able to recover this missing information through integration with human datasets ., We then compared the combined pathway predictions of all mouse sepsis models to the predictions of the ssNN across all sepsis cases to assess correspondence with human in vivo sepsis pathway signatures ( Fig 3C ) ., The mouse sepsis models identified two pathways that the ssNN missed: the CD28-dependent VAV1 pathway and the oxidative stress-induced senescence pathway ., The oxidative stress senescence pathway was implicated by both of the SA mouse models in isolation , but not by the mixed cohort , while the CD28-dependent VAV1 pathway was specifically implicated in the C57 strain ., Use of a CD28 mimetic peptide has been shown to increase survival in gram-negative and polymicrobial models of mouse sepsis and has been explored as a therapeutic option for human sepsis 21 ., Though the mouse models identified two pathways missed by the ssNN , the ssNN performed with comparable precision to the mouse models overall ( precision = 0 . 72 ) and recovered a strikingly higher proportion of in vivo human sepsis pathways ( recall = 0 . 96 ) ( Fig 3C ) ., Furthermore , the ssNN recovered a set of 163 pathways enriched in human sepsis in vivo that were not identified in any mouse model of sepsis ( S10 Table ) ., These pathways included thrombin signaling and TGF\u03b2 signaling , as well as several RNA transcriptional and post-translational modification-based pathways ( S10 Table ) , all of which the mouse models of sepsis lacked ., Both thrombin and TGF\u03b2 signaling have been shown to play key roles in the pathology of sepsis and have been investigated for therapeutic and prognostic applications in sepsis 22\u201324 ., This result suggests that combining context-associated human data with mouse disease model data recovers important aspects of human in vivo signaling ., The lack of fidelity of mouse models for representing complex human biology is one of the most pressing challenges in biomedical science ., Failures of inter-species translation are likely driven by a combination of evolutionary factors , experimental design limitations , and the challenges of comparing biological function between species and tissues 25\u201327 ., It is well known that particular features exist that translate well between model systems and humans , particularly at the level of pathway function 28 , 29 ., However , a key methodological issue in inter-species translation is to consider what will be knowable in prospective translation of a model system experiment: pre-selection of translatable features is often not possible ., In this study , we demonstrate that semi-supervised training of a neural network is a powerful approach to inter-species translation and show that successful translation is dependent upon the computational method , the model system-to-human tissue pairings , and the experimental design of the model system studies .
The low pathway recall in the sepsis mouse models demonstrates that there are human disease-associated biological functions simply not present in mouse disease biology ., Despite this intrinsic limitation of the mouse , our semi-supervised learning approach prospectively discovers mouse features predictive of human biology , offering a valuable tool for inter-species molecular translation ., The ideal case for characterizing the biology of human disorders would be the availability of comprehensive human phenotype and molecular data from clinical cohorts ., However , since novel perturbations to the disease system cannot be studied in the human in vivo context outside of a clinical trial , mouse disease model systems and emerging human in vitro model systems will continue to play an important role in biomedical research ., It is in this context that we propose a delineation of four categories of Translation Problems , those of generalizing insights from model systems to human in vivo contexts ( Fig 4 ) ., The most challenging case is when only model system molecular and phenotype data are available ( Category 4 ) , where a large proportion of biomedical research falls ., If human-based prior knowledge , such as candidate genes or clinical observations , is available to integrate with model system data , then generalization can be characterized as a Category 3 problem ., In Category 2 problems , condition-specific human molecular data are available to combine with model system molecular and outcome data to characterize human biology ., Inferences from solving Category 2 problems can be further refined with human-based prior knowledge in a Category 1 problem ., Within this framework , our efforts here are best viewed as an approach to Category 2 translation problems , in which we show that ssNN modeling provides a framework for integration of high-throughput , high-coverage datasets from model system and human contexts for molecular translation ., However , different categories of translation problems have datasets with different properties and will likely require alternative computational methods ., In a recent crowd-sourced competition , a series of challenges was posed for translating molecular and pathway responses between rat and human in vitro models ., No computational methods were broadly effective across challenge events , and it appears that none of the competitors employed semi-supervised machine learning approaches 30 , 31 ., This finding supports our delineation of Translation Problems into different categories defined by the coverage and resolution of the data available for model training ., Other computational translation efforts often use information about how genes change between experimental groups in both model system and human contexts 32 , 33 ., A key advantage of our approach to inter-species translation is that information about gene regulation in the human context is not required for successful modeling ., The driving principle of semi-supervised learning ( transfer learning ) is that combining information from multiple domains can enhance model performance ., In these applications , a set of training data ( Xtrain and Ytrain ) is integrated with a context-related dataset ( Xcontext ) to improve the performance of the algorithm , an approach known as inductive transfer learning ., Our approach is an example of transductive transfer learning , where Xcontext = Xtest and the test dataset is incorporated into the algorithm training procedure in an unsupervised manner .
Examining machine learning models with different structures allowed us to assess whether particular model structures resulted in better performance and how responsive different model structures were to a semi-supervised training procedure ., In the case of the KNN and SVM models , the human samples were classified by distances in mouse gene expression feature space , a model structure we found did not gain in performance with semi-supervised training ., By contrast , the NN and RF improved in performance with semi-supervised training , suggesting that these approaches are more responsive to reweighting model features by incorporating unsupervised human information ., Although the NN proved to be the most biologically successful model , direct interpretation of NN model weights and neurons remains challenging ., Here , we use the NN as a prediction-only model and derive biological insights in downstream analyses , though as NN interpretability methods advance it may be possible to gain additional biological insights by direct interpretation of the NN model structure ., Despite advances in the fidelity of model system biology to human contexts , generalizability of findings from model system experiments will continue to be a key issue in both basic biology and translational science research 34 , 35 ., Whenever the model system data alone form the basis of inference , whether through direct interpretation or indirectly through a computational description of the model system\u2019s biology , key aspects of human biology are likely to be overlooked or misrepresented ., Semi-supervised learning approaches , which neither aim for a generalizable computational model nor rely on the model system training data alone , recover more relevant human in vivo biology as a downstream consequence of creating good predictions of human phenotype for a specific patient cohort ., This conceptual shift from direct interpretation of model system data to indirect generalization of model system biology through integration with human data in a semi-supervised learning framework has the potential to aid in successful translation of preclinical insights to patients ., Datasets were obtained from the Gene Expression Omnibus 36 and selected based on their inclusion in two studies comparing mouse and human genomic responses 1 , 2 ., Since we used the human datasets as test datasets and the mouse datasets as training datasets for machine learning applications , we applied the additional criterion that phenotypes and tissues of origin be comparable between mouse model and human in vivo datasets to ensure comparable training and test cases for algorithm performance comparison ., Based on these criteria , we excluded the acute respiratory distress syndrome and acute infection datasets , the mouse splenocyte samples from GSE7404 , the GSE5663 spleen samples from antibiotic-treated sepsis mice , and the GSE26472 mouse liver and lung samples ., The final cohort consisted of 6 mouse cohorts and 7 human cohorts ( Table 1 ) ., Mouse array probe identifiers were converted to gene symbols and mapped to homologous human genes using the Mouse Genome Informatics database 37 , 38 ., If multiple diseases or microarray platforms were used in a dataset , the dataset was partitioned by disease type and array platform to create multiple case studies , resulting in 36 case studies ( Table 2 ) ., Duplicate genes in each dataset in each case study were removed by retaining the copy with the maximum average expression across all samples ., Datasets were z-scored by gene .
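These two final preprocessing steps translate directly into a short pandas sketch; the DataFrame layout (genes as rows, samples as columns) and the function name are assumptions for illustration, not the authors' code.

    import pandas as pd

    def preprocess(expr: pd.DataFrame) -> pd.DataFrame:
        """expr: genes x samples matrix whose index may contain duplicate gene symbols."""
        # For duplicated gene symbols, retain the row with the maximum average expression.
        expr = expr.loc[expr.mean(axis=1).sort_values(ascending=False).index]
        expr = expr[~expr.index.duplicated(keep="first")]
        # z-score each gene across samples.
        return expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0)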
We implemented supervised and semi-supervised versions of the k-nearest neighbors ( KNN ) , support vector machine ( SVM ) , random forest ( RF ) , and neural network ( NN ) algorithms ., Simulations showed that three neighbors were sufficient for training the KNN models ( data not shown ) ., Simulations from 10 to 1000 decision trees showed that 50 decision trees were sufficient for training the RF ( data not shown ) ., The NN was a feed-forward neural network with three layers ., The input layer consisted of one node for each feature , the output layer consisted of two nodes , one for each class , and the hidden layer consisted of the average of the number of input and output nodes , rounded up to the nearest integer ., NN synapse weights were computed using scaled conjugate gradient backpropagation ., Prior to model training , we performed feature selection with either Lasso or elastic net ( EN ) regularization ., Different values of the regularization parameter \u03b1 were examined to assess the impact of varying the number of features selected for training the supervised and semi-supervised classifiers ( \u03b1 = 1 . 0 , 0 . 9 , 0 . 7 , 0 . 5 , 0 . 3 , 0 . 1 ) ., In the case of supervised classification models , Lasso and EN regularization underwent 10-fold cross-validation ( leave-one-out cross-validation for the mouse endotoxemia dataset GSE5663 ) to learn a set of features ., These features were then used to train a supervised classifier ( KNN , SVM , RF , or NN ) on the mouse dataset ., The supervised classification model was then applied to the human dataset for that particular case study to infer predicted human phenotypes ., In the case of semi-supervised models , feature selection was performed on the mouse dataset in the same manner as for supervised models ., These features were then used to train an initial supervised classification model on the mouse data alone to predict the human samples\u2019 phenotypes ., Following this initial training and prediction step , the human samples with the highest 10% of confidence scores on their predicted phenotypes were combined with the mouse dataset to create a new augmented training set ., In the second iteration , feature selection and model training proceeded using this training set of mouse and human samples ., All human samples in the test set were re-classified , and the confidence score threshold for inclusion was dropped by 10% ., Feature selection , model retraining , classification , and training set augmentation continued until all human samples were incorporated into the training set ., Since NN training is inherently stochastic , we specified that the semi-supervised NN would proceed to the second iteration only if more than one human sample was classified into each class ., If this condition was not met after 50 training iterations , the semi-supervised NN proceeded with further training and prediction iterations on the human dataset using an initial model that did not have predicted human phenotypes in both classes ., Classification models were evaluated by their ability to discriminate between human phenotypes and by the extent to which analyzing the human molecular data using the predicted human phenotypes implicated the same genes as using the true human phenotypes ., Classification performance was assessed by the area under the receiver operating characteristic curve ( AUC ) for the test set of human samples .
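A rough analogue of this feature selection and network architecture can be sketched with scikit-learn, under stated assumptions: scikit-learn's MLPClassifier does not implement scaled conjugate gradient backpropagation, so its default solver stands in for it here, and the elastic net regression on numeric class labels is only a proxy for the original cross-validated feature selection.

    import math
    from sklearn.linear_model import ElasticNetCV
    from sklearn.neural_network import MLPClassifier

    def fit_classifier_step(X_train, y_train, alpha=1.0):
        # Elastic net feature selection with mixing parameter alpha (alpha = 1.0 is
        # the Lasso), fit by 10-fold cross-validation; nonzero coefficients define
        # the selected feature set.
        enet = ElasticNetCV(l1_ratio=alpha, cv=10).fit(X_train, y_train)
        selected = enet.coef_ != 0
        # Three-layer feed-forward NN: hidden size is the mean of the input and
        # output node counts, rounded up (two output nodes, one per class).
        n_hidden = math.ceil((selected.sum() + 2) / 2)
        nn = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=1000)
        nn.fit(X_train[:, selected], y_train)
        return selected, nn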
Differential expression analysis was performed on the homologous mouse and human genes using the phenotypes from the original datasets to identify differentially expressed mouse and human genes ., Following model prediction , differential expression analysis was then performed on the human dataset using the predicted phenotypes ., Differential expression was assessed by the Wilcoxon-Mann-Whitney ( WMW ) test with Benjamini-Hochberg false discovery rate ( FDR ) correction ( significance: WMW p < 0 . 05 and FDR q < 0 . 25 ) ., GO enrichment was performed on all DEGs in each case study , for the human data , mouse data , and human data with predicted phenotypes , using the Reactome pathway database annotation option in GO 39 , 40 , 41 ., A DEG or enriched pathway identified in the mouse model was considered a true positive ( TP ) if that gene or pathway was also implicated in the human data analyzed using the true phenotypes ., False negatives ( FN ) were DEGs or enriched pathways implicated in the human data but not implicated by the mouse model ., False positives ( FP ) were DEGs or pathways implicated in the mouse but not in the human data ., DEGs and pathways identified us","headings":"Introduction, Results, Discussion, Materials and methods","abstract":"The high failure rate of therapeutics showing promise in mouse models to translate to patients is a pressing challenge in biomedical science ., Though retrospective studies have examined the fidelity of mouse models to their respective human conditions , approaches for prospective translation of insights from mouse models to patients remain relatively unexplored ., Here , we develop a semi-supervised learning approach for inference of disease-associated human differentially expressed genes and pathways from mouse model experiments ., We examined 36 transcriptomic case studies where comparable phenotypes were available for mouse and human inflammatory diseases and assessed multiple computational approaches for inferring human biology from mouse datasets ., We found that semi-supervised training of a neural network identified significantly more true human biological associations than interpreting mouse experiments directly ., Evaluating the experimental design of mouse experiments where our model was most successful revealed principles of experimental design that may improve translational performance ., Our study shows that when prospectively evaluating biological associations in mouse studies , semi-supervised learning approaches , combining mouse and human data for biological inference , provide the most accurate assessment of human in vivo disease processes ., Finally , we proffer a delineation of four categories of model system-to-human \u201cTranslation Problems\u201d defined by the resolution and coverage of the datasets available for molecular insight translation and suggest that the task of translating insights from model systems to human disease contexts may be better accomplished by a combination of translation-minded experimental design and computational approaches .","summary":"Empirical comparison of genomic responses in mouse models and human disease contexts is not sufficient for addressing the challenge of prospective translation from mouse models to human disease contexts ., We address this challenge by developing a semi-supervised machine learning approach that combines supervised modeling of mouse datasets with unsupervised modeling of human disease-context datasets to predict human in vivo differentially expressed genes and enriched pathways ., Semi-supervised training of a feed forward neural network was the most efficacious model for
translating experimentally derived mouse biological associations to the human in vivo disease context ., We find that computational generalization of signaling insights substantially improves upon direct generalization of mouse experimental insights and argue that such approaches can facilitate more clinically impactful translation of insights from preclinical studies in model systems to patients .","keywords":"machine learning algorithms, medicine and health sciences, pathology and laboratory medicine, neural networks, animal models of disease, applied mathematics, neuroscience, learning and memory, simulation and modeling, animal models, algorithms, sepsis, model organisms, mathematics, signs and symptoms, artificial intelligence, cognition, experimental organism systems, memory, research and analysis methods, computer and information sciences, animal studies, gene expression, mouse models, memory recall, diagnostic medicine, genetics, biology and life sciences, physical sciences, cognitive science, machine learning","toc":null} +{"Unnamed: 0":141,"id":"journal.pcbi.1006835","year":2019,"title":"OptRAM: In-silico strain design via integrative regulatory-metabolic network modeling","sections":"Microbial-based cell factories can be used to advance environmentally friendly and economically viable industrial bioprocesses ., Various strategies have been suggested to modify industrial strains to improve desired product yields ., Traditional methods of strain screening mainly rely on mating , hybridization and mutagenesis techniques 1 , 2 , which are time consuming and costly , and have struggled to keep up with current industrial needs ., In 1991 , Jay Bailey proposed the term metabolic engineering to show how using recombinant DNA and other techniques could improve specific metabolic activity in cells by manipulating enzymes , transporters , and regulation to make cells meet human-specified goals 3 ., Rational strain design methods suggest particular genes or enzymes to alter in order to achieve desired strain characteristics for metabolic engineering 4 ., Systems biology is a powerful approach to uncover genotype-phenotype relationships , which can guide rational design-build-test iterations on strains to improve phenotypic properties in metabolic engineering ., Next-Generation Sequencing ( NGS ) 5 and semi-automatic annotation techniques 6 have produced an increasing number of well annotated microbial genomes , enabling the collection of reasonably comprehensive information about which metabolic enzymes are encoded ., This information has greatly contributed to the reconstruction of the genome-scale metabolic models of various organisms 7 ., GEnome-scale metabolic Models ( GEMs ) are mathematical representations of the complete network of known biochemical reactions that can occur in a particular cell , assembled as a collection of metabolites , reaction stoichiometries , compartmentalizations , and gene-protein-reaction associations 8 , 9 ., One of the main analysis approaches of GEMs is the well-known Flux Balance Analysis ( FBA ) 10 , which can predict phenotypes for cells under different genetic and environmental conditions based on the stoichiometric matrix without requiring kinetic parameters 11 , 12 ., It has been demonstrated that computational simulation on GEMs can predict effective engineering strategies for strain design 13 , 14 ., Since the first strain design method OptKnock 15 was proposed in 2003 , several computational methods for efficient automated identification of 
genetic strain modifications have been developed , such as RobustKnock 16 , OptGene 17 , OptORF 18 , GDLS 19 , and FSEOF ( Flux Scanning based on Enforced Objective Flux ) 20 ., These algorithms have already yielded successful strain design applications ., In an early example , Fong et al . designed E . coli strains for lactate production with a maximum 73% increase by using OptKnock 21 ., Researchers from Tianjin University utilized a GEM of B . subtilis and elementary mode analysis to design an engineered strain for isobutanol production , and experimentally verified a 2 . 3-fold increase compared to the wild-type strain 22 ., Recently , Otero et al . designed a strain using OptGene to overproduce succinate in S . cerevisiae , and experimentally validated a 43-fold improvement in succinate yield on biomass after directed evolution 23 ., However , a metabolic model alone has a significant limitation in revealing condition-specific metabolic activity 24 , 25 , because gene regulation plays an important role in constraining the particular metabolism available under any given condition ., Also , the complex crosstalk mechanisms between gene regulation and metabolism are not captured by a metabolic model alone ., To overcome this limitation , methods that systematically integrate a transcriptional regulatory network and a metabolic network have been developed 26 , including regulatory Flux Balance Analysis ( rFBA ) 27 , steady-state rFBA ( SR-FBA ) 28 , Probabilistic Regulation of Metabolism ( PROM ) 29 , and Integrated Deduced REgulation And Metabolism ( IDREAM ) , developed by our group 30 ., Modification of gene regulatory circuits is an important strategy for improving engineered strains 14 ., In fact , modifications of regulatory factors ( e . g . upregulation of biosynthetic pathway activators ) contribute to more than half of the genetic operations in E . coli and S . cerevisiae engineered strains , but most of these interventions are based on human intuition 31 ., Therefore , some strain design methods have utilized transcriptional regulation information to propose more effective metabolic engineering strategies ., OptORF 18 was the first approach using integrated regulatory-metabolic models , following the bilevel optimization framework of OptKnock ., In 2011 , the heuristic strain design method OptGene was also updated to a version that incorporates integrated regulatory-metabolic models 32 ., In 2012 , a series of approaches based on minimal cut sets ( MCSs ) was further developed to include a new tool ( rcMCSs ) that incorporates regulatory constraints 33 ., However , since the above algorithms used manually curated integrated regulatory-metabolic models , in which the regulatory network is a Boolean network , there are some limitations to their application ., Firstly , only some well-studied microorganisms may have existing integrated networks , such as E . coli 34 , M .
tuberculosis 35 , and yeast 36 ., Reconstructing such models requires extensive manual adjustment and additional information for generating Boolean logic rules in the regulatory network 37 , which hinders the ability of these algorithms to be broadly applicable across many organisms ., Secondly , these algorithms have to assume that the target gene is completely active or inactive , which ignores the range of possible regulatory intensities between regulatory factors and the target genes ., In addition , Boolean networks can only suggest the manipulation of transcription factors by knockout ( ON to OFF ) and cannot provide guidance for more quantitative adjustment of transcriptional regulation ., Recently , a method named Beneficial Regulator Targeting ( BeReTa ) used gene expression to infer the interactions between regulatory factors and target genes , combined with FSEOF 20 , to identify transcription regulators whose manipulation can enhance desired production ., According to the correlations between the expression levels of different transcription factors and target reaction flux rates , beneficial scores are calculated to judge whether a transcription factor enhances or inhibits the target reaction ., The algorithm was applied to E . coli , as well as to S . coelicolor , which does not currently have an integrated metabolic-regulatory model ., BeReTa represents a significant advance , but it cannot predict an expected product rate or yield for the mutant , or make predictions about combined manipulations of multiple sites ., Herein , we report a new strain design algorithm , named OptRAM ( Optimization of Regulatory And Metabolic Networks ) , which can identify combinatorial optimization strategies including overexpression , knockdown , or knockout of both metabolic genes and transcription factors , based on our previous IDREAM integrated network 30 ., OptRAM also aims to achieve optimal coupling between biomass and target production , and can be used for strain design in bacteria , archaea , or eukaryotes ., Another advantage of OptRAM compared with previous heuristic approaches is that we systematically evaluated the implementation cost of different solutions and selected strain designs that are more likely to be achievable in experiments ., Saccharomyces cerevisiae S288c has been studied and simulated extensively through a series of models 38 reconstructed based on the genome sequence and literature annotations 39 ., We used the latest metabolic reconstruction , Yeast 7 .
6 , which includes 3493 metabolic reactions , 2220 metabolites , and 909 metabolic genes ., Integration of a gene regulatory network with a metabolic network at the genome scale poses significant challenges , in part because they are distinct network types requiring very different modeling frameworks ., PROM uses probabilities to represent gene states and TF\u2013gene interactions inferred from abundant gene expression data , and then uses these probabilities to constrain the fluxes through the reactions controlled by the target genes 29 ., A limitation of PROM is that a pre-built transcriptional regulatory network is required as an input ., In our previous work , we developed a framework called Integrated Deduced REgulation And Metabolism ( IDREAM ) , which uses bootstrapping-EGRIN-inferred 40 , 41 transcription factor ( TF ) regulation of enzyme-encoding genes , and then applies a PROM-like approach to impose regulatory constraints on the metabolic network ., In yeast , we collected 2929 microarray datasets covering 5939 yeast genes and evaluated 392 of those genes as possible regulators ., For each of the 5939 target genes , we constructed separate models from 200 randomly selected subsets of the 2929 experiments , as well as a 201st model constructed using the entire data set ., This resulted in 201 gene regulatory models for each of the 5939 yeast genes , for a total of 1 , 193 , 739 models ., For each gene , we estimated a false discovery rate ( FDR ) for each factor by tallying the fraction of models from random subsets that identified that factor as a regulator ., Thus , if factor X was predicted to regulate gene Y in 191 of 200 models , then X would have an FDR = 1 \u2212 191\/200 = 0 . 045 ., If X is predicted to activate Y with an FDR of 0 . 045 , only 4 . 5% of Y\u2019s activity would be predicted to remain if X were deleted ., If X is predicted to deactivate Y , then we use the much larger 1 \u2212 FDR ( e . g . , 95 . 5% of activity ) to represent that Y is somehow disturbed without a significant reduction in activity .
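The FDR tally reduces to counting how often a factor appears as a regulator across the bootstrap models; a toy illustration with hypothetical regulator sets:

    from collections import Counter

    def regulator_fdr(bootstrap_models):
        """bootstrap_models: one set of inferred regulators per bootstrap model."""
        n = len(bootstrap_models)
        counts = Counter(tf for model in bootstrap_models for tf in model)
        return {tf: 1 - k / n for tf, k in counts.items()}

    # a factor found in 191 of 200 bootstrap models gets FDR = 1 - 191/200 = 0.045
    models = [{"TF_X"}] * 191 + [set()] * 9
    print(regulator_fdr(models))  # {'TF_X': 0.045...}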
We included only those interactions that passed an FDR cutoff of 0 . 05 ., Then we predicted whether a factor was an activator or repressor by testing whether its mRNA expression was correlated or anti-correlated with the expression of its target , using the model from the entire expression dataset ., Finally , we retrieved an integrated regulatory-metabolic network including 2626 inferred influences , consisting of 91 TFs transcriptionally regulating 803 genes encoding enzymes of the metabolic network ., It should be noted that it is impossible to cleanly differentiate between the false discovery rate and the strength of the regulatory role for multi-cell microarrays or RNA-Seq , because from bulk measurements we cannot distinguish a strong regulation occurring in a small portion of cells from a weak regulation happening in a large fraction of the cell population ., In other words , because we are using a ground-up mixture of cells , we might reasonably expect a few cells with high expression driven by a rarely used promoter to appear similar to low-level expression from a frequently used promoter ., Perhaps single-cell RNA-Seq will help disentangle this problem in the future ., OptRAM is a meta-heuristic strain optimization method based on an integrated model of an inferred regulatory network and a constraint-based metabolic network ., It aims to identify modifications of TFs and metabolic genes , including overexpression , knockdown , and knockout , that achieve maximal production of a desired chemical ., OptRAM simulates a series of mutations to find the optimized strategy for target overproduction by simulated annealing ., We adopt 11 kinds of mutations of the expression of TFs or metabolic genes , represented as FC ( TF ) and FC ( G ) : overexpression and knockdown with fold changes of 2 , 4 , 8 , 16 , and 32 , respectively , plus knockout , as shown in Table 1 .
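One way to encode this mutation alphabet for a sampler, shown purely for illustration ( Table 1 defines the authors' actual encoding ):

    # Illustrative encoding of the 11 mutation kinds applied to a TF or metabolic gene:
    # five overexpression levels, five knockdown levels, and one knockout.
    FOLD_CHANGES = [2, 4, 8, 16, 32]
    MUTATIONS = (
        [("overexpression", fc) for fc in FOLD_CHANGES]
        + [("knockdown", 1 / fc) for fc in FOLD_CHANGES]
        + [("knockout", 0.0)]
    )
    assert len(MUTATIONS) == 11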
The expression levels of these genes are translated to the corresponding metabolic reactions through the integrated network ., First , the expression levels of metabolic genes are calculated according to the expression of the corresponding transcription factors ., In the EGRIN algorithm , a linear equation relating the target gene and its TFs is generated:

\mathrm{target} = \mathrm{coeff}_1 \mathrm{TF}_1 + \mathrm{coeff}_2 \mathrm{TF}_2 + \cdots + \mathrm{coeff}_n \mathrm{TF}_n \quad ( 1 )

where target is the expression level of a target gene regulated by n TFs , \mathrm{TF}_i is the expression level of TF i , and \mathrm{coeff}_i is the corresponding coefficient of each TF ., In OptRAM , for a target gene regulated by one TF whose mutated relative expression level is tfExpr , the relative expression level of the target gene is calculated as:

\mathrm{targExpr} = 2^{\mathrm{coeff} \times \log_2 \mathrm{tfExpr}} \quad ( 2 )

When a target gene is affected by more than one TF , the expression level of the target gene is calculated as:

\mathrm{targExpr} = 2^{\sum_{i}^{n} \mathrm{coeff}_i \times \log_2 \mathrm{tfExpr}_i} \quad ( 3 )

Given the relative expression levels of all metabolic genes , the change of the relevant reactions , represented as FC ( R ) , is calculated according to the gene-reaction rules in the metabolic model ., For reactions with \u2018AND\u2019 rules over different genes , we selected the minimum relative gene expression level , since \u2018AND\u2019 indicates that a combination of multiple enzymes is required , so the enzyme with the lowest expression determines the upper bound of the reaction rate ., For reactions with \u2018OR\u2019 rules over different genes , the mean relative gene expression level is calculated , because \u2018OR\u2019 indicates that multiple enzymes have the same catalytic function and can substitute for each other ., Therefore , the average expression level of the enzymes in the set better reflects the upper bound of the reaction rate ., While the average was used herein , it would also be reasonable to use the maximum of the enzymes being expressed in this scenario ., In order to simulate the flux changes of reactions induced by a gene expression mutation , we first need a reference flux value for each reaction , which is obtained by the pFBA ( parsimonious enzyme usage FBA ) method 42 ., pFBA is an algorithm based on FBA ., For a metabolic network with M metabolites and N reactions , the FBA formulation is:

\mathrm{Maximize} \; v_{\mathrm{objective}} \quad \mathrm{subject\ to} \quad \sum_{j=1}^{N} S_{ij} v_j = 0 , \; i = 1 , \ldots , M ; \quad lb_j \le v_j \le ub_j , \; j = 1 , \ldots , N \quad ( 4 )

where S_{ij} is the stoichiometric coefficient of metabolite i in reaction j , v_j is the flux of reaction j , and lb_j and ub_j are the constraints for reaction j ., The most commonly used objective function ( v_{\mathrm{objective}} ) is biomass synthesis 43 ., Here the simulation condition for yeast metabolic flux was set to correspond to YPD medium , with glucose at 20 g\/L and all other carbon sources blocked ., The pFBA algorithm proceeds in three steps ., First , the maximum biomass rate is obtained by FBA with the original model ., Second , the biomass constraint is set equal to this maximum biomass value ., Finally , a new objective function is set as the minimization of the total flux carried by all reactions , to generate the flux distribution ., According to the reference flux values from pFBA and the level of expression change for the mutated reactions , we set the new reaction constraints as shown in Table 2 , where FC ( R ) is the change of the reaction , v is the reference flux value , and lb and ub are the original lower and upper bounds of the reaction .
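Eqs 2 and 3 and the AND\/OR gene-reaction rules can be sketched in a few lines of Python; the rule data structure here is a flattened, single-operator simplification assumed for illustration ( real GPR rules can nest ).

    import math

    def target_expression(tf_fold_changes, coeffs):
        """Eq 3: propagate TF fold changes to a target gene (Eq 2 is the one-TF case)."""
        return 2 ** sum(c * math.log2(fc) for c, fc in zip(coeffs, tf_fold_changes))

    def reaction_fold_change(rule, gene_fc):
        """Evaluate a flattened gene-reaction rule: 'AND' -> min, 'OR' -> mean."""
        values = [gene_fc[g] for g in rule["genes"]]
        if rule["op"] == "AND":
            return min(values)             # limiting enzyme bounds the reaction rate
        return sum(values) / len(values)   # 'OR': substitutable isozymes, use the mean

    # e.g., a gene under two TFs, one overexpressed 4-fold and one unchanged
    print(target_expression([4.0, 1.0], [0.8, -0.3]))  # 2 ** (0.8 * 2) ~ 3.03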
In the previous meta-heuristic strain optimization methods , such as OptGene , BPCY ( biomass-product coupled yield ) is used as the objective function 17:

\mathrm{BPCY} = \frac{\mathrm{Product} \times \mathrm{Growth}}{\mathrm{Substrate}} \quad ( 5 )

where Product represents the flux of the reaction synthesizing the desired product , Growth represents the biomass flux , and Substrate represents the uptake rate of the nutrient substrate ., The ultimate goal of the optimization algorithm is to identify the mutated solution with the largest BPCY value , which ensures considerable growth while improving the target product ., A limitation of the simulation using pFBA is that this framework does not guarantee that the target reaction flux will be coupled to biomass ., That is , even if the BPCY score of a mutated solution is high , the flux value of the target reaction may be unstable at maximal biomass ., Because the flux variability of the target reaction can span a wide range , and the minimum flux may even be zero , there is no guarantee that the target product will have a certain output under natural growth ., Moreover , since the objective function of pFBA is biomass , there is often no flux through the desired target reaction , although the flux range of that reaction may extend from 0 to a positive value ., In this situation , BPCY remains 0 and the algorithm reports no feasible solution ., Therefore , we defined a new objective function ( Eq 6 ) in OptRAM to couple the maximization of biomass production and the target reaction of interest:

\mathrm{Obj} = \frac{\mathrm{Product} \times \mathrm{Growth}}{\mathrm{Substrate}} \times \left( 1 - \log \frac{\mathrm{Range}}{\mathrm{Target}} \right) \quad ( 6 )

where \mathrm{Target} = \frac{V_{max} + V_{min}}{2} , \mathrm{Range} = \frac{V_{max} - V_{min}}{2} , V_{max} is the maximum flux value of the target reaction , and V_{min} is the minimum flux value obtained by FVA ( flux variability analysis ) 44 ., Target is the average flux value of the target product ., Range is set to half the interval between the minimum and maximum target flux values ., When V_{min} is 0 , \mathrm{Range} \/ \mathrm{Target} = 1 and the coefficient ( 1 - \log ( \mathrm{Range} \/ \mathrm{Target} ) ) is 1 .
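A direct transcription of Eqs 5 and 6 is shown below, under stated assumptions: the FVA bounds are treated as given numbers, the logarithm base ( taken as 10 here ) is not specified in the text, and a small epsilon guards the degenerate case Range = 0.

    import math

    def optram_objective(product, growth, substrate, v_max, v_min, eps=1e-9):
        """Eq 6: BPCY (Eq 5) rewarded for a narrow, nonzero target flux range under FVA."""
        bpcy = product * growth / substrate
        target = (v_max + v_min) / 2.0
        rng = max((v_max - v_min) / 2.0, eps)  # avoid log(0) if the range collapses
        if target <= 0:
            return 0.0                         # no guaranteed production
        return bpcy * (1 - math.log10(rng / target))  # log base assumed, not stated

    # With v_min = 0 the reward coefficient is 1 and plain BPCY is recovered:
    print(optram_objective(product=2.0, growth=0.3, substrate=10.0, v_max=4.0, v_min=0.0))  # 0.06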
And when V_{min} is greater than 0 , \mathrm{Range} \/ \mathrm{Target} < 1 and the coefficient is greater than 1 , acting as a reward coefficient for BPCY ., Compared to BPCY , this objective function induces solutions with a higher and narrower flux range of the target product , which reduces the uncertainty caused by alternative solutions in constraint-based modeling ., Hence , by using the refined objective function , OptRAM can provide solutions with better biomass-product coupling ., Fig 1 illustrates the strategy of the OptRAM approach , and the detailed pipeline can be downloaded from the supplemental files ( S1 Script and S1 Code ) ., OptRAM requires a transcriptional regulatory network ( or a gene expression data set ) and a genome-scale metabolic model as input ., The IDREAM method is then run to obtain an integrated model ., For organisms with no existing TRN , users can input a set of expression data from which the IDREAM method will automatically infer the TRN and generate an integrated model ., The core strain design process within OptRAM then simulates a series of mutations to find the optimized strategy for target overproduction ., The output from OptRAM includes the maximized objective score , the flux of the target reaction , and the corresponding mutated solution with suggested modifications of TFs and\/or metabolic genes ., We used simulated annealing for the core strain design step; simulated annealing is able to accept a worse solution in the early stage ( avoiding getting stuck in local maxima ) in pursuit of the global optimum when screening mutated models ., The simulated annealing algorithm is derived from the simulation of the solid annealing process , an idea first proposed by Metropolis in 1953 45 ., In 1983 , Kirkpatrick et al . introduced the idea of simulated annealing into the field of optimization problems , making the algorithm practical in engineering 46 ., The simulated annealing algorithm introduces the Metropolis criterion , which helps escape local optima , to accept new solutions , including not only better solutions but also worse ones , according to a probability ., By simulating the drop in temperature , the algorithm controls its parameters during the process and gives an approximate optimal solution in polynomial time ., In our algorithm , we replace the internal energy of the annealing process with the refined objective function Obj ( Eq 6 ) , which is the prospective score for each mutated strain ., The following steps show the implementation of the simulated annealing algorithm in our specific optimization problem: 1 . Initialize the simulated annealing parameters , including the initial temperature T0 of the control parameter T , the attenuation factor ( \u03b1 < 1 ) , and the maximum number of iterations L at each temperature ., Then generate the initial solution , Ind0 ., 2 . When T = T ( k ) , search L times according to the following process: ( 1 ) For the current solution Indk , randomly mutate the expression of TFs and metabolic genes , translate the mutation to its effects on reaction fluxes , and obtain a new Obj score and a new mutated solution Indk' ., ( 2 ) Calculate \u0394Obj = Obj ( Indk' ) \u2212 Obj ( Indk ) , where Obj ( Ind ) is the value of the objective function for each solution ., ( 3 ) If \u0394Obj > 0 , then Indk' is accepted as the new solution , letting Indk = Indk' ; otherwise , generate a random number R from the uniform distribution on ( 0 , 1 ) and calculate the probability P according to the Metropolis criterion:

P = e^{\u0394Obj \/ T ( k )} \quad ( 7 )

If R
10 \u03bcm away from the apical surface could be automatically displaced through direct nucleus\u2013nucleus repulsions ( probably between G1-phase cells ) alone , as long as movements of other chronologically different nuclei ( i . e . , apical arrival of G2-phase cells\u2019 nuclei\/somata and the initial basal displacement of early G1-phase cells\u2019 nuclei\/somata in the subapical space , within 10 \u03bcm ) were secured ( S4B\u2013S4G Fig ) , supporting the previously proposed idea of passive IKNM 9 , 21 , 25 , 26 ., By contrast , the subapical ( within 10 \u03bcm ) nuclei\/somata strongly required a basal acceleration by the \u201cnon-collision\u201d mechanism , the absence of which resulted in severe disruption of overall IKNM and the VZ structure ( Fig 7E\u20137G , S25 Movie ) , suggesting that this initial basalward step may be critical or rate-limiting for a high degree of overall pseudostratification ., As we experimentally demonstrated in Fig 6 , the initial 10-\u03bcm basalward step may strongly depend on an external , elasticity-dependent mechanism ., Therefore , it is very likely that the initial basal nucleokinesis step , which is more rapid and directional than the subsequent ( more basal ) nucleokinesis ( Fig 5 ) and requires external elasticity , is critical for ordered brain histogenesis ., In diverse proliferative cell types , each non\u2013M-phase cell will eventually enter M phase , and the M-phase cell will then generate two non\u2013M-phase ( daughter ) cells ., Non\u2013M-phase cells and M-phase cells are therefore in a producer\u2013product relationship ., This chronological ( cell cycle ) partnership depends on intracellular chemical reactions , such as the activation and disappearance of cyclins , but a recent study on the Drosophila epithelium showed M-phase cells\u2019 nonchemical ( physical ) contribution to tissue morphogenesis 44 ., Following the previously established concept of a mother-to-daughter morphological gift in the developing mouse cortical VZ , i . e . , asymmetric inheritance of each M-phase cell\u2019s basal process\/fiber by one daughter cell 6 , 18 , the present study revealed that each M-phase cell also gives mechanical energy to both daughter cells , with elastic assistance from the densely packed apical processes of neighboring non\u2013M-phase cells ( via contractility of the apical surface ) ( Fig 8 ) and from other M-phase cells that divide ( generally increasing the pressure of the subapical space ) ( S4H Fig ) ., These mother-to-daughter ( intra-clonal ) physical gifts assist the daughter cells\u2019 prompt nuclear\/somal movement away from the subapical space ., Thus , the established initial basal nucleokinesis enables non\u2013M-phase cells to have thin and flexible apical processes throughout the subapical space , which is permissive for the voluminous division of new M-phase cells ., This permissiveness can be regarded as a mechanical gift from non\u2013M-phase cells to M-phase cells ., These bidirectional physical collaborations ( S4H Fig ) may underlie efficient and safe intra-neuroepithelial nuclear\/somal logistics and protect the subapical space from local overcrowding ., The existence of too many nuclei\/somata in the subapical space prevents progenitor cells from freely dividing at the apical surface and induces them to abnormally leave the apical surface , leading to heterotopic divisions and disruption of histogenesis 18 ., Thus , give-and-take relationships ( i . e .
, mutualism ) exhibited spatially and physically between M-phase cells and non\u2013M-phase cells ( S4H Fig ) are essential for ordered brain development ., Previous studies showed that the basal nucleokinesis is mediated by active intracellular mechanisms dependent on kinesin\/microtubules 23 and actomyosin 22 ., Our IKNM simulation ( virtual VZ ) ( Fig 7 ) showed that collective basalward IKNMs at basal VZ levels ( more than 10 \u03bcm from the apical surface ) can occur almost passively using soma\u2013soma collisions between G1-phase cells\u2019 nuclei\/somata as a major driving force ., Although this result is consistent with a model of the passive basal IKNM suggested by Norden et al . 21 , 25 and Kosodo et al . 26 , the present study cannot address the relative importance of the collision-based passive mechanism compared to the cell-intrinsic ( kinesin- and\/or actomyosin-dependent ) basal nucleokinesis mechanisms ., In the subapical space ( within 10 \u03bcm from the apical surface ) , we found a novel passive IKNM mechanism mediated by tissue elasticity and indirect energy transfer ., We cannot precisely determine the relative importance of this boosting mechanism compared to the kinesin- and\/or actomyosin-dependent mechanisms in the subapical space ., Nevertheless , the existence of the Windkessel-like ( elasticity-based ) boosting mechanism is strongly suggested based on ( 1 ) the elastic property of the subapical space ( Fig 3 ) , ( 2 ) the more directional displacement of nuclei\/somata during the initial 30 min than later periods ( with more superlinear MSD curves ) ( Fig 5 ) , and ( 3 ) the decompression and compression experiments that resulted in deceleration and acceleration of initial basal nucleokinesis , respectively ( Fig 6 ) ., We speculate that a Windkessel-like boosting mechanism may collaborate with cell-intrinsic basal nucleokinesis mechanisms ., For example , centripetal mechanical stimuli applied externally from the subapical space might also trigger intracellular molecular machinery for active nucleokinesis , a possibility to be studied in the future ., Similarly , cell cycle progression , which is associated with ( or upstream to ) intracellular molecular mechanisms for nucleokinesis , might also be modulated by external mechanical stimuli\u2014another question to be addressed for better system-level understanding of intra-VZ collective behaviors of NPCs ., Atomic force microscopy ( AFM ) revealed that the elastic modulus on\/near the apical surface was much greater ( 1 , 400 Pa ) 45 than that in more basal VZ regions ( about 100 Pa ) 46 ., The present study showed that an elasticity-based mechanism assists in an initial basalward step in IKNMs ., Actually , diverse biological systems for coordinating heterogeneous movements or flows similarly utilize elasticity as a means to minimize total energetic expenditure ., In mammalian running 47 , 48 or insect flying 49 , elastic energy stored at one stage in the stride or wingbeat is released at another ., Likewise , in blood circulation , the aorta flexibly receives blood ejected from the heart and elastically recoils to forward it ( Windkessel effect 28 ) ., Thus , such a commonly used strategy might participate in multiple aspects or phases in the VZ growth or the overall brain development , and perhaps in the neocortical evolution ., We previously showed that elasticity ( stiffness ) measured on\/near the apical surface of the neocortical VZ by AFM was greater in ferrets than in mice 45 ., We also found in 
slice culture that NPCs\u2019 initial basal nuclear\/somal movement was quicker in ferrets than in mice 27 ., It is therefore possible that elasticity in the subapical space participates in the differential IKNM behaviors between mice and ferrets ., Our mathematical IKNM simulation of a virtual VZ showed that the initial basalward nucleokinesis step is important for the overall high degree of pseudostratification ( Fig 7 ) ., The degree of pseudostratification ( i . e . , the thickness of NE\/VZ ) increases from mice to monkeys 13 and even further in humans 14 , 15 ., It would be meaningful to study the possible contribution of tissue-level or cell-level elasticity to the thickening of the VZ during evolution ., Our AFM revealed that single dissociated NPCs are stiffer in mice than in ferrets 45 ., Adding more such measurable parameters to our virtual VZ method would improve our simulation and expand its applicability to species other than mice ., The animal experiments were conducted according to the Japanese Act on Welfare and Management of Animals , the Guidelines for Proper Conduct of Animal Experiments ( published by the Science Council of Japan ) , and the Fundamental Guidelines for Proper Conduct of Animal Experiment and Related Activities in Academic Research Institutions ( published by the Ministry of Education , Culture , Sports , Science and Technology , Japan ) ., All protocols for animal experiments were approved by the Animal Care and Use Committee of Nagoya University ( No . 29006 ) ., R26-Lyn-Venus transgenic mice ( accession No . CDB0254K ) , R26-H2B-mCherry transgenic mice ( accession No . CDB0239K ) 18 , 50 , and R26-ZO1-EGFP transgenic mice ( accession No . CDB0260K ) 45 , 51 were provided by Toshihiko Fujimori ( NIBB , Japan ) ., Pregnant ICR mice were obtained from SLC ., Embryos at the mid-embryonic stage ( embryonic day E13 \u00b1 1 ) were used ., Image processing , nuclear tracking , and calculation of cross-sectional area and fluorescence intensity were performed using ImageJ ., The fluorescence intensity of Lifeact-EGFP at the level of the soma or the basal pole ( as depicted in Fig 4C and 4D ) was determined by measuring the maximal pixel intensity ( i . e . , the brightest spots or bands ) on the horizontally sectioned cell cortex ., Reconstructed fluorescent 3D images were obtained using Volocity ( PerkinElmer ) ., \u201cDeparture\u201d of each newborn daughter cell\u2019s nucleus\/soma from the subapical space under en face observation ( as discussed in Fig 6 ) was defined , in the horizontal sectional plane 5 \u03bcm from the apical surface , as the time when its diameter became smaller than 3 \u03bcm ( including complete disappearance ) ., Onset of basal displacement of the nucleus\/soma of a newborn daughter cell under cross-sectional observation ( as discussed in Fig 5 , S3D and S3E Fig ) was defined as the time when the mass center of the nucleus\/soma had moved 1 . 5 \u03bcm basally ., MSD was obtained based on time-dependent changes of nuclear position along the apicobasal axis , as described previously 18 .
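For reference, the MSD of a one-dimensional apicobasal position trace sampled at fixed intervals can be computed directly; this sketch is illustrative and is not the authors' analysis pipeline.

    import numpy as np

    def msd(positions):
        """Mean squared displacement of a 1-D apicobasal position trace."""
        x = np.asarray(positions, dtype=float)
        return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in range(1, len(x))])

    # superlinear MSD growth suggests directed (rather than diffusive) movement
    print(msd([0.0, 1.5, 3.1, 4.4, 6.0]))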
5-\u03bcm z intervals ., Cell borders at the apical surface level were extracted from the zero-crossing points for first-order derivatives obtained by applying 1D Savitzky-Go","headings":"Introduction, Results, Discussion, Materials and methods","abstract":"Neural progenitor cells ( NPCs ) , which are apicobasally elongated and densely packed in the developing brain , systematically move their nuclei\/somata in a cell cycle\u2013dependent manner , called interkinetic nuclear migration ( IKNM ) : apical during G2 and basal during G1 ., Although intracellular molecular mechanisms of individual IKNM have been explored , how heterogeneous IKNMs are collectively coordinated is unknown ., Our quantitative cell-biological and in silico analyses revealed that tissue elasticity mechanically assists an initial step of basalward IKNM ., When the soma of an M-phase progenitor cell rounds up using actomyosin within the subapical space , a microzone within 10 \u03bcm from the surface , which is compressed and elastic because of the apical surface\u2019s contractility , laterally pushes the densely neighboring processes of non\u2013M-phase cells ., The pressed processes then recoil centripetally and basally to propel the nuclei\/somata of the progenitor\u2019s daughter cells ., Thus , indirect neighbor-assisted transfer of mechanical energy from mother to daughter helps efficient brain development .","summary":"The development of large brain structures , such as the mammalian cerebral cortex , depends on the continuous and efficient production of cells by neural progenitor cells ., Neural progenitor cells are elongated and span the developing brain wall ., The nuclei and bodies of these cells move cyclically between the apical and basal surfaces , and they divide every time they reach the apical surface ., While we understand how individual cells achieve this cycle , how the movements of several progenitor cells are coordinated with one another remains elusive ., By using a combination of live imaging and mechanical experiments , coupled with mathematical simulations , we show that cell crowding at the apical surface , where progenitor cells divide , creates a subapical microzone that is compressed and elastic ., We then show that when each mother cell rounds up , preparing for division , it pushes this elastic microzone laterally , thereby storing mechanical energy ., After cell division , this mechanical energy is transferred to the daughter cells , propelling them along the axis of movement in the direction of the basal surface , in an energy-saving manner ., Our mathematical simulations show that timely departure of newly generated daughter cells is critical for the overall tissue structure of the cerebral proliferative zone .","keywords":"g1 phase, classical mechanics, engineering and technology, cell cycle and cell division, lasers, cell processes, g2 phase, mechanical energy, molecular motors, actin motors, stem cells, microscopy, optical equipment, motor proteins, research and analysis methods, contractile proteins, animal cells, proteins, scanning electron microscopy, physics, biochemistry, cytoskeletal proteins, cell biology, equipment, electron microscopy, myosins, biology and life sciences, cellular types, physical sciences","toc":null} +{"Unnamed: 0":1411,"id":"journal.pcbi.1004808","year":2016,"title":"Reconstruction of Tissue-Specific Metabolic Networks Using CORDA","sections":"Genome-wide Metabolic Reconstructions ( GEMs ) computationally model the molecules and reactions responsible for 
metabolism in any given organism , and have been applied across a variety of fields including metabolic engineering and evolutionary analysis 1 ., Computational methods developed to study GEMs 2 have generated novel hypotheses about the structure of metabolic networks in microorganisms , and helped elucidate gaps in our knowledge of metabolism 3 , 4 ., Since the publication of the comprehensive human metabolic reconstruction Recon1 5 , human GEMs have enabled the study of human metabolism at a genome level 6 ., These studies include the prediction of novel metabolic functions 7 , prediction of metabolic biomarkers for congenital genetic disorders 8 , 9 , context analysis of omics data 10\u201312 , comparison between humans and other mammals through gene homolog mapping 13 , 14 , and prediction of suitable cancer drugs 15 , 16 and drug targets 17\u201319 ., A particularly prolific subfield of human GEMs is the development of tissue-specific reconstructions ., Different groups of metabolic reactions occur in different cell types ., Hence , numerous studies have been dedicated to generating tissue specific or cell specific models of metabolism 20 , 21 ., These tissue-specific reconstructions can be built by piecing together the model based on previously established biological evidence obtained by reviewing the literature 22\u201326 , through the integration of omics data and computational methods in order to tailor generic , published human reconstructions 5 , 9 , 27\u201329 to the desired cell type 15 , 16 , 30\u201333 , or through a combination of computational algorithms and manual curation 27 , 28 , 34\u201336 ., Automated tissue-specific reconstruction algorithms developed to date can be broadly categorized into two groups 20: \u201cflux-dependent\u201d and \u201cpruning\u201d methods ., Flux dependent methods find an optimal flux distribution through the general reconstruction which contains the maximum number of high confidence reactions ( i . e . 
reactions whose presence is supported by significant experimental data ) 15 , 31 , 32 , 37\u201339 ., These algorithms have been successfully used to predict gene essentiality in cancer tissues 19 , 33 , cancer-specific metabolic pathways 31 , metabolic biomarkers for congenital genetic disorders 8 , 9 , and cancer-specific anti-growth factors 15 , 16 ., One of the main advantages of flux-dependent methods is the fact that they predict a flux distribution along with the tissue-specific model 20 ., While this characteristic can be desirable , it also renders flux-dependent reconstructions \u201csnapshots\u201d of the metabolic state defined by the data , as opposed to comprehensive , functional metabolic models 15 , 20 ., The second category of tissue-specific reconstruction methods comprises pruning algorithms , which include MBA 34 , mCADRE 30 and fastCORE 40 ., Models generated using these algorithms have been used to calculate metabolic flux values in hepatocytes 34 , identify pathways specific to cancer 30 , and predict cancer drug targets 17 , 18 ., These algorithms start with a core set of reactions , obtained through literature review or experimental data , and proceed by removing the remaining reactions in the generalized human reconstruction while maintaining functionality in the core set ., In these algorithms , a tradeoff can be defined between keeping the model as concise as possible and including all core reactions ., That is , if a core reaction requires too many undesirable reactions to carry flux , the algorithm may remove this core reaction from the tissue model , a tradeoff referred to as a flexible core ., There are two main advantages to defining a core set of reactions before performing the tissue-specific algorithm ., The first advantage is the possible inclusion of multiple sources of data and biochemical information 20 , 34 ., The definition of the reaction core is left to the user\u2019s discretion , allowing for both the combination of data sources and the manual inclusion of reactions ., Secondly , reactions with overwhelming evidence are always included in the final tissue model , since a non-flexible set of high confidence reactions can be defined 20 ., This pruning approach then allows for the construction of comprehensive tissue models , containing all reactions that may be in a tissue\u2019s metabolism , as opposed to a snapshot of the metabolic state returned by the flux-dependent methods 15 , 20 ., Current pruning methods are also accompanied by two major limitations , however ., First , the order in which reactions are removed from the model plays a major role in the final reconstruction ., Second , similar to flux-dependent methods , current algorithms aim to keep the final tissue reconstruction as concise as possible , an approach referred to as parsimonious ., These algorithms aim to remove from the tissue-specific model all reactions for which experimental data is unsupportive or unavailable , such as reactions with low levels of gene expression or non-gene-associated reactions ., While a concise tissue-specific reconstruction is desirable , keeping the reconstruction as parsimonious as possible may lead to the removal of fundamental reactions and to physiologically unlikely flux distributions ., In Recon 1 , for instance , oxygen and H2O exchange reactions can be removed from the reconstruction with no effect on model functionality ( Fig 1A ) ., During simulations , however , these would be replaced by the uptake of the toxic metabolites superoxide anion 
and hydrogen peroxide respectively , leading to the prediction of physiologically inaccurate flux distributions ( Fig 1A ) ., The oxygen exchange reaction is in fact not present in the MBA and mCADRE liver reconstructions , and the water exchange reaction is not present in the mCADRE liver reconstruction ., Hence , in order to ensure our algorithm did not rely on alternative , physiologically unlikely pathways , and that it was independent of any ordering assignments , we chose to take an approach which was not parsimonious ., Here we introduce a novel tissue-specific reconstruction algorithm based on Cost Optimization Reaction Dependency Assessment ( CORDA ) ., CORDA returns a concise , functional tissue-specific reconstruction , and features a flexible reactions core ., CORDA does not depend on Flux Variability Analysis 41 or Mixed Integer Linear Programming ( MILP ) problems , but only on Flux Balance Analysis 42 ( FBA ) , which is dependent on Linear Programming ( LP ) ., This characteristic renders CORDA considerably faster than previous , similar methods ., Finally , the CORDA algorithm returns reaction associations that assist in any manual curation to be performed following the automated reconstruction process ., In line with previous studies 43 , we apply CORDA to generate a library of 76 healthy and 20 cancer-specific metabolic reconstructions ., These reconstructions enabled us to identify metabolic similarities amongst healthy tissues as well as key differences between healthy and cancerous tissues ., Furthermore , by sampling the feasible solution space in cancer and healthy models , this library can be used to predict the up- and down-regulation of cancer-specific pathways in cancer metabolism ., The CORDA algorithm is based on a novel approach to identify the dependency of desirable reactions ( i . e . reactions with high experimental evidence ) on undesirable reactions ( i . e . reactions with no experimental evidence ) , a method referred to here as dependency assessment ., In the dependency assessment approach , the metabolic network is modified in four ways ( Fig 1B ) ., First , reversible reactions are split into forward and backward components ., Second , a pseudo-metabolite is added as a product for every reaction in the model ., At this point , undesirable reactions will carry a higher stoichiometric coefficient for this added metabolite , assigning these reactions a higher \u201ccost\u201d ., Third , a reaction consuming this pseudo-metabolite is added to the model ., Finally , a positive lower bound is set for the reaction being tested in order to force that reaction to carry flux ., After modifying the network , FBA ( Materials and Methods ) is performed while minimizing the flux through the cost-consuming reaction ( Fig 1B ) ., The flux distribution returned will then use high cost , undesirable reactions only as necessary for the reaction being tested to carry flux ., Throughout the manuscript , we will refer to high cost reactions predicted to carry flux as associated with the reaction being tested ., In order to identify pathways with the same cost ( i . e . 
same number of undesirable reactions ) , multiple dependency assessments can be performed while adding a small amount of noise to the cost of each reaction .
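To make this dependency assessment concrete, here is a minimal linear-programming sketch using scipy; the function and variable names are hypothetical, and the authors' actual implementation (per the Materials and Methods) is a MATLAB routine built on the COBRA toolbox, so this illustrates the idea rather than reproducing their code:

```python
import numpy as np
from scipy.optimize import linprog

def dependency_assessment(S, costs, test_rxn, eps=1.0, ub=1000.0):
    """Flux distribution that forces `test_rxn` to carry flux >= eps
    while minimizing total flux through costly (undesirable) reactions.

    Assumes S (metabolites x reactions) already has reversible reactions
    split into irreversible forward/backward components, and that `costs`
    holds a high coefficient for undesirable reactions, low otherwise.
    """
    m, n = S.shape
    # Pseudo-metabolite produced by every reaction in proportion to its
    # cost, plus one extra reaction that consumes it.
    S_aug = np.zeros((m + 1, n + 1))
    S_aug[:m, :n] = S
    S_aug[m, :n] = np.asarray(costs, dtype=float)
    S_aug[m, n] = -1.0                      # cost-consuming reaction
    c = np.zeros(n + 1)
    c[n] = 1.0                              # minimize cost consumption
    bounds = [(0.0, ub)] * (n + 1)
    bounds[test_rxn] = (eps, ub)            # force the tested reaction on
    res = linprog(c, A_eq=S_aug, b_eq=np.zeros(m + 1), bounds=bounds,
                  method="highs")
    return res.x[:n] if res.success else None
```

Undesirable reactions that carry flux in the returned distribution would be flagged as associated with the tested reaction, and repeating the call with small random perturbations of `costs` mimics the noise step just described.

Using this dependency assessment , we have developed the CORDA algorithm for the reconstruction of tissue-specific models ( Fig 1C ) ., CORDA takes as input the reactions in the generalized human reconstruction separated into high ( HC ) , medium ( MC ) , and negative ( NC ) confidence groups ( see Materials and Methods section for a detailed description ) ., All remaining reactions in the reconstruction ( i . e . non-gene-associated reactions or reactions for which no data is available ) are designated as others ( OT ) ., All HC reactions are included in the model , and the maximum number of MC reactions is included while minimizing the inclusion of NC reactions ., While the definition of these four reaction groups is left to the user\u2019s discretion , here we categorize them according to proteomics data from the Human Protein Atlas ( HPA ) 44 , 45 and a methodology used in previous studies 30 , 32 , 37 ( Materials and Methods ) ., To begin the algorithm , all HC reactions are moved into the tissue reconstruction ( RE ) ., In a first step , MC and NC reactions associated with each RE reaction ( which are the same as the HC group at this point ) are identified using the dependency assessment and moved into the RE group ., In a second step , NC reactions associated with a high number of MC reactions are identified and moved into the tissue model , and all remaining NC reactions are blocked ( upper and lower bounds set to zero ) ., Next , all MC reactions still able to carry flux are also moved to the RE group ., In the final step of the algorithm , all OT reactions associated with any RE reaction are moved to the RE group for the final tissue-specific model ., A detailed description of the CORDA method , including detailed steps , algorithm parameters , and categorization of model reactions , is available in the Materials and Methods section ., Following the algorithm validation , we generated a library of 76 healthy and 20 cancer tissue-specific models using CORDA ., In order to generate the most comprehensive models possible , we used the generalized human reconstruction Recon2 9 in the calculation of this library ., Recon2 is one of the most comprehensive human reconstructions performed to date , containing approximately twice as many reactions as Recon1 , 1 . 7 times more unique metabolites , and 1 . 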
2 times more unique genes ., Details of how the reconstructions were calculated can be found in the Materials and Methods section ., Here we introduced a novel tissue-specific algorithm based on Cost Optimization Reaction Dependency Assessment ( CORDA ) ., CORDA relies solely on FBA , rendering it more computationally efficient than previous methods ., CORDA takes a non-parsimonious approach to the reconstruction process , based on the addition of valuable reactions to the reconstruction as opposed to the removal of non-essential reactions ., We showed that the CORDA algorithm provides reconstructions that agree better with experimental data , and that demonstrate better metabolic functionality , than prior methods like MBA and mCADRE ., Furthermore , CORDA provides reaction associations that can greatly assist subsequent manual curation , while keeping the reconstructions only slightly larger than those of previous parsimonious approaches ., Monte-Carlo sampling analysis also demonstrates that the CORDA-generated models provide better predictions of tissue-specific functionality ., In addition to the algorithm validation , we generated a library of 76 healthy and 20 cancer tissue-specific reconstructions , which show considerable agreement with our current knowledge of healthy tissue and cancer metabolism ., First , as an initial validation of our cancer and healthy tissue models , we computationally predicted metabolites that are more frequently essential in cancer models than in healthy tissues 15 , 16 , 54 ., Two metabolites were implicated in this analysis: phosphatidylethanolamine ( pe_hs ) and triglyceride ( tag_hs ) , both of which are part of metabolic pathways previously implicated as cancer-specific 15 , 16 ., While future work is merited to identify more specific essential metabolites ( e . g . through the inclusion of more comprehensive metabolic tasks in the tissue reconstruction process , and more metabolites in the essential metabolite identification algorithm ) , these results help validate the cancer and healthy tissue reconstructions presented here ., Following this analysis , we demonstrated that the tissue models calculated by CORDA cluster largely according to tissue type .
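In outline, this kind of clustering can be reproduced by encoding each model as a boolean vector of reaction inclusions and applying hierarchical clustering on Jaccard distances; the sketch below uses random placeholder data rather than the study's models and simply illustrates the analysis pattern:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# rxn_matrix: one row per tissue model, one boolean column per reaction
# in the generalized reconstruction (True if CORDA kept the reaction).
rng = np.random.default_rng(0)
rxn_matrix = rng.random((96, 7440)) < 0.4      # placeholder for 96 models

# Jaccard distance ignores reactions absent from both models, which
# suits sparse inclusion vectors better than Euclidean distance.
dist = pdist(rxn_matrix, metric="jaccard")
tree = linkage(dist, method="average")
groups = fcluster(tree, t=10, criterion="maxclust")   # e.g., 10 clusters
```

Similar clustering patterns , based on gene expression and proteomics data , have been observed experimentally ., In particular , based on the expression of over 30 , 000 genes across multiple individuals and tissues , one study found that brain , muscle , and liver tissues , as well as Epstein-Barr virus-transformed lymphocytes , form well-defined groups , while skin , adipocytes , and nerve tissues cluster closely together 117 ., A separate study used in the generation of the HPA , based on protein evidence from almost 17 , 000 protein-coding genes in 44 major tissues and organs , also showed that tonsils , spleen , appendix , and lymph node tissues cluster closely together , and that bone marrow clusters separately , but close to these lymphoid tissues 45 ., Evidence supporting many of the apparent exceptions identified by our clustering analysis is also available ., For instance , Uhl\u00e9n et al . found that brain and liver tissues , along with testis , cluster considerably apart from other tissues and closer to each other , which is what we observed by clustering the CORDA models ., The same study found that prostate tissue clusters closely with salivary glands 45 ., It is worth noting that good agreement with the data by Uhl\u00e9n et al . 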
is expected , given that a subset of this data was used to generate the tissue-specific models ., This agreement , however , suggests that the similarities between tissues shown by Uhl\u00e9n et al . 45 and Mel\u00e9 et al . 117 at the gene expression and protein level are also present at the level of metabolic enzymes ., Additionally , breast and salivary glands are known to share many morphological features , and studies have shown that both can give rise to tumors with similar morphology 118 , 119 and myoepithelial differentiation 120 ., These findings may explain why breast and salivary glands clustered with epithelial and myoepithelial cells , as opposed to glandular cells ., Finally , skin cancer and non-Hodgkin\u2019s lymphoma appear frequently as secondary cancers in immunosuppressed individuals 121 , 122 ., This could lead to cancers with significantly different metabolic profiles , supporting their separation from the remaining cancer models ., Clustering of tissue-specific models according to subsystems has also highlighted many differences between healthy and cancerous tissues at the pathway level ( Fig 5 ) ., Evidence for many of these differences is also available in the literature , including: Single reactions included most often in cancer or healthy tissue models were also analyzed , and again literature evidence has been found to support many of them ( Table 3 ) ., Two surprising findings stemmed from this analysis ., First is the predicted down-regulation of CoA synthesis reactions , implicated in both the subsystem and single reaction analyses ., Upon further inspection , we traced this differential inclusion to the gene PPCS , the only gene related to this pathway included in the reconstruction process , which is significantly down-regulated in cancer cells 44 , 45 ., Second , the exclusion of ACOAO7p from most cancer models is also unexpected , since this reaction is part of the fatty-acid oxidation pathway , which has been shown to be up-regulated in cancer tissues 123 , 124 ., Protein evidence of this reaction\u2019s associated gene , ACOX1 , supports this exclusion from cancer models 44 , 45 , suggesting an alternate pathway for palmitoyl-CoA oxidation in cancer tissues ., Finally , Monte-Carlo sampling was also performed in all healthy and cancer tissue models ., Sampling results demonstrate that cancer models show an increased capacity through pathways that are largely up-regulated in cancer metabolism , and a reduced capacity through pathways previously shown to be down-regulated ., Interestingly , mitochondrial respiration showed a slightly reduced and tightly constrained capacity in cancer over healthy tissue models , despite the presence of a larger number of oxidative phosphorylation reactions in cancer models ( Fig 5 ) ., For decades , the role of mitochondrial respiration was thought to be decreased in cancer tissues due to their high glycolytic capacity ., In recent years , however , researchers have shown that this pathway actually plays an important role in cancer metabolism 125 , 126 ., Our results suggest that although a larger number of oxidative phosphorylation reactions are present in cancer models , the activity of this pathway is tightly regulated by cancer metabolism topology ( Fig 6 ) .
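Statements of this kind can be read as tail probabilities over the sampled flux distributions; a toy numpy illustration follows, with placeholder sample arrays rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder Monte-Carlo flux samples for cytochrome c oxidase in a
# healthy and a cancer model (arbitrary units).
flux_healthy = rng.normal(6.0, 2.0, 10_000)
flux_cancer = rng.normal(5.0, 0.5, 10_000)    # tightly constrained

hi, lo = 9.0, 2.0
p_high = {"healthy": float(np.mean(flux_healthy > hi)),
          "cancer": float(np.mean(flux_cancer > hi))}
p_low = {"healthy": float(np.mean(flux_healthy < lo)),
         "cancer": float(np.mean(flux_cancer < lo))}
# A narrow cancer distribution yields a low probability of both very
# high and very low fluxes, i.e., a tightly regulated pathway.
```

On one hand , the low probability of cancer models reaching high cytochrome c oxidase flux values compared to healthy tissues is in line with cancer\u2019s high glycolytic potential ., At the other extreme , the low probability of cancer models reaching 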
relatively low cytochrome c oxidase sampled fluxes is in line with the key role played by mitochondrial respiration in cancer metabolism uncovered in recent years ., We have also investigated the differences in glycine hydroxymethyltransferase capacity in cancer versus healthy tissue models ( S1 Text ) ., This reaction is dependent on two proteins , SHMT1 and SHMT2 , which correspond to the cytosolic and mitochondrial isozymes , respectively ., Both these proteins have been shown to be up-regulated in cancer relative to healthy tissues 127 , although SHMT2 has been so to a greater extent 71 , 127 ., The overexpression of these proteins , however , has been shown to be heavily dependent on cancer type 127 ., This claim is supported by the protein expression of SHMT2 in the HPA , where half the cancer types considered have samples with both high and not detected SHMT2 expression ., This variability could explain why the distribution of reactions associated with these genes is similar between cancer and healthy tissue models ( S1 Text ) ., Some cancer types , however , show a considerable increase in SHMT2 expression when compared to their healthy counterparts , including breast , glioma , head and neck , lung , stomach , testicular , and thyroid cancer ., In all but one of these models ( glioma ) , the flux distribution of glycine hydroxymethyltransferase was shown to be considerably shifted towards higher values when compared to their healthy counterparts ( S1 Text ) ., These results demonstrate CORDA\u2019s ability to predict cancer type-specific functionality , and not only differences between all cancer and healthy tissues taken together ., The CORDA tissue-specific reconstruction algorithm , as well as the healthy and cancer tissue-specific reconstructions presented here , introduces a new approach for the development of comprehensive tissue-specific metabolic reconstructions ., These reconstructions can generate novel insights into both healthy and diseased human metabolic behavior ., Furthermore , the ability of CORDA to generate models based solely on experimental data , along with the computational efficiency of this algorithm , allows for continuous updates of this library of tissue-specific models , both as more experimental data are updated and made available , and as more comprehensive human metabolic reconstructions are developed ., While previous methods determined reaction dependencies using Flux Variability Analysis ( FVA ) , the CORDA algorithm takes a different approach , referred to here as dependency assessment ., The novelty of this method lies not in the LP formulation itself , which is the same as the widely established Flux Balance Analysis ( FBA ) , but in the model modifications performed prior to the application of FBA , as well as the interpretation of the flux distribution returned ., Assuming we want to test whether a given reaction , x , is dependent on the presence of a group of reactions , Y , to carry flux , CORDA proceeds in five steps ., The parameters required for the CORDA algorithm are summarized in Table 4: It is worth noting that the high cost reactions implicated in step five are not necessarily essential for x to carry a flux \u00b1\u03f5 , but are the set of reactions in Y that , combined , carry the minimal amount of flux ., That is , no flux distribution through the metabolic network allows for the predefined flux through x with a lower combined flux through the reactions of Y . 
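In optimization terms, the property just described corresponds to a linear program of roughly the following form (the notation is mine, not the paper's):

```latex
% Minimal-combined-flux formulation of the dependency assessment:
\min_{v}\; \sum_{j \in Y} v_j
\quad \text{s.t.} \quad
S v = 0, \qquad
v_x \ge \epsilon, \qquad
0 \le v_j \le v_j^{\max} \;\; \forall j
```

Any reaction in Y carrying positive flux at the optimum is recorded as associated with x; in CORDA the objective weights enter through the pseudo-metabolite costs rather than an explicit sum over Y.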
For instance , if one of the reactions in Y deemed associated with x were to be removed from the reconstruction , x would still be able to carry a flux \u00b1\u03f5 , but the combined flux through the reactions in Y would be larger than before ., This way , this dependency assessment does not minimize the number of undesirable reactions needed to allow x to carry flux , but instead the combined flux through them ., Naturally , however , a lower number of reactions would more easily allow for a lower combined flux ., It is also for this reason that throughout the manuscript we use the term associated instead of dependent ., Throughout the literature , referring to one reaction as dependent on another means the removal of the latter from the model negates the former\u2019s ability to carry flux , which is not necessarily the case for the reaction associations defined here ., Another significant advantage of this dependency assessment over previous pruning algorithms is that it requires only the LP problem solved during FBA , rendering it much faster than previous methods ., While MBA and mCADRE used a much faster variation of FVA , that variation is still considerably more computationally expensive than LP ., Although mCADRE is up to three orders of magnitude faster than MBA 30 , the mCADRE model used in this study took about 4 hours to calculate on a 2 . 34 GHz CPU with 4 GB of RAM using the IBM CPLEX solver 30 ., The CORDA reconstruction , on the other hand , using the same data and general human reconstruction , took under 30 minutes on a 2 . 66 GHz CPU with 4 GB of RAM using the Gurobi solver 128 ., In order to obtain a tissue-specific metabolic reconstruction using this dependency assessment , we define the Cost Optimization Reaction Dependency Assessment ( CORDA ) algorithm ., This algorithm takes as input the reactions in the generalized human reconstruction divided into four categories: Here , we also allow for the inclusion of metabolic tasks in the HC group ., That is , during the CORDA algorithm , sinks can be specified for given metabolites , and added to the model when tested to ensure the final tissue model can produce these metabolites ., These reactions are added when being tested and then immediately removed from the model , so that none of these metabolic task reactions are present when other reactions are being tested , and no two test reactions are present in the model at the same time ., The 32 metabolic tasks included in all CORDA reconstructions in this manuscript are available in S1 Table ., While the definition of these reaction groups can be left to the user\u2019s discretion , here we defined the four groups according to proteomics data from the HPA 44 , 45 , and boolean gene-reaction rules included in the generalized reconstructions Recon1 and Recon2 ., In the HPA , each protein is classified as being Not Detected , or present at Low , Medium or High levels in each tissue ., The gene-reaction association rules are composed of gene names and \u201cAND\u201d and \u201cOR\u201d boolean associations ., For instance , the reaction r0634 in Recon2 has the boolean rule \u201cHADHB AND ( ACAA2 OR ACAA1 ) \u201d , and can therefore be considered active if the gene HADHB , as well as ACAA2 or ACAA1 , are active ., Using this boolean mapping , gene IDs were first replaced by the numerical values -1 , 1 , 2 , and 3 , corresponding to Not Detected , Low , Medium and High protein expression levels , respectively ., Genes not included in the dataset were assigned a numerical value of zero ., Next , AND boolean 
associations were replaced by the function MIN; OR boolean associations were replaced by the function MAX; and the expression was evaluated ., Reactions with a final score of 3 were assigned to the HC group; reactions with scores of 1 or 2 were assigned to the MC group; and reactions with a score of -1 were assigned to the NC group ., Reaction scores of -1 , 1 , 2 , and 3 also correspond to the Not Detected , Low , Medium , and High expression levels shown in Fig 2 ., As an example , HADHB is expressed at low levels in cerebellum Purkinje cells; ACAA2 is not detected; and ACAA1 is expressed at high levels ., The r0634 gene-reaction rule mentioned above would then be replaced by \u201cMIN ( 1 , MAX ( -1 , 3 ) ) \u201d , which evaluates to 1 ., During the Purkinje cell reconstruction , this reaction was then placed in the MC group ., Similar approaches have been used by previous studies to assign reaction confidence scores 30 , 32 , 37 .
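As a concrete illustration of this boolean scoring scheme, here is a minimal Python sketch; the helper names and the eval-based evaluation strategy are mine, not the authors' MATLAB implementation:

```python
import re

LEVEL_SCORE = {"Not Detected": -1, "Low": 1, "Medium": 2, "High": 3}

class Score(int):
    # AND keeps the weaker piece of evidence, OR the stronger one.
    def __and__(self, other):
        return Score(min(self, other))
    def __or__(self, other):
        return Score(max(self, other))

def score_reaction(rule, protein_levels):
    """Evaluate a boolean gene-reaction rule, e.g.
    'HADHB AND ( ACAA2 OR ACAA1 )', into a confidence score."""
    def substitute(match):
        token = match.group(0)
        if token in ("AND", "OR"):
            return "&" if token == "AND" else "|"
        level = protein_levels.get(token)       # None if gene missing
        return "Score(%d)" % LEVEL_SCORE.get(level, 0)

    expr = re.sub(r"[A-Za-z_][A-Za-z0-9_]*", substitute, rule)
    return int(eval(expr, {"Score": Score}))

# Purkinje-cell example from the text: HADHB Low, ACAA2 Not Detected,
# ACAA1 High  ->  MIN(1, MAX(-1, 3)) = 1, i.e., the MC group.
levels = {"HADHB": "Low", "ACAA2": "Not Detected", "ACAA1": "High"}
assert score_reaction("HADHB AND ( ACAA2 OR ACAA1 )", levels) == 1
```

Aside from the four reaction groups , the CORDA algorithm also requires five parameters to operate , which are summarized in Table 4 . To begin the algorithm , all HC reactions are moved into the tissue-specific reconstruction ( RE ) , since these are sure to be included in the final model ., Given the remaining three reaction groups , the CORDA algorithm proceeds in three steps: It is worth noting that one of the main advantages of CORDA over pruning algorithms is the fact that it is independent of how reactions are ordered ., This is due to the fact that reaction associations are calculated for each step , and at the end of each step a decision is made as to which reactions are added to the tissue reconstruction ., This way , the order in which reaction dependencies are calculated does not affect the final tissue reconstruction ., The CORDA reconstructions used for comparison to previous methods were generated using \u03b3 = 10^5 , the highest cost value tested , \u03ba = 10^-2 , the lowest noise value tested , \u03f5 = 1 , a threshold similar to a previous study 32 , n = 5 , to allow for the inclusion of a larger number of OT reactions , and p = 2 ., For a direct comparison to previous methods , the CORDA reconstructions used during the parameter sensitivity analysis , cross-validation , and method comparison were performed using the same data used for the mCADRE hepatocyte reconstruction ., For the Monte-Carlo sampling analysis , a new reconstruction was generated using the most up-to-date data from the HPA ., Both of these reconstructions are available in the supplemental material ( S1 File ) ., All calculations in this study were performed using the COBRA toolbox 129 and the Gurobi optimizer 128 ., The MATLAB function file used for CORDA reconstructions is also available in the supplemental material ( S2 File ) ., Finally , an example of the CORDA algorithm , applied to small sample networks , is available in S2 Text ., While CORDA requires a number of different parameters , many of these values can be arbitrarily assigned ., For instance , \u03b3 can be arbitrarily large , while \u03f5 and \u03ba can be arbitrarily small ., In order to demonstrate that the CORDA algorithm is robust to a wide range of parameters , we performed 108 hepatocyte-specific reconstructions , varying all parameters but p ( which was set equal to two ) over a wide range of values ., A separate sensitivity analysis of p was performed and is included in S1 Text ., The parameter p defines a more or less flexible MC and NC core , and can be set at 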
the user\u2019s discretion ., These 108 reconstructions were based on the generalized human reconstruction Recon1 5 , using the same set of protein expression data ( total of 560 ) and 32 of the metabolic tests used in the mCADRE hepatocyte-specific reconstruction 30 ., The data used in this step , as well as the metabolic tests and calculated reaction groups , are available in the supplemental information ( S1 Table ) ., Metabolic tests were included as single reactions in the reconstruction in order to ensure the model was able to produce certain metabolites ., Each metabolic test was added to the model when being tested and then immediately removed , so that no two tests were present in the model at the same time , and no metabolic test reaction was included when other reactions were being assessed ., Details of this analysis are available in S1 Text ., During the metabolic tasks validation analysis , the exchange rates of the basal inputs carbon dioxide ( co2e ) , water ( h2oe ) , protons ( he ) , oxygen ( o2e ) , phosphate ( pie ) , hydrogen peroxide ( h2o2e ) , superoxide anion ( o2se ) , bicarbonate ( hco3e ) and carbon monoxide ( coe ) were unconstrained ., All other uptake reactions were blocked unless otherwise specified ., For each of the 20 amino-acid recycling tests , the uptake rates of the given amino acid and of glucose were set to an arbitrary value , so that the amino acid being tested was the only source of nitrogen ., Next , the production of urea was set to a strictly positive value , and FBA was performed while optimizing the production of urea ., The same test was also performed for ammonium ., For each of the 21 glucogenic tests , the uptake rate of the given metabolite was set to an arbitrary value , and the production of glucose was optimized ., For both the amino-acid and glucogenic tests , if the model returned a feasible flux distribution the test was considered passed; otherwise it was considered failed ., If the exchange reaction of the given metabolite was not present in the model , the result was considered inconsistent ., The generalized Recon1 reconstruction failed two of the glucogenic tests , so the results of the remaining 19 tests are reported in the main text ., For the eight nucleotide production tests , a sink consuming the given nucleotide was added to the cytosolic compartment ., The model was allowed to take up glucose and ammonium ( as a source of nitrogen ) , and the flux through the sink was optimized ., If the model was able to produce the given nucleotide , the test was considered passed ., Following the validation of the CORDA algorithm , we generated a library of 76 healthy and 20 cancer tissue-specific reconstructions using the generalized human reconstruction Recon2 9 and the most recent proteomics data from the HPA 44 , 45 ., All reactions used to generate the tissue-specific models are available in S1 Table , and tissue-specific models are available in SBML and MATLAB format at 130 ., The healthy tissue models were calculated using the same classification as described in the algorithm description section , since data for each protein was categorized as not detected , low , medium or highly expressed in each cell type ., For cancer models , the same classification was available for any number of samples for each protein in each cancer type ., In this case , values of -1 , 1 , 2 and 3 were assigned to each sample according to not detected , low , medium or high expression levels respectively , and these values were","headings":"Introduction, 
Results, Discussion, Materials and Methods","abstract":"Human metabolism involves thousands of reactions and metabolites ., To interpret this complexity , computational modeling becomes an essential experimental tool ., One of the most popular techniques to study human metabolism as a whole is genome scale modeling ., A key challenge to applying genome scale modeling is identifying critical metabolic reactions across diverse human tissues ., Here we introduce a novel algorithm called Cost Optimization Reaction Dependency Assessment ( CORDA ) to build genome scale models in a tissue-specific manner ., CORDA performs more efficiently computationally , shows better agreement to experimental data , and displays better model functionality and capacity when compared to previous algorithms ., CORDA also returns reaction associations that can greatly assist in any manual curation to be performed following the automated reconstruction process ., Using CORDA , we developed a library of 76 healthy and 20 cancer tissue-specific reconstructions ., These reconstructions identified which metabolic pathways are shared across diverse human tissues ., Moreover , we identified changes in reactions and pathways that are differentially included and present different capacity profiles in cancer compared to healthy tissues , including up-regulation of folate metabolism , the down-regulation of thiamine metabolism , and tight regulation of oxidative phosphorylation .","summary":"Cellular metabolism is defined by a large , intricate network of thousands of components , and plays a fundamental role in many diseases ., To study this network in its entirety , metabolic models have been built which encompass all known biochemical reactions in the human metabolism ., However , since not all metabolic reactions take place in any given tissue , these generalized models need to be tailored to study specific cell types ., Algorithms developed to date to perform this tailoring process have focused on keeping tissue-specific models as concise as possible ., This approach , however , can remove essential reactions from the model and hamper subsequent analysis ., Here we present CORDA , a tissue-specific building algorithm that yields concise , but not minimalistic , tissue-specific models ., CORDA has many advantages over previous methods , including better agreement with experimental data and better model functionality ., Using CORDA , we developed a library of 76 healthy and 20 cancer-specific models of metabolism , which we used to identify similarities between healthy and cancerous tissues , as well as metabolic pathways that are unique to cancer ., Results of this work provide a broadly applicable tool to model cell- and tissue-specific metabolism , while highlighting potential new pathway targets for cancer therapies .","keywords":"cell physiology, medicine and health sciences, liver, applied mathematics, metabolic networks, cell metabolism, simulation and modeling, algorithms, mathematics, metabolites, network analysis, exchange reactions, pharmacology, drug metabolism, research and analysis methods, computer and information sciences, animal cells, metabolic pathways, hepatocytes, chemistry, pharmacokinetics, biochemistry, cell biology, anatomy, biology and life sciences, cellular types, physical sciences, chemical reactions, metabolism","toc":null} +{"Unnamed: 0":2400,"id":"journal.pbio.1001111","year":2011,"title":"Formation of the Long Range Dpp Morphogen Gradient","sections":"How embryonic cells acquire positional 
information is a key question in developmental biology ., The concept of morphogen gradients , proposed more than a century ago 1 , 2 , has received substantial experimental validation over the past decade ( reviewed in 3 , 4 ) ., Particularly compelling evidence for their existence comes from the identification of secreted proteins that control cell fates in a concentration-dependent manner ., Localized production of Wnt , Hedgehog , and TGF-\u03b2 family members has been described in numerous tissues and organisms ., However , despite extensive studies on these molecules , the mechanism of transport through tissues and the properties that determine the range of morphogen movement remain poorly understood and controversial ., Here we use the TGF-\u03b2 family member Decapentaplegic ( Dpp ) in the Drosophila wing imaginal disc as a model to address these issues ., Dpp is expressed in a stripe of anterior compartment ( A ) cells along the anteroposterior ( A-P ) boundary of the wing disc , and forms a concentration gradient along the A-P axis of the wing primordium 5\u20139 ., Upon binding to the type I-type II\/Thick veins ( Tkv ) -Punt receptor complex , the intracellular signal transducer Mothers-against-Dpp ( Mad ) becomes phosphorylated , forms a complex with Medea , and enters the nucleus to inhibit the expression of the transcriptional repressor Brinker ( Brk ) 10\u201318 ., These events convert the Dpp morphogen gradient into an inverse gradient of Brk activity that mediates many of the patterning and growth functions of Dpp ( 19\u201321; reviewed in 22 ) ., Although the transduction of the Dpp signal and its role in patterning are well understood , the question of how Dpp is dispersed through its target tissue is still unexplained and has thus served as fertile ground for experimentation and speculation ( reviewed in 23\u201325 ) ., Several mechanisms for Dpp movement through the wing disc tissue have been proposed ., The simplest model assumes that Dpp disperses by passive extracellular diffusion ., However , because the effective diffusion coefficient of Dpp in the wing disc is three orders of magnitude lower than that of a similarly sized molecule in water 26 , and because a secreted form of GFP fails to form a gradient in wing discs 5 , Dpp gradient formation cannot be explained by free diffusion ., Thus a \u201crestricted extracellular diffusion\u201d ( RED ) model , in which Dpp interacts with its receptor and extracellular matrix ( ECM ) proteins , has been proposed ., This model is supported by theoretical 27 and experimental studies 28 , 29 , which implicate glypicans in the ECM as essential components for Dpp movement ., A completely different mechanism by which Dpp may achieve its long-range distribution is receptor-mediated transcytosis ( RMT ) 5 , 30 ., In this model , Dpp does not move through the extracellular space , but rather through the cell bodies by repeated cycles of endocytosis and re-secretion ., The first evidence for this model was gathered from analyzing the Dpp gradient in discs containing shibire mutant cell clones , in which dynamin-dependent endocytosis is blocked ., Entchev et al . ( 2000 ) 5 found reduced Dpp levels \u201cbehind\u201d such clones ( i . e . 
, on the distal side relative to the source ) , suggesting that Dpp is unable to traverse the mutant cells ., Moreover , small lateral clones mutant for tkv also appeared to block Dpp movement 5 , indicating that transcytosis is receptor-mediated ., Although this work has at first been challenged by mathematical modeling and experimental studies 27 , 28 , the transcytosis mechanism was further backed up by theoretical considerations 31 , and by recent work involving FRAP experiments showing that a GFP:Dpp fusion protein is unable to move into a photobleached region when dynamin-dependent endocytosis is blocked 26 ., The two models to explain Dpp movement through an epithelium remain unreconciled , and further analysis is required to determine the contribution of extracellular restricted diffusion or receptor-mediated transcytosis to the formation of the Dpp gradient ., The controversy over Dpp dispersal is augmented by yet another scenario , in which Dpp moves along actin-based filopodia , termed cytonemes , which directly project from the receiving cells to the producing cells 32 , 33 ., Experimental evidence for this mechanism , however , remains elusive , as it is not known yet whether the Dpp ligand is associated with these structures or how a gradient would form along these structures ., Biochemical studies suggest that Dpp binds to the type I receptor Tkv with high affinity 17 , 34 ., Interestingly , all three above mentioned models for Dpp movement rely on the receptor , yet do so in distinct ways ., In the restricted diffusion model , interactions between Dpp and its receptor on the cell surface contribute to the immobilization , subsequent uptake , and degradation of the ligand , thereby impeding Dpp dispersal; in the receptor-mediated transcytosis model the receptor plays an essential role in the uptake ( endocytosis ) and re-secretion ( exocytosis ) of Dpp , and thereby facilitates Dpp movement; and finally in the basic cytoneme model the receptor is used to ferry Dpp along cytonemes ., Here we set out to exploit the pivotal role that the Dpp receptor plays in these mechanisms and manipulated the receptor levels in cell clones to discriminate between the different models of morphogen gradient formation ., We first confirmed in overexpression clones in vivo that Dpp binds to the type I receptor Tkv , but not to the type II receptor Punt ., We then analyzed the effect of tkv mutant clones on the Dpp gradient , and also compared the experimental data to the computed predictions for the RMT and RED models ., While our results challenge the RMT model and are also incompatible with the basic cytoneme mechanism , they are consistent with a RED scenario , in which the majority of Dpp is not bound to Tkv ., Hence we suggest that the major mechanism of Dpp distribution is restricted extracellular diffusion ., The Dpp receptor plays distinct roles for Dpp dispersal in the \u201crestricted extracellular diffusion\u201d ( RED ) and \u201creceptor mediated transcytosis\u201d ( RMT ) models ( see Introduction ) ., Thus , the analysis of Dpp gradient formation in a tissue containing receptor mutant clones promises to discriminate between the two models ., Here we first chose a theoretical approach to investigate the influence of Dpp receptor mutant clones on the Dpp gradient and quantitatively modeled distinct scenarios representing morphogen transport by either the RMT or the RED model ( for a short description of the mathematical modeling , see Box 1; all the analytical details of the model 
are reported in Text S1 ) ., Three pools of Dpp ( external-unbound , receptor-bound and internalized ) were described using coupled reaction diffusion equations .
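One plausible generic form of such a three-pool system, for free (c_f), receptor-bound (c_b), and internalized (c_i) Dpp, is sketched below; the notation and the specific terms are mine, not the paper's, whose exact equations are given in Text S1:

```latex
% Generic three-pool reaction-diffusion model (illustrative only):
\begin{aligned}
\partial_t c_f &= D\,\partial_x^2 c_f - k_{\mathrm{on}} R\, c_f
                 + k_{\mathrm{off}} c_b + s(x)\\
\partial_t c_b &= k_{\mathrm{on}} R\, c_f
                 - ( k_{\mathrm{off}} + k_{\mathrm{int}} )\, c_b\\
\partial_t c_i &= k_{\mathrm{int}} c_b - k_{\mathrm{deg}} c_i
\end{aligned}
```

Outside the production region (s = 0), the steady-state profile of such a model decays exponentially, which is how a measured decay length constrains the free parameters.

The model has a number of free parameters , which could , however , be constrained by fixing the relative concentrations of the three Dpp pools , and by the approximation that the Dpp profile exponentially decays outside the production region with a decay length of 20 \u00b5m 26 ., We therefore studied limit case scenarios , in which the relative concentrations of the Dpp pools were fixed and Dpp was either mainly internalized ( 80% of total Dpp ) , mainly receptor-bound , or mainly external-unbound ( cf . Box 1 ) ., Our model involves both pure external diffusion and receptor-mediated transcytosis 35 ., The latter , within its biologically meaningful parameter range ( cf . Box 1 ) , only had an important influence on the total Dpp gradient in the limit case scenario in which Dpp was mainly internalized , and it could be neglected in the other two limit case scenarios ( cf . Box 1 and Text S1 ) ., The RMT model could therefore be represented by the limit case scenario in which Dpp was mainly internalized ( Figure 1A and D ) , and the RED model by the limit case scenarios in which Dpp was mainly receptor-bound ( Figure 1B and E ) or mainly external ( Figure 1C and F ) ., We then modeled the effects of clones containing either a 10-fold increase of receptor levels ( gain-of-function , GOF ) or entirely lacking the receptors ( loss-of-function , LOF ) on the Dpp gradient in the three different scenarios ., The computed Dpp profiles are represented in Figure 1 ., All three transport scenarios predict an increase of Dpp within the clone territory for GOF clones ., Thus the GOF situations are not suited to discriminate between the RMT and RED models , but they can be exploited to test in vivo which of the Dpp signaling receptors , the type I receptor Tkv or the type II receptor Punt , binds to Dpp ., LOF clones , however , clearly lead to qualitatively different Dpp profiles for each transport scenario ., Most importantly , Dpp levels behind LOF clones are decreased in the RMT model , but are almost unchanged in the RED models ( Figure 1D to 1F ) ., This outcome reflects the necessity of receptors in transporting Dpp by RMT ., Consistent with this , we find that in a scenario in which the majority of Dpp is intracellular , but Dpp is only transported by extracellular diffusion ( the term describing transcytosis is set to zero ) , Dpp levels behind clones are almost unchanged ( Text S1 , Section 5 . 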
4 ) ., Thus , analyzing Dpp levels behind receptor mutant clones allows us to clearly discriminate between receptor-mediated transcytosis and restricted extracellular diffusion scenarios ., Moreover , analyzing Dpp levels within receptor LOF clones allows one to discriminate between the limit case scenarios in which Dpp is either mainly \u201creceptor-bound\u201d or mainly \u201cexternal-unbound , \u201d and quantifying the Dpp levels inside GOF clones allows us to further narrow down the ratio of receptor-bound versus unbound Dpp ., The experimental analysis of the Dpp gradient is complicated by the lack of antibodies that detect the mature , processed form of Dpp ., Visualization , however , can be achieved using a GFP-tagged version of Dpp 5 , 9 ., Because the expression of GFP:Dpp in the dpp expression domain requires the Gal4 system , the generation of receptor GOF clones posed a problem ., We therefore developed LexA-based transgenes that allowed us to express GFP:Dpp with a Gal4-independent binary expression system 36 , and employed an actin5c>stop>Gal4 flp-out construct to generate and mark clones overexpressing the Dpp receptor ( Figure 2A ) ., The LexA-based GFP:Dpp gradient resembles the Gal4-based GFP:Dpp gradient ( Figure S1 ) , which has been shown to coincide with the endogenous Dpp activity gradient 5 , 9 , 28 ., The analysis of the receptor LOF clones is hindered by the role of Dpp as a survival and growth factor ., Cells within the wing primordium that lack Dpp signaling activity are efficiently eliminated , in particular when they are located close to the source , where Dpp levels are normally high 21 , 37\u201342 ., This elimination is caused by the upregulation of Brk in Dpp signaling mutant cells ., Hence we sought to prevent this response by genetically generating cells that not only lose Dpp receptor activity but simultaneously also brk function ., However , since the genes encoding Dpp receptors and Brk are located on different chromosome arms , we combined BAC recombineering and the phiC31 site-specific integration system 43\u201345 to position a genomic brk rescue construct at chromosomal site 22A , on the same chromosome arm where the type I Dpp receptor tkv is located ., Mitotic recombination at the base of this chromosome arm in a brk mutant background enabled us to generate tkv brk double mutant clones ( for details see Figure 2B ) ., The Dpp ligand signals through the Tkv-Punt type I-type II receptor complex ., Upon ligand-receptor binding , Tkv becomes phosphorylated at a glycine\/serine-rich domain , and in turn phosphorylates and activates Mad 14 , 46 ., While both receptors are necessary for the signal relay , in vitro studies suggest that Dpp binds to Tkv with high affinity , but not to Punt 14 , 17 , 34 ., Here , we reassess these observations in vivo by analyzing the effect of tkv and punt overexpression on Dpp distribution ., As mentioned before , our theoretical clonal study predicts that any increase in receptor levels will also lead to increased Dpp levels , irrespective of the transport model ( Figure 1A to 1C ) ., To confirm the functionality of the transgenes , we first assessed the levels of Tkv by use of an antibody and estimated that the UAS-tkv transgene results in an approximately 10-fold increase at the protein level ( Figure S2 ) ., We then verified that overexpression of tkv as well as punt ectopically activates Dpp pathway activity by monitoring the phosphorylation state of Mad ( pMad ) ( Figure 3A and 3B ) ., Finally , we analyzed the 
effect of tkv and punt overexpression on the Dpp gradient ., Throughout this work we monitor the Dpp gradient by directly measuring the GFP:Dpp fluorescence intensities ( in green ) and by GFP antibody staining ( in gray ) ., In order to avoid detection of unsecreted Dpp in producing cells and elution of GFP:Dpp from the ECM during fixation , we added the GFP antibody prior to fixation , followed by a 1-hour incubation at room temperature ( for details , see Materials and Methods ) ., Strikingly , only tkv-overexpressing clones , but not punt-overexpressing clones , modulate the Dpp profile and lead to an increase of GFP:Dpp levels inside the clones ( Figure 3C and 3D; additional plots for each genotype are shown in Figures S3 and S4 ) ., Thus , the comparison of the effect of tkv versus punt overexpression clones on the Dpp profile confirms biochemical studies and suggests that Tkv , but not Punt , binds to Dpp ., Moreover , because the amplification of Dpp signal transduction per se ( which also occurred in UAS-punt clones ) does not influence the Dpp profile , we can exclude the possibility that the observed effects in Tkv GOF clones are indirect , and argue that they are a direct consequence of Dpp-Tkv binding ., Although the GOF studies do not enable distinguishing between the RMT and RED models , the different amounts of Dpp in GOF clones can serve to discriminate between the two RED scenarios ., Our data favor the \u201cexternal-unbound limit case scenario\u201d and suggest that approximately 60%\u201380% of Dpp is not bound to Tkv ( Figure S4G ) ., As described above , receptor LOF situations were created by simultaneous removal of the receptor and brk ., First we tested whether the alteration of Dpp signaling activity in such clones ( loss of Dpp transduction and loss of brk function ) would affect the Dpp profile , and generated Mad , brk double mutant clones , in which Dpp transduction but not receptor activity is lost ., The Dpp gradient across such clones remains intact ( Figure 4A , B\u2014additional plots are shown in Figure S5 ) ., As a consequence of epithelial folds that occasionally arise at the boundaries of such clones , in some cases a slight modulation of the Dpp profile was observed ( Figure S5 ) ., We then examined the Dpp gradient in discs with tkv , brk LOF clones ., We used the amorphic tkv8 allele , which contains a stop mutation in the extracellular domain of tkv at position 144 of the tkv-PA transcript 47 ., As expected , Dpp signal transduction activity was abolished in tkv\u2212 brk\u2212 clones ( Figure 5B\u2013C ) ., However , the Dpp gradient in such discs was not significantly altered; tkv\u2212 brk\u2212 clones resembled Mad\u2212 brk\u2212 clones ( Figure 4C\u2014additional plots are shown in Figure S6 ) ., The same results were obtained using a conventional antibody staining protocol to detect the Dpp gradient ( Figure S8 ) ., The observation that the Dpp levels inside and behind tkv\u2212 clones are not significantly reduced contradicts the receptor-mediated transcytosis model , and concurs with the restricted extracellular diffusion model , in which the majority of Dpp is not bound to Tkv ( see Figure 1 ) ., Apart from Tkv , the Drosophila genome encodes another type I receptor , Saxophone ( Sax ) , which has been implicated in Dpp signaling ., Although Sax preferentially interacts with and mediates signaling by the BMP ligand Glass Bottom Boat ( Gbb ) and shows significantly lower affinity to Dpp than Tkv 10 , 34 , 48 , 49 , it could still in principle serve as 
a Dpp receptor ., To exclude that Sax takes over some functions of Tkv in tkv brk mutant clones , for example shuttling Dpp through mutant cells via receptor-mediated transcytosis , we also analyzed the Dpp gradient in sax null discs containing tkv\u2212brk\u2212 clones ( see Materials and Methods ) ., GFP:Dpp levels were not decreased , neither inside nor behind the clones , strengthening our conclusions that GFP:Dpp does not move via receptor-mediated transcytosis ( Figure 4D\u2014additional plots are shown in Figure S7 ) ., Finally we also analyzed the effect of receptor LOF clones on the endogenous Dpp gradient in wing discs ., We tested tkv\u2212brk\u2212 and tkv\u2212brk\u2212sax\u2212 genotypes and monitored Dpp pathway activity at the level of Mad phosphorylation and target gene expression ., Since Dpp could potentially reach the distal side of such clones by being transported around , rather than through , mutant territory , we purposely collected and analyzed clones with a large dorsoventral extension ., Both readouts , however , show that Dpp signaling is not reduced behind such receptor mutant clones ( Figure 5A\u2013C ) ., To completely eliminate the possibility that this pathway activity stems from Dpp that migrated via clone-surrounding wild-type cells , we identified rare situations where patches of wild-type cells are fully encircled by mutant cells ., As shown in Figure 5C and 5F , even cells in these \u201cislands\u201d exhibit substantial Dpp signaling activity ( for a 3-D reconstruction of these clone islands , see Figure S9 ) ., These findings provide unequivocal evidence that Dpp can disperse through receptor-free territory and hence refute a need for receptor-mediated transcytosis ., Dpp acts as a long-range morphogen , which spreads along the A-P axis of the wing primordium to form a signaling gradient ., Here we studied how receptor mutant clones affect the Dpp gradient in different transport models , and compared theoretical calculations with experimental data ., One outcome of the modeling was the prediction that RMT and RED mechanisms could be discriminated by analyzing Dpp levels behind receptor mutant clones ., While in the transcytosis model these levels should be significantly decreased , they would be almost unaltered in the diffusion model ., This difference stems from the uptake of Dpp by its receptors , which is an essential feature for morphogen transport by RMT , but not by RED ., Our experimental results revealed that neither GFP:Dpp levels nor Dpp signaling activity is reduced behind receptor mutant clones , excluding a significant role for receptor-mediated transcytosis in Dpp gradient formation ., Important support for this conclusion was provided by situations where \u201cislands\u201d of wild-type cells received Dpp signal despite being surrounded by mutant tissue , ruling out the possibility that Dpp reaches the distal side of receptor mutant clones by being transported around the clones ., When analyzing the GFP:Dpp distribution in mosaic tissues , we also found that the Dpp levels are not significantly reduced within receptor mutant clones ., While this outcome further argues against the RMT model , it is consistent with the \u201cexternal-unbound limit case scenario , \u201d representing RED with the majority of Dpp not being bound to Tkv ., Indeed , in the GOF experiments the ratio of unbound Dpp could be narrowed down to approximately 60%\u201380% ., If transcytosis is modeled in a receptor-independent manner ( as shown in Text S1 ) , the 
Several other studies , however , support the restricted extracellular diffusion model ., On theoretical grounds , Lander et al . ( 2002 ) 27 proposed that diffusive mechanisms for Dpp gradient formation are more likely than non-diffusive ones ., Moreover , experimental studies on heparan sulfate proteoglycans ( HSPGs ) , in particular glypicans , demonstrated the necessity of an intact ECM for morphogen movement 50 , 51 ., In the Drosophila wing disc , clones mutant for the glypicans Dally and Dally-like ( Dlp ) disrupted the formation of the Dpp gradient 28 ., Dally was also shown to bind Dpp 52 , to stabilize it on the cell surface 53 , and to influence its mobility 54 , 55 ., However , although the evidence that glypicans assist extracellular diffusion of Dpp seems compelling , alternative or additional functions of glypicans in Dpp distribution cannot be excluded ., For example , a recent study 56 suggests that apically localized Dlp binds to the Wingless ( Wg ) morphogen in the Wg producing region , undergoes internalization , and thereby redistributes Wg to the basolateral compartment where Wg spreads to form a long-range gradient ., It is possible that recycling of glypicans is also involved in Dpp relocalization and that this process is important for Dpp movement ., Consistent with such a notion , Kicheva et al . ( 2007 ) 26 reported that dynamin-dependent endocytosis is necessary for Dpp movement ., Blocking such ubiquitous cellular machinery , however , not only inhibits the recycling of receptors and glypicans , but may also change the composition and distribution of glypicans in the ECM , which in turn might impede extracellular diffusion ., Given that the phenotypes of our receptor clones fully conform to the simplest model of Dpp movement along the ECM ( restricted extracellular diffusion ) , we favor the view that the main function of glypicans in Dpp gradient formation is to facilitate Dpp diffusion along the ECM ., Our observation that receptor mutant clones do not have a major effect on the Dpp gradient contradicts previous observations by Entchev et al .
( 2000 ) 5 ., In their study , ablation of tkv in small lateral clones led to an accumulation of Dpp at the side of the clone facing the source , arguing for a block of Dpp movement within such clones 5 ., The different results could be explained by the presence of brk in their genetic setup ., The ectopic up-regulation of brk in tkv mutant clones , which in most cases leads to clone elimination 41 , 42 , most likely also causes drastic changes in the transcriptional program in \u201cescaper\u201d cells ., Thus , the sharp increase in GFP:Dpp levels at the proximal edge inside tkv mutant clones ( facing the Dpp source ) could be accounted for by increased levels of Dpp binding proteins , a theory which is supported by the fact that Dpp accumulation was strictly clone-autonomous and did not occur in cells ahead of the clones 27 ., In our experimental setup , we avoided such secondary effects by simultaneously removing tkv together with brk ., As our negative control ( Mad brk clones ) shows , the signaling state of these cells ( Dpp signaling off , no Brk ) does not significantly alter the Dpp profile ., Transport along cytonemes is another proposed model for the dispersal of Dpp ( Ramirez-Weber and Kornberg , 1999 ) 33 ., In its simplest implementation , this model assumes that imaginal disc cells form filopodial extensions towards the Dpp producing region and that Dpp is shuttled along these extensions by binding to Tkv 32 ., In this scenario , Tkv GOF clones would not only lead to an increase of receptors inside the clones , but also along the cytonemes , and thus affect the Dpp profile ahead of the clones as well ., This , however , was not observed in our experiments ( Figure 3D and Figure S4 ) , and we therefore favor the restricted extracellular diffusion model over the cytoneme model for Dpp gradient formation ., During development , morphogens function as short-range or long-range signals in order to specify cell fates within a tissue ., For example , during wing disc development the range of Hh signaling is relatively short compared to that of Dpp , with a functional range of approximately 10 cells versus 40 cells , respectively 7 , 9 , 57 , 58 ., It is likely that properties of the transport system are important determinants of the range of a morphogen ., In the restricted diffusion model , morphogen spreading is impeded by ECM proteins and cell surface receptors , which efficiently trap their ligand at the cell surface and direct it to degradation ., Thus , one mechanism to control the range of a morphogen gradient is regulating the receptor levels 27 ., Indeed , the Hh as well as the Dpp system appear to make use of this strategy to regulate their range ., The Hh signal limits its range by upregulating the expression of its binding receptor Patched ( Ptc ) , while the Dpp signal broadens its range by downregulating the expression of its receptor Tkv 6 , 57 , 59 ., The effects of our Tkv LOF and GOF clones on the Dpp profile suggest that the majority of Dpp is not bound to the receptor Tkv ., It is tempting to speculate that the Dpp-Tkv binding properties represent an additional feature of the Dpp signaling system that facilitates the formation of a long-range gradient , by ensuring that the majority of Dpp remains in a free and unbound state ., Just like lower receptor levels , a lower binding constant would contribute to the spread of Dpp , due to reduced immobilization and degradation of Dpp ., It remains to be seen if the ratio of bound to unbound ligand differs for long- versus short-range
morphogens and if this ratio represents a general means to regulate the range of morphogen gradients ., The following transgenes and mutants are described in detail on flybase: UAS-mCherry-CAAX , tub-Gal80ts , hsp70-flp , act5C>y+>Gal4 , arm-lacZ , ubi-GFP ( S65T ) nls , brkM68 , tkv8 , saxP , madB1 , and FRT40 ., Furthermore we used the transgenes: UAS-tkv 59 , UAS-punt 7 , lexO-GFP:Dpp 36 , dpp-LG 36 , dpp-Gal4 5 , and UAS-GFP:Dpp 5 ., In order to introduce the manipulated brk locus into the fly genome , the locus was transferred from the original BAC ( BACR35J16 ) into the attB-Pacman vector , which allows the retrieval of large fragments up to 133 kb ( Venken et al . , 2006 ) 44 ., The BAC clone was ordered from BACPAC Resources , and the BAC DNA isolated according to the protocol provided ., Homology arms of 500 bp , corresponding to the 5\u2032 and 3\u2032 ends of the entire brk genomic locus and spanning parts of the upstream unc-119 gene and the downstream Atg5 gene , were cloned into the attB-Pacman vector ., The attB-Pacman vector was linearized and introduced into recombination-competent SW102 carrying the modified BAC ., The retrieval of the modified DNA fragment into the linearized attB-Pacman was carried out by recombination-mediated gap-repair ., This plasmid was then injected into Drosophila melanogaster embryos ., Site-specific integration of the attB-Pacman vector into the landing site 51D on chromosome 2L was performed as described 43 ., Immunostainings were performed using standard protocols ., Images were collected with a Zeiss LSM710 confocal microscope ., ImageJ was used to analyze the images; z-stacks are shown in maximum projections ., Intensity plots were generated based on the extraction of the intensities in the ROIs and using Mathematica ., For the 3-D reconstruction of z-stacks , Imaris was used .","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"The TGF-\u03b2 homolog Decapentaplegic ( Dpp ) acts as a secreted morphogen in the Drosophila wing disc , and spreads through the target tissue in order to form a long range concentration gradient ., Despite extensive studies , the mechanism by which the Dpp gradient is formed remains controversial ., Two opposing mechanisms have been proposed: receptor-mediated transcytosis ( RMT ) and restricted extracellular diffusion ( RED ) ., In these scenarios the receptor for Dpp plays different roles ., In the RMT model it is essential for endocytosis , re-secretion , and thus transport of Dpp , whereas in the RED model it merely modulates Dpp distribution by binding it at the cell surface for internalization and subsequent degradation ., Here we analyzed the effect of receptor mutant clones on the Dpp profile in quantitative mathematical models representing transport by either RMT or RED ., We then , using novel genetic tools , experimentally monitored the actual Dpp gradient in wing discs containing receptor gain-of-function and loss-of-function clones ., Gain-of-function clones reveal that Dpp binds in vivo strongly to the type I receptor Thick veins , but not to the type II receptor Punt ., Importantly , results with the loss-of-function clones then refute the RMT model for Dpp gradient formation , while supporting the RED model in which the majority of Dpp is not bound to Thick veins ., Together our results show that receptor-mediated transcytosis cannot account for Dpp gradient formation , and support restricted extracellular diffusion as the main mechanism for Dpp dispersal ., The 
properties of this mechanism , in which only a minority of Dpp is receptor-bound , may facilitate long-range distribution .","summary":"Morphogens are signaling molecules that trigger specific responses in cells in a concentration-dependent manner ., The formation of morphogen gradients is essential for the patterning of tissues and organs ., Decapentaplegic ( Dpp ) is the Drosophila homolog of the bone morphogenetic proteins in vertebrates and forms a morphogen gradient along the anterior-posterior axis of the Drosophila wing imaginal disc , a single-cell layered epithelium ., Dpp determines the growth and final size of the wing disc and serves as an ideal model system to study gradient formation ., Despite extensive studies , the mechanism by which morphogen gradients are established remains controversial ., In the case of Dpp , two mechanisms have been postulated , namely extracellular diffusion and receptor-mediated transcytosis ., In the first model Dpp is suggested to move by diffusion through the extracellular matrix of a tissue , whereas in the latter model Dpp is transported through the cells by receptor-mediated uptake and re-secretion ., In this work we combined novel genetic tools with mathematical modeling to discriminate between the two models ., Our results suggest that the Dpp gradient forms following the extracellular diffusion mechanism ., Moreover , our data suggest that the majority of the extracellular Dpp is free and not bound to its receptor , a property likely to play a role in long-range gradient formation .","keywords":"developmental biology, genetics, biology, computational biology, molecular cell biology, genetics and genomics","toc":null} +{"Unnamed: 0":1046,"id":"journal.pcbi.1004008","year":2015,"title":"A RESTful API for Accessing Microbial Community Data for MG-RAST","sections":"Over 110 , 000 metagenomic data sets have been uploaded and analyzed in MG-RAST 1 since 2007 , totaling over 43 Terabases ( Tbp ) ., Uploaded data falls into three classes: shotgun metagenomic data , amplicon data , and , more recently , metatranscriptomic data ., The MG-RAST pipeline normalizes all samples by applying a uniform pipeline with the appropriate quality control mechanisms for the various data sources ., Uniform processing and robust sequence quality control enable comparison across experimental systems and , to some extent , across sequencing platforms ., With the inclusion of standardized metadata 2 , MG-RAST enables meta-analysis through its web-based user interface at http:\/\/metagenomics . anl .
gov ., The user interface provides an easy-to-use way to upload data , access data via download or the interface , perform analyses , and create and share projects ., As with most GUIs , however , there are limitations to what can be done ., Examples of this include the number of samples processed in a single analysis , access to complete metadata , and easy access to raw data and quality metrics for each sample ., As part of the DOE Systems Biology Knowledgebase project ( KBase ) we have implemented a web services application programmer's interface ( API ) that exposes all data to ( authenticated ) programmers , enabling users to access available data and functionality through software applications ., User access to MG-RAST's internal data structures is now possible ., The MG-RAST API enables programmatic access to data and analyses in MG-RAST without requiring local installations ., With the new API , users can authenticate against the service , submit their data , download results , and perform extensive comparisons of data sets ., We chose to use the Representational State Transfer ( REST ) 3 architecture ., The REST approach supports download of data in ASCII format , allows users to query the system via URLs , and returns MG-RAST data objects in their native format ( e . g . similarity tables or sequence files ) ., For structured data ( e . g . metadata or project information ) the MG-RAST API uses JSON ( JavaScript Object Notation , a widely used standard ) as its data format ., Using this approach users can use simple tools to download data files to their machines or view the JSON in their web browsers using one of the many available JSON viewers ., In addition , many programming languages have libraries for convenient HTTP interaction and JSON conversions ., This article focuses on describing the architecture used: the underlying components of a web services architecture , their interactions , and the data used for their operation ., REST has several key advantages for system scalability ., Unlike more traditional remote procedure call methods , REST APIs make the semantics of requests visible at the HTTP protocol layer ., This makes the system easier to scale , optimize , and harden through the use of HTTP-level appliances providing security , caching , and proxy capabilities ., REST APIs also have useful properties in terms of client adoption ., They have a minimal number of prerequisites and any language with HTTP and JSON support or command line utilities , such as curl , can easily integrate with the design ., The MG-RAST RESTful API supports introspection and versioning ., In order to access a specific version of the API , the version number must be added to the base URL ., The base URL for all API calls is http:\/\/api . metagenomics . anl . gov ., Calling the base URL of the API without any options returns a list and description of available resources; calling a resource without any options returns a description of the resource and its request options with example calls .
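As a concrete illustration, this is how the introspection calls can look from Python (a minimal sketch; the JSON field names picked out below are assumptions about the response layout, not documented guarantees):

```python
import requests

BASE = "http://api.metagenomics.anl.gov"

# The bare base URL lists and describes the available resources.
info = requests.get(BASE).json()
for resource in info.get("resources", []):
    print(resource.get("name"), "->", resource.get("url"))

# A bare resource URL returns its description and request options,
# including example calls.
print(requests.get(BASE + "/project").json())
```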
The MG-RAST pipeline accepts sequences in a variety of formats from most DNA sequencing platforms and transforms all sequences using automated pipelines ( see Figure 1 ) ., The pipeline performs quality control , protein prediction , clustering , and similarity-based annotation on nucleic acid sequence data sets ., The analyses provided by MG-RAST rely , to some extent , on comparison with external protein databases , maintained as a single data product in the M5nr 4 , enabling users to switch annotation sources and thus the naming conventions used for annotation at analysis time ., Using the M5nr database , MG-RAST provides links to all major sequence databases and , for example , allows linking from metagenomic sequences to complete genomes ( see Table 1 for a list of available namespaces ) ., Users are provided access to these MG-RAST resources as well as to analysis results being produced ( public data and the user's own data ) ., Table 2 lists the high-level objects that can be accessed; in addition , users can upload sequence and metadata into their own private MG-RAST staging area ., Some objects ( e . g . , metagenome , metadata , project , M5nr database ) will seem intuitive , while others are different from what most users would expect ( e . g . , download , annotation , matrix ) ., We have designed these additional objects to allow rapid access to sets of sequences or analysis results related to a data set ( download ) , annotated sequences or BLAT results for a data set ( annotation ) , and abundance information for many data sets ( matrix ) ., Most of the API calls are simply URLs , which can be entered in the address bar of a web browser to perform the download through the browser ., These URLs can also be used with a command line tool like curl , in programming-language-specific libraries , or in command line scripts ., The examples in the Results section illustrate the use of each of these methods ., The example scripts are available in the supplementary materials and on GitHub ( https:\/\/github . com\/MG-RAST\/MG-RAST-Tools ) along with other useful illustrative scripts .
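Before turning to the individual calls, here is a sketch of the general pattern from Python, with the requests library standing in for curl. The metagenome ID and the path layout of the download call are illustrative placeholders (query the download resource itself for its real options), and the "auth" header name follows the custom header mentioned in the CORS discussion at the end of this section, so treat it as an assumption:

```python
import requests

BASE = "http://api.metagenomics.anl.gov"
TOKEN = "..."                    # web-key from the MG-RAST preferences page

metagenome = "mgm4447943.3"      # hypothetical metagenome ID
url = BASE + "/download/" + metagenome
with requests.get(url, headers={"auth": TOKEN}, stream=True) as resp:
    resp.raise_for_status()
    # Stream the result file to disk in chunks.
    with open(metagenome + ".download", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 16):
            out.write(chunk)
```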
MG-RAST enables users to extract data based on functional or taxonomic annotations ., The necessary functionality is provided by two API calls ., The first API call ( Box 1 ) lists all metagenomes with certain metadata fields and functional contents; the second API call extracts all requested reads from a given metagenome ., The following example script exploits these two API calls to produce a file with sequences annotated as proteases , using SEED annotations from all samples from marine environments ., The reads are labeled with the originating data set and the read identifier , as well as the underlying similarity result ., Download allows users to extract analysis result files from MG-RAST ( Box 2 ) ., The following example shows how to download BLAT 6 results for a given metagenome ., The inbox is a staging area where users can upload metadata and sequence files and manage their data ., This requires an MG-RAST account and user authentication ( Box 3 ) ., An authentication token can be created through the user preferences in MG-RAST ., Users can retrieve abundance profiles ( Box 4 ) based on functional or taxonomic profiles ., The default output format is BIOM ., As mentioned earlier , we use an M5-based nonredundant database to perform annotations ., Here is an example of extracting the UniProt database entry record for a given sequence in a metagenome ( Box 5 ) ., Using the M5nr , we identify the UniProt database record most similar to the sequence of a given feature ., Users can retrieve project information ( Box 7 ) by using the project ID; the output is a JSON-formatted file ., Available information about individual samples , including IDs and metadata , can be accessed as shown in Box 8 ., Using the search resource , users can search for the data they want to retrieve ., Queries can be made for metadata , function , and taxonomy ( Box 9 ) ., Complex queries are supported ., In MG-RAST , all data is initially private ., Users who submit data can decide to share that data with specific users ( by typing in an email address for the users ) or make the data publicly available ., Both actions require the provision of standard-compliant minimal metadata by the submitting user ., The API provides access to both public and nonpublic data , requiring users to submit authentication tokens for access to private data ., Authentication tokens can be obtained via the MG-RAST web interface through the user preferences page and are valid for up to 14 days ( Box 10 ) ., The token serves as login and password for the API ., Below is an example of how to use the tokens in three different scenarios ., Users can invalidate a token at any time by generating a new one ., Note that accessing a remote site through an XMLHttpRequest requires support for Cross-Origin Resource Sharing ( CORS ) compliance and Preflight Requests ., CORS requires the remote site to accept the local site's origin ( Access-Control-Allow-Origin ) ., For Preflight Requests , if an HTTP request from a browser adds a custom header to the request ( in the example \u201cAUTH\u201d ) , the browser first makes an OPTIONS request to the target server , inquiring whether Access-Control-Allow-Headers allows this header and whether Access-Control-Allow-Methods allows the request method ( GET\/POST ) .","headings":"Introduction, Design and Implementation, Results","abstract":"Metagenomic sequencing has produced significant amounts of data in recent years ., For example , as of
summer 2013 , MG-RAST has been used to annotate over 110 , 000 data sets totaling over 43 Terabases ., With metagenomic sequencing finding even wider adoption in the scientific community , the existing web-based analysis tools and infrastructure in MG-RAST provide limited capability for data retrieval and analysis , such as comparative analysis between multiple data sets ., Moreover , although the system provides many analysis tools , it is not comprehensive ., By opening MG-RAST up via a web services API ( application programmers interface ) we have greatly expanded access to MG-RAST data , as well as provided a mechanism for the use of third-party analysis tools with MG-RAST data ., This RESTful API makes all data and data objects created by the MG-RAST pipeline accessible as JSON objects ., As part of the DOE Systems Biology Knowledgebase project ( KBase , http:\/\/kbase . us ) we have implemented a web services API for MG-RAST ., This API complements the existing MG-RAST web interface and constitutes the basis of KBases microbial community capabilities ., In addition , the API exposes a comprehensive collection of data to programmers ., This API , which uses a RESTful ( Representational State Transfer ) implementation , is compatible with most programming environments and should be easy to use for end users and third parties ., It provides comprehensive access to sequence data , quality control results , annotations , and many other data types ., Where feasible , we have used standards to expose data and metadata ., Code examples are provided in a number of languages both to show the versatility of the API and to provide a starting point for users ., We present an API that exposes the data in MG-RAST for consumption by our users , greatly enhancing the utility of the MG-RAST service .","summary":"Metagenomic sequencing has produced significant amounts of data in recent years ., For example , as of summer 2013 , the MG-RAST metagenomics analysis system has been used to annotate over 110 , 000 data sets totaling over 43 Terabases ., With metagenomic sequencing finding even wider adoption in the scientific community , the existing web-based analysis tools and infrastructure in MG-RAST provide limited capability for comparative analysis ( i . e . 
, number of data sets ) ., Moreover , although the system provides many analysis tools , it is not comprehensive ., By opening MG-RAST up via a web services API ( application programmer's interface ) we have enabled a programmatic way for others to use their bioinformatics tools with MG-RAST data .","keywords":"biodiversity, ecology, biology and life sciences, computational biology","toc":null} +{"Unnamed: 0":301,"id":"journal.pcbi.1002635","year":2012,"title":"Modeling of Gap Gene Expression in Drosophila Kruppel Mutants","sections":"The segmentation gene network in the early Drosophila embryo provides a powerful model system to study the role of genes in pattern formation ., This network solves the fundamental problem of embryonic patterning: how to establish a periodic pattern of gene expression , which determines both the positions and the identities of body segments 1 , 2 ., The developmental process which performs this task is called segment determination ., The fruit fly segments are arranged sequentially along the anterior-posterior axis of the embryo ., All segments are determined simultaneously during the blastoderm stage , just before the onset of gastrulation 3 ., The segmentation genes have been subdivided into 4 classes based on their mutant phenotype 1 , 2 ., The maternal coordinate genes are expressed from the mother and form broad protein gradients in the anterior , posterior or terminal regions of the embryo 4\u20137 ., Other genes , which belong to the gap , pair-rule and segment-polarity classes , are zygotic , i . e . expressed in the embryo ., Most segmentation genes encode transcription factors , which in turn regulate the expression of many other genes , including segmentation genes themselves ., It was demonstrated by genetic analysis that segmentation genes form a hierarchical regulatory cascade , in which genes in higher layers ( e . g . maternal coordinate genes ) regulate genes in lower layers ( e . g .
gap genes ) , but not vice versa ., In addition , genes at the same hierarchical level interact with each other ., The gap gene system establishes discrete territories of gene expression based on regulatory input from long-range maternal protein gradients , Bicoid ( Bcd ) and Hunchback ( Hb ) in the anterior and Caudal ( Cad ) in the posterior of the embryo 8 , 9 ., The gap genes Kr , kni , hb , gt and tll are expressed in one to three domains , each about 10\u201320 nuclei wide 10 ., Early expression of the trunk gap genes Kr , hb , gt and kni is established through feed-forward regulation by maternal gradients ., After initial establishment , gap domain borders sharpen; both sharpening and maintenance of gap domain boundaries require gap-gap cross-regulatory interactions 11 ., This process is accompanied by the anterior shift of the Kr , kni and gt expression domains in the posterior region of the embryo 10 , 12 , 13 ., Kr plays a central role in segmental pattern formation , as indicated by the strong alteration of the expression patterns of almost all zygotic segmentation genes in mutants 14\u201316 ., Kr null mutants show deletion of thoracic and anterior abdominal segments as well as frequent mirror duplications in the abdomen 15 , 17 ., At the level of gene expression this mutation manifests in a large shift of the posterior Gt domain , resulting in an overlap of the positions of the posterior Gt and Kni domains 15 , 18 ., During the sharpening and maintenance stage of gap gene expression Kr acts as a repressor of gt and hb 15 , 19 , 20 ., The repression of gt , whose expression domains are strictly complementary to those of Kr , is strong , while the effect of Kr on hb is more subtle 21\u201324 ., It was observed in assays with cell lines carrying reporter constructs that the regulatory effect of Kr is concentration-dependent: the Kr monomer is a transcriptional activator , while at high concentrations Kr forms a homodimer and becomes a repressor that functions through the same target sequence as the activator ., However , it is difficult to establish whether such an effect occurs at physiologically relevant regulator concentrations 25 ., The segmentation gene network is one of the few examples of developmental networks studied using data-driven mathematical modeling 13 , 26\u201328 ., These models fall into two categories ., The phenomenological models do not require any a priori information about the regulatory mechanism 29 , 30 and try to reconstruct it by solving the inverse problem of mathematical modelling ., A major shortcoming of these models is that their parameters have no explicit connection to the genomic DNA sequence ., The second modelling approach seeks to extract information about gene regulation from the sequences of cis-regulatory regions and the measured or inferred binding of sequence-specific transcription factors to these elements 26\u201328; however , it still neglects major features of the transcription process , such as chromatin structure and modifications , binding site orientation and proximity to the transcription start site , etc ., Current simplifications and unknown features limit the predictive power of these models , but more powerful and complex models may be generated in the future using better datasets such as in vivo transcription factor occupancy , relative accessibility of different DNA regions , and in vivo data on the interplay between different transcription factors , nucleosomes and chromatin remodelling enzymes ., In this paper we apply a phenomenological model known as gene
circuits to reconstruct the gap gene network in Kr null mutants ., This model considers a row of nuclei along the A-P axis of the embryo ., Between nuclear divisions , the model describes three basic processes , namely protein synthesis , protein decay and diffusion of proteins between neighboring nuclei of the syncytial blastoderm ., A few basic assumptions about eukaryotic transcriptional regulation were incorporated into the model ., First , a sigmoid regulation-expression function was used to introduce regulatory inputs into the model ., Secondly , each regulatory interaction can be represented by a single parameter whose sign indicates the type of regulatory interaction: activation ( if it is positive ) , repression ( if negative ) , or no interaction ( if it is close to zero ) ., Third , it was assumed that regulatory inputs are additive and independent of each other ., The gene circuit models were successfully applied to correctly reproduce the quantitative features of gap gene expression in wild type 12 , 13 ., This study revealed five regulatory mechanisms responsible for the sharpening and maintenance of gap gene expression domains: broad activation by the maternal gradients of Bcd and Cad; gap gene auto-activation; strong mutual repression between gap genes which show complementary expression patterns ( hb and kni; Kr and gt ) ; weaker asymmetric repression between overlapping gap genes ( Hb on gt , Gt on kni , Kni on Kr , Kr on hb and Hb on Kr ) ; and repression by the terminal gene tll at the embryo termini ., The asymmetric repression between overlapping gap genes is responsible for the shifts of gap gene domains in the posterior region of the embryo ., It is important to note that the wild type gap gene circuit model has predictive power when molecular fluctuations of the input factors are taken into account 31 , 32 ., It is evident that to understand the gap gene network we need not only to describe the mechanism underlying its functioning in the intact state , but also to comprehend what happens when certain stimuli or disruptions occur ., Recently , Papatsenko and Levine ( 2011 ) constructed a dynamic model based on a modular design for the gap gene network , which involves two relatively independent network domains with elements of fractional site occupancy ., This model requires only 5\u20137 parameters to fit quantitative spatial expression data for gap gradients in wild type and explains many expression patterns in segmentation gene mutants obtained in studies published mainly in the late 1980s and early 1990s ., However , these patterns were characterized qualitatively by visual inspection , which may not capture the fine details of gene expression ., For example , previous studies based on qualitative visual analysis of gene expression patterns showed that a Kr null mutation results in a large shift of the posterior Gt domain , an overlap of the positions of the posterior Gt and Kni domains , and a decrease in the level of gt expression in the second half of cycle 14A 15 ., Here we obtained a large dataset on gap gene expression in Kr null mutants and extracted quantitative gene expression data using a data pipeline established previously 33 ., The analysis of these data allowed us to characterize the expression of the other gap genes at an unprecedented level of detail ., In particular , we showed that the significant decrease in the level of gene expression in the second half of cycle 14A is common to all gap gene expression domains ., This novel biological result seems counterintuitive , because genetic studies show that Kr
acts as a repressor , and therefore should come under close scrutiny ., The most serious limitation of the gap gene circuit models is their inability to correctly reproduce the expression patterns in trunk gap gene null mutants at a quantitative level , although a theoretical study had shown previously that such predictions are possible if gene circuit models are fit to simulated , noise-free data 29 , and simulating null mutants of the terminal gap genes tll and hkb was successful 12 , 13 , 34 ., A variety of reasons could be responsible for the failure , of which , from our point of view , the most important is the oversimplified representation of transcriptional regulation in the model ., Indeed , as was already mentioned above , the action of a regulator on its target gene is represented by a single parameter , whereas it is well known that the cis-regulatory elements ( CRE ) of segmentation genes often reproduce only one of the expression domains of an endogenous gene when placed upstream of a reporter gene 35\u201337 ., Moreover , different CREs of one gene can have different transcription factor binding site compositions , i . e . different regulatory inputs ., For example , computational prediction of transcription factor binding sites showed that the regulatory sequences which drive expression of gt in the anterior and posterior domains have different transcription factor binding site compositions: the anterior gt domain has regulatory inputs from Bcd and Kni , while the posterior domain contains inputs from Hb and Cad , which are absent in the sequences responsible for anterior expression 37 ., Similar to gt , two CREs essential for hb expression in the anterior domain and in the central stripe and posterior domain differ in transcription factor binding site composition 38\u201340 ., It is evident that current gene circuit models do not consider the mechanism of gene regulation at such a level of detail ., This defect does not interfere with the ability of these models to fit gap gene expression patterns in wild type; however , in a mutant background with a deficient set of regulators , the failure of the model to take such features into account may become essential ., To avoid such problems , we use a revised model which builds on the separate treatment of domains with different regulatory inputs ., This is possible by narrowing down the spatial domain of the model and considering only the posterior half of the blastoderm ( the region from 47 to 92% embryo length ( EL ) ) , in which each of the trunk gap genes is expressed in one domain ., As opposed to previous gap gene circuit models , which have a constant Bcd gradient and did not consider Cad data from late time points just before the onset of gastrulation 12 , 32 , and similar to the approach used in 30 , we implement Bcd as a time-variable input and use data on late Cad expression to represent the rapidly changing expression dynamics of these two genes ., After cleavage cycle 12 the Bcd nuclear gradient starts to decay 41 ., Analysis of data from fixed embryos showed that Bcd protein reaches its maximal level near the beginning of cycle 14A and thereafter starts to decrease slowly , culminating in an almost twofold decline by gastrulation 10 ., From the second quarter of cleavage cycle 14A onward , cad expression in the abdominal region starts to decrease gradually , and by gastrulation cad expression in the posterior region sharpens to a stripe which spans from 75 to 90% EL 10 ., The gene circuit models do not require any assumption about regulatory interactions within a gene network
., Instead , the regulatory topology of the network is obtained by solving the inverse problem of mathematical modeling , i . e . by fitting the model to the data 29 ., To obtain the estimates for the regulatory parameters that predict a specific network topology in mutants , we fitted the model to gap gene expression patterns in wild type and in embryos with a homozygous null mutation in the Kr gene simultaneously ., The logical justification of such an approach is to use the parameters of the wild type gap gene network as specific constraints on the regulatory weights in mutants in order , on the one hand , to obtain consistent parameter estimates for both genotypes and , on the other hand , to preserve the characteristic features of gene regulation in the mutant ., The parameter estimates obtained in such a way were further studied by applying identifiability analysis , which confirmed that fitting to two genotypes simultaneously substantially increases the statistical significance of the parameter values ., We use the modeling framework outlined above to explain the characteristic features of gap gene expression in Kr null mutants and in the posterior half of the blastoderm ., In what follows , we describe the expression patterns of gap genes in Kr null mutants and analyze quantitative gene expression data extracted from these patterns ., We then use these data as input to a new gap gene circuit model ., We show that , in contrast to earlier models , this model correctly reproduces the characteristic features of gap gene expression in Kr mutants ., In particular , it correctly reproduces the greater shift of the posterior Gt domain relative to wild type and the significant decrease in the level of gap gene expression in the second half of cycle 14A ., We next obtain the parameter estimates for the model ( and hence the predicted gap gene network topology in wild type and mutant ) and perform identifiability analysis to understand how reliable these estimates are ., We study the dynamical behavior of our model and analyze the role of individual regulatory loops in gap gene expression in wild type and mutants ., We show that the remarkable transformation of gap gene expression patterns in Kr mutants can be explained by the dynamic decrease of the activating effect of Cad on a target gene and the exclusion of the Kr gene from the complex network of gap gene interactions , which makes it possible for other interactions , in particular , between hb and gt , to come into effect ., Our model also predicts the derepression of the anterior border of the Hb posterior domain in Kr;kni double mutants , which is established in the absence of key repressors ., We validate this prediction and show the correctness of the network topology inferred in this work ., In wild type Drosophila embryos , gap genes are expressed as large intersecting domains along the A-P axis ( Figure 1A ) ., In general , all these domains exhibit similar temporal dynamics: after formation they start to grow , reach maximum expression levels around mid-cycle 14A and decline by gastrulation ., In the course of cycle 14A gap gene domains change their positions and shift to the anterior 10 ., We have shown that the asymmetric gap-gap cross-repression with posterior dominance is responsible for these shifts 12 ., In Kr null mutants , gap gene expression is significantly altered ( Figure 1B ) ., It has been previously reported that in these mutants the posterior domain of gt is expanded towards the center of the embryo 42 , 43 ., We find that in the course of cycle 14A the posterior domain of gt shifts
dynamically by 15% embryo length ( EL ) in the anterior direction and overlaps with the Kni domain ., Thus , by gastrulation , the difference in the position of the gt domain between mutants and wild type embryos constitutes approximately 10% EL ., The anterior shift of the Kni domain maximum in Kr mutants constitutes only 1 . 8% EL ., The Hb posterior domain in mutants is formed at the beginning of cycle 14A and shifts by about 3% EL in the anterior direction during this cycle ., Thus , the positional dynamics of this domain in mutants and wild type is similar ., The level of hb posterior expression in mutants is nearly the same as in wild type until time class 3 , but declines afterwards ., By gastrulation it constitutes only half of the wild type expression level ( Figure 1C , F ) ., The Gt posterior domain is initially lower than in wild type; it grows until time class 4 and declines significantly thereafter ( Figure 1D , G ) ., The level of kni expression remains consistently low throughout cycle 14A ( Figure 1E , H ) , with a slight decrease at the very end of this cycle ( not shown ) ., The features of gap gene expression in Kr mutants described above raise many questions ., In particular , the regulatory mechanisms underlying the decrease in gene expression levels , as well as the much larger shift of the posterior Gt domain , need to be explained ., In the following sections , we describe our modification of the gene circuit model 12 , 13 , 29 , 44 that correctly reproduces gap gene expression in Kr mutants and hence can serve as a tool to answer these questions ., The gene circuit model used in this work differs from previous implementations in several aspects ., First , we narrowed down the spatial domain of the model by considering only the posterior half of the blastoderm ( the region from 47 to 92% embryo length ( EL ) ) , in which each of the trunk gap genes is expressed in one domain ., This allows us to avoid the inherent limitation of the model , in which the action of a regulator on its target gene is represented by a single parameter ., Secondly , as opposed to previous gap gene circuit models , which used a constant Bcd gradient and did not consider Cad data from late time points just before the onset of gastrulation 12 , 32 , we implement Bcd as a time-variable input and use data on late Cad expression to represent the rapidly changing expression dynamics of these two genes at that stage ., We used bcd and cad profiles from the FlyEx database for cycle 13 and the eight temporal classes of cycle 14A as external inputs to our model equations ., We used the modeling framework outlined above to explain the characteristic features of gap gene expression in Kr null mutants and in the posterior half of the blastoderm ., To obtain estimates for the regulatory parameters that predict a specific network topology in mutants , the model was fitted to gap gene expression patterns in wild type and in embryos with a homozygous null mutation in the Kr gene simultaneously ., The DEEP method was applied to minimize the sum of squared differences between experimental observations and model patterns 45 , 46 and to find all parameters of the model equations , i . e . the regulatory weights , synthesis rates , and decay and diffusion constants , such that the characteristic features of gap gene expression in Kr null mutants are reproduced as closely as possible .
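For orientation, the model equations referred to here take the standard gene circuit form. The display below is reconstructed from the description above and the cited gene circuit literature, with our own index conventions; it is a sketch, not a verbatim copy of the paper's equations:

```latex
% v_i^a : concentration of the product of gene a in nucleus i
\frac{dv_i^a}{dt} =
    R_a \, g\!\Big( \sum_b T^{ab} v_i^b + \sum_e E^{ae} v_i^e + h_a \Big)
  - \lambda_a v_i^a
  + D_a \big( v_{i-1}^a - 2 v_i^a + v_{i+1}^a \big)
```

Here g is the sigmoid regulation-expression function, R_a, \lambda_a and D_a are the synthesis, decay and diffusion constants, T^{ab} and E^{ae} are the regulatory weights for gap gene and external inputs (the T and E matrices mentioned below), and h_a is a threshold parameter. The fit then minimizes the sum of squared differences between model output and data over all nuclei, time points and genes.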
We performed over 200 runs with different initial parameter approximations and control variables ., The search space was sampled uniformly for each parameter in the interval defined by biologically relevant limits ., A two-step procedure was applied to construct the ensemble of parameter sets ., In the first stage , the residual mean square ( RMS ) was checked and the sets with an RMS of less than 5% of the maximal gene expression value ( 255 in our data ) were accepted for further analysis ., In the second stage , we inspected the model expression patterns visually ., As a result , an ensemble of 11 parameter sets was obtained that correctly reproduces the dynamics of gene expression in wild type and mutant embryos , in particular the decrease of gap gene expression levels and the anterior shift of the gt domain ., To infer the topology of the regulatory network , we classified the estimates of the regulatory weights ( the elements of the T and E matrices ) into the following three categories: \u2018activation\u2019 ( parameter values greater than 0 . 005 ) , \u2018repression\u2019 ( parameter values less than \u22120 . 005 ) and \u2018no interaction\u2019 ( between \u22120 . 005 and 0 . 005 ) ., This leads to a predicted regulatory topology of the network based on which category a majority of the parameter estimates falls into ( summarized in Figure 2 ) ., The ensemble contains several networks , called consensus networks , in which the signs of the regulatory parameters coincide with the predicted network topology inferred from the fits ., Figure 2A shows simulation results together with experimental data , and Table S1 presents the parameters for one such network ., It is evident that , in spite of some patterning defects , especially at early stages , the model correctly reproduces the dynamics of gene expression in wild type and mutant embryos ., Some basic features of the gap gene network topology in wild type and mutant become immediately obvious from inspection of Figure 2B and Table S1 ., First , Cad activates zygotic gap gene expression ., Second , hb , Kr , kni , and gt show autoactivation ., Third , Bcd activates Kr , gt and kni in all parameter sets; however , in the case of hb it shows activation in approximately the same number of circuits as repression ., Fourth , all reciprocal interactions among the trunk gap genes are either zero or repressive ., An important exception is the activation of hb by Gt ., Finally , tll represses kni and Kr and weakly activates gt ., The identifiability analysis was conducted with respect to the model parameters estimated by fitting to the experimental data ., The model considers the time evolution of the protein concentrations of the four gap genes hb , Kr , gt , and kni in two genotypes: wild type and embryos with a homozygous null mutation in the Kr gene ., The total parameter set that minimizes the cost functional consists of 40 parameters ., It includes four subsets of 10 parameters each , one describing the regulatory action on each target gene ., In mutants the model is only fitted to quantitative gene expression data for 3 genes , gt , hb and kni , and hence the parameters of the Kr subset are estimated using data points from wild type embryos only ( half of all data points ) ., All the other parameter subsets are estimated from the whole dataset ., Due to lack of space we denote the elements of the inter-connectivity matrices T and E by single-letter notations of the genes: H , K , G , N , B , C , and T stand for hb , Kr , gt , kni , bcd , cad , and tll , respectively ., For example , one such element characterizes the regulatory action of Hb on kni .
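As an aside, the classification rule described above is simple enough to state in a few lines of code. This is a minimal sketch; the estimate values are invented for illustration, and the real ensembles are given in Tables S1 and S2:

```python
def classify(w, eps=0.005):
    """Classify a regulatory weight as in the text."""
    if w > eps:
        return "activation"
    if w < -eps:
        return "repression"
    return "no interaction"

# Toy ensemble: one regulatory weight across the 11 circuits.
estimates = [0.021, 0.013, 0.002, 0.034, -0.001, 0.011,
             0.008, 0.019, 0.004, 0.027, 0.015]
votes = [classify(w) for w in estimates]
# The predicted topology entry is the majority category
# ("activation" here, in 8 of 11 circuits).
print(max(set(votes), key=votes.count))
```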
The sensitivity of the model solution to parameter changes is characterized by the size of the confidence intervals ., The confidence intervals ( 2 ) ( see Methods ) are constructed under the assumption of normally distributed error in the data , which is not satisfied for gene expression data ., The error in the data increases almost linearly with the mean concentration , which is typical of the Poisson rather than the normal distribution ., To make the error independent of the mean , we applied a variance-stabilizing transform to both the data and the model solution ., The transformed objective functional was minimized using the parameter estimates obtained for the non-transformed functional as initial values for the optimization procedure ., The new solutions were found in a very close vicinity of the initial parameter sets ., The 11 parameter sets , which minimize the transformed model functional , are given in Table S2 and will be referred to as circuit parameter sets ., The newly estimated regulatory weights were classified into regulatory categories as described in subsection Gene Network Topology ., This classification results in a predicted regulatory topology of the network ( Figure 2C ) , which is largely the same as in Figure 2B; however , not all the entries in the two tables coincide ., The estimates of some parameters do not uniquely determine the type of gene regulation in different circuits , i . e . in some circuits the parameter estimates exceed the threshold 0 . 005 in absolute value , while in others they are below the threshold ., This is also true for the new parameter sets; however , the number of such circuits differs from that given in Figure 2B ., The confidence intervals for individual parameters are constructed in the vicinity of the model solution ., The results for one representative circuit are presented in Figure 3 ., Most of the values of the regulatory parameters are very close to zero , and it is important to determine whether the value ( more precisely , the sign ) of a regulatory parameter is significant ., The hypothesis that the parameter estimate is non-zero is tested as follows: if a confidence interval includes both positive and negative values , the hypothesis is rejected; otherwise , it is accepted ., Our classification method for inferring the topology of the regulatory network was based on comparing the values of the regulatory parameters with the threshold ., However , as already mentioned , the estimates of some parameters take values which exceed the threshold only in part of the circuits ., By exploring the confidence intervals for these parameters we came to the conclusion that almost all the estimates that are close in absolute value to the threshold are insignificant ., This result explains the discrepancies between the network topologies presented in Figures 2B and 2C: conclusions about the type of gene interaction that are based on insignificant parameter estimates are unreliable ., The analysis of confidence intervals conducted for all the circuits ( Figures S2 and S3 ) allowed us to refine the predicted regulatory network topology ( Figure 2C ) ., We classify parameters as insignificant activation\/repression if the parameter estimates are positive\/negative in almost all the circuits but their
confidence intervals contain zero , and hence the parameter sign cannot be identified ., As a result , several regulatory parameters are non-identifiable , and therefore we cannot draw any conclusion about the corresponding interactions ., Interestingly , most of these interactions involve Kr as a target gene or Bcd as a regulator of gap gene domains located in the posterior of the embryo ., The other regulatory parameters are well identifiable and , hence , the identifiability analysis corroborates the gene network topology drawn from classifying the parameter values only ., It should be stressed that the confidence intervals provide full information about the parameter estimates only in the case of parameter independence; otherwise the intervals are overestimated ., Moreover , strong correlation between parameters may lead to their non-identifiability , because a change in one parameter value can be compensated by appropriate changes of other parameters and , hence , does not significantly influence the solution ., In view of this , we investigated the dependencies between parameters using collinearity analysis of the sensitivity matrix ., This method allows us to reveal correlated and hence non-identifiable subsets of parameters ., The sensitivity matrix defined in Methods was analyzed in the vicinity of the 11 points in parameter space that define the optimal model solutions ., The collinearity index ( equation ( 3 ) in Methods ) was computed for all the subsets of dimension k of the parameter set ., The threshold value for the index was chosen equal to 7 ., In the pairwise case this value corresponds to approximately 99% Pearson correlation between columns of the sensitivity matrix ., The method detected subsets of dimension 2 and 3 with collinearity indices exceeding the threshold value , i . e . subsets of poorly identifiable or non-identifiable parameters ., Most of the parameter combinations in these subsets were the same for all 11 circuits ( see Table 1 ) ., Almost all the parameter pairs in subsets of dimension 2 were related to the target gene Kr ., To explain this result we additionally computed the collinearity indices between columns of the upper half of the sensitivity matrix , which only includes the partial derivatives computed at the 1532 wild type observations ., The method detected many more parameter subsets with collinearity indices greater than the threshold , which included parameters characterizing the inputs of all four genes ., The parameters of the full model fitted to two genotypes are thus better identifiable than those of a model that solely describes the wild type data ., However , the parameters of the Kr subset cannot be better identified from the full model , as they are estimated from the wild type observations only ., The subsets of dimension 3 with the highest collinearity indices are also common to most of the circuits ., Most of these combinations are related to gt and Kr , i . e . they include parameters for these two target genes .
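For readers who want to reproduce this kind of screen, below is a minimal numpy sketch of the subset scan. Equation (3) is not reproduced in this excerpt, so the sketch assumes the standard definition of the collinearity index (unit-normalize the columns of the sensitivity matrix, then take one over the square root of the smallest eigenvalue of the subset's normal matrix); the sensitivity matrix itself is replaced by random numbers of the right shape:

```python
import numpy as np
from itertools import combinations

def collinearity_index(S_normed, subset):
    """1 / sqrt(lambda_min) of the subset's normal matrix; large values
    mean a change in one parameter can be compensated by the others."""
    SK = S_normed[:, subset]
    lam_min = np.linalg.eigvalsh(SK.T @ SK)[0]
    return 1.0 / np.sqrt(lam_min)

# Stand-in sensitivity matrix: 1532 wild-type observations x 40 parameters.
rng = np.random.default_rng(0)
S = rng.normal(size=(1532, 40))
S_normed = S / np.linalg.norm(S, axis=0)       # unit-norm columns

threshold = 7.0
for k in (2, 3):
    for subset in combinations(range(S.shape[1]), k):
        gamma = collinearity_index(S_normed, list(subset))
        if gamma > threshold:                  # poorly identifiable subset
            print(subset, round(gamma, 2))
```

With a real sensitivity matrix in place of the random stand-in, the printed subsets correspond to the poorly identifiable parameter combinations of Table 1.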
The two approaches applied to characterize parameter identifiability are closely connected and complement each other ., By exploring the confidence intervals we can see to what extent the model solution is sensitive to parameter changes and test the significance of a parameter's sign , but this method does not explain the sources of non-identifiability ., One such explanation can be provided by collinearity analysis ., The correlation between parameters revealed by this approach can clarify the insignificance or unreliability of parameter estimates with large confidence intervals ., For example , we derive the non-identifiability of one parameter from the large size of its confidence interval , and at the same time the analysis of the sensitivity matrix detects a pair of parameters with a high mean collinearity index of 7 . 76 ( see Table 1 ) ., Thus , the poor identifiability can be explained by the correlation between two regulatory parameters , which is reflected in their high collinearity index ., The gene network topology inferred from both the classification of parameter values and the parameter identifiability analysis is presented in Figure 2C ., As Figure S4 shows , it is largely in agreement with the topologies predicted by earlier models 13 , 31 , 34 , 47 ., Strong constraints for mutual repression are present for kni and hb , which show complementary expression patterns ., In addition , both Kr on hb and Hb on gt exert strong repressive action ., Some previous models had predicted the repressive action of Kr on hb 31 , while most showed no interaction 13 , 34 , 47 ., Many repressive interactions between gap genes show weaker constraints toward repression; interestingly , we found very weak or no dynamical constraints for the repression of Gt on Kr , an interaction with a strong constraint for repression in all wild type gene circuit models 13 , 31 , 34 , 47 , 48 ., In addition , our model predicts weak repressive interactions between Kni and gt and between Kr and kni ., In earlier gap gene circuit models the first interaction was predicted as no interaction 13 , activation 34 , or activation in about half the circuits and repression in the other half 31 ., The repressive action of Kr on kni is observed only in our model; all other models predicted no interaction between the two genes ., In addition , in the current model Bcd shows activation of hb in approximately the same number of circuits as repression , while in all previous models this interaction was predicted as activation ., Weak activation of gt by Tll is now present in 10 parameter sets , while previous results predicted this interaction as repression ., Finally , our model predicts no interaction between Tll and hb ., Some previous models had classified this interaction as activation 31 , 34 , while others predicted it as repression or no interaction 13 , 48 , 49 ., A null mutation in the Kr gene results in strong alteration of the expression patterns of almost all zygotic segmentation genes ., In the gap gene network this mutation manifests in a significant reduction of gap gene expression levels in cycle 14A , as well as in a large shift of the posterior Gt domain and an overlap of the positions of the posterior Gt and Kni domains ., Previous gap gene circuit models fail to correctly model the gap gene expression patterns in embryos homozygous for a null mutation in a trunk gap gene ., A new model introduced here correctly reproduces ","headings":"Introduction, Results, Discussion, Methods","abstract":"The segmentation gene
network in the Drosophila embryo solves the fundamental problem of embryonic patterning: how to establish a periodic pattern of gene expression , which determines both the positions and the identities of body segments ., The gap gene network constitutes the first zygotic regulatory tier in this process ., Here we have applied the systems-level approach to investigate the regulatory effect of the gap gene Kruppel ( Kr ) on segmentation gene expression ., We acquired a large dataset on the expression of gap genes in Kr null mutants and demonstrated that the expression levels of these genes are significantly reduced in the second half of cycle 14A ., To explain this novel biological result , we applied the gene circuit method , which extracts regulatory information from spatial gene expression data ., Previous attempts to use this formalism to correctly and quantitatively reproduce gap gene expression in mutants for a trunk gap gene failed; therefore , here we constructed a revised model and showed that it correctly reproduces the expression patterns of gap genes in Kr null mutants ., We found that the remarkable alteration of gap gene expression patterns in Kr mutants can be explained by the dynamic decrease of the activating effect of Cad on a target gene and the exclusion of the Kr gene from the complex network of gap gene interactions , which makes it possible for other interactions , in particular , between hb and gt , to come into effect ., The successful modeling of the quantitative aspects of gap gene expression in a mutant for the trunk gap gene Kr is a significant achievement of this work ., This result also clearly indicates that the oversimplified representation of transcriptional regulation in the previous models is one of the reasons for the unsuccessful attempts at mutant simulations .","summary":"Systems biology aims to develop an understanding of a biological function or process as a system of interacting components ., Here we apply the systems-level approach to understand how the blueprints for segments in the fruit fly Drosophila embryo arise ., We obtain gene expression data and use the gene circuits method , which allows us to reconstruct the segment determination process in the computer ., To understand the system , we need not only to describe it in detail , but also to comprehend what happens when certain stimuli or disruptions occur ., Previous attempts to model segmentation gene expression patterns in a mutant for a trunk gap gene were unsuccessful ., Here we describe the extension of the model that allows us to solve this problem in the context of the Kruppel ( Kr ) gene ., We show that the remarkable alteration of gap gene expression patterns in Kr mutants can be explained by the dynamic decrease of the activating effect of Cad on a target gene and the exclusion of Kr from the complex network of gap gene interactions , which makes it possible for other interactions , in particular between hb and gt , to come into effect .","keywords":"systems biology, developmental biology, gene regulation, gene expression, regulatory networks, molecular genetics, biology, computational biology, genetics, gene networks, pattern formation, genetics and genomics","toc":null} +{"Unnamed: 0":428,"id":"journal.pcbi.1003450","year":2014,"title":"Epigenetics Decouples Mutational from Environmental Robustness.
Did It Also Facilitate Multicellularity?","sections":"Understanding the evolution of major transitions in the complexity of organisms remains one of the key challenges in modern biology 1 , 2 ., In particular , the transition to multicellularity required the evolution of several innovations at the molecular level in order to satisfy three key requirements: cell-to-cell adhesion , cell-to-cell signaling , and cellular differentiation 3 , 4 ., Such molecular innovations can often be facilitated by genomic duplication and subsequent specialization 5 as well as other evolutionary processes such as exaptation 6 , 7 and coevolution 8 ., In the case of cellular differentiation , the evolution of epigenetic gene regulation is arguably the most important; enabling molecular innovation during the expansion of the Metazoa 9 , 10 ., Of course , molecular innovations are also subject to multiple constraints which may be imposed externally through the environment 11 or internally , for example as a consequence of the developmental process 12 ., Here we will be concerned with robustness as an evolved internal constraint ., Robustness in biological systems is the property of persistent behavior despite genetic and environmental insults ., Previous studies , using gene regulatory network models , have shown that networks will evolve robustness to genetic mutations under conditions of stabilizing selection 13 , 14 ., This result has been experimentally verified in RNA viruses 15 , yeast 16 , 17 , and in the process of RNA folding 18 ., In addition to genetic mutations , organisms are exposed to environmental changes ., Previous studies using gene regulatory network models have shown that environmental and mutational robustness are positively correlated and are therefore expected to increase together under stabilizing selection 16 , 17 , 18 , 19 , 20 , 21 ., Furthermore , studies exploring robustness of miRNA sequence have shown that mutational robustness develops directly in response to evolving environmental robustness 22 ., Indeed computational models of cell differentiation also show the presence of robustness 23 ., However , invariance to the environment poses an obstruction to cell differentiation in multicellular organisms where internal environmental factors dictate cell fate decisions ., Highlighting the Metazoan cell differentiation dependence on the environment is recent work showing that changes in a small number of key growth factors is capable of altering cell fate decisions 24 , 25 ., For example , changes in expression of ct4 , Sox2 , Klf4 and c-Myc can drive conversion of fibroblasts to cardiomyocytes 26 ) ., Furthermore , the developmental impact of environmental sensitivity can be observed in the developing human fetus which is most vulnerable to environmental chemicals such as alcohol within the first few weeks of pregnancy 27 , 28 , 29 ., Therefore , how did multicellular organisms develop sensitivity to the internal environment , promoting cell differentiation , while retaining mutational robustness ?, The available evidence suggests that the transition to multicellularity was accompanied by major innovations in epigenetic regulation 30 , 31 , 32 ., Indeed chromatin states are in large part responsible for the gene expression differences across cell types 33 , 34 , 35 , 36 ., Post-translational modification of histones alters chromatin structure to encourage or repress transcription ., A key group of proteins responsible for marking regions for transcriptional repression during 
development are the Polycomb Group Proteins ( PcGs ) ., Early studies elucidated the general functionality of this protein group in developing Drosophila embryos ., In particular it was found that the chromosomal regions targeted by PcGs were transcriptionally repressed only if genes in the region were exhibiting low levels of expression when the PcGs became active 37 ., In this manner the PcGs were found to be responsible for turning off discrete sets of genes in different cell types depending on expression levels during early development ., For example , MyoD , a transcription factor required for myogenic commitment , is unable to access its binding sites in non-myoblast cells due to PcG dependent methylation 38 ., In addition , it has been shown that activation of muscle-specific genes in the vicinity of the PcG binding site prevent the PcGs from hypermethylating the site , thus allowing MyoD to exert transcriptional activation effects ., This functionality has motivated speculation that PcGs may have aided in the transition from a unicellular to a multicellular world by promoting differential expression in cell differentiation 39 , 40 ., Supporting this hypothesis , evolutionary analysis of the PcG Polycomb Repressive Complex 2 ( PCR2 ) has revealed that homologs of the core components ( E ( Z ) , ESC , Su ( z ) 12 , and Nurf55 ) existed prior to multicellular lineages but were rarely found present as a functional complex in single cell organisms ( although it is likely the last common unicellular ancestor of Metazoa did have all the components in place ) 39 , 40 , 41 ., In addition , Saccharomyces cerevisiae and other unicellular fungi with multicellular ancestors do not have the full set of functional homologs , correlating the loss of PcGs with reversal of multicellularity 39 ., To explore how a dynamic epigenetic process such as chromatin modification affects robustness and cell differentiation we have extended a well-established gene regulatory network model 13 , 42 with an epigenetic mechanism modeled on the Polycomb system ., In accordance with previous results we find that in the absence of an epigenetic mechanism both mutational and environmental robustness co-evolve by increasing together ., However , with the introduction of the Polycomb mechanism we see a decoupling of environmental and mutational robustness ., Mutational robustness still increases under stabilizing selection in concordance with experimental results but environmental robustness decreases , thus increasing responsiveness to the environmental cues ., In order to evaluate the capacity for cell differentiation in the model , we quantified the ability for producing alternative steady states ( outputs ) in response to novel environmental conditions ( inputs ) ., Consistent with the increase in environmental sensitivity we found that the Polycomb mechanism greatly facilitated the ability to create new input\/output mappings , suggesting a strongly increased capacity for generating alternative cell fates ., Our results suggest a clear link between epigenetic regulation and cell differentiation in that the epigenetic mechanism allows a gene regulatory network to be altered dynamically , effectively creating multiple networks out of a single regulatory architecture ., In order to study the evolution of a Polycomb-like epigenetic mechanism we extended an established model of evolution in gene regulatory networks 13 , 42 ., Briefly ( see Methods for details ) , the model functions on two levels: population dynamics and 
gene regulatory network (the genotype-phenotype mapping). At the lower level of the genotype-phenotype mapping, the genotype of each individual is represented as a gene regulatory network. Gene expression dynamics are initiated by an input vector and lead to a steady state; this steady state defines the phenotype (individuals not reaching a steady state have zero fitness). At the population dynamics level, individuals undergo iterations of mutation, reproduction and selection. We measure mutational robustness as described previously 13, 14, 43 by randomly mutating an entry in the interaction matrix and comparing the effect on the phenotype to that for the unmutated matrix. Following Ciliberti et al. 19, we measure environmental robustness by introducing random changes into the input vector and similarly considering the effect on the phenotype. Epigenetic regulation through chromatin remodeling is postulated to be a key mechanism through which a single genome can dynamically change gene expression to produce distinct stable cell types 30, 31, 32. To determine the effect of epigenetic mechanisms on the two distinct forms of robustness, we incorporated Polycomb group (PcG)-like activity into the gene regulatory model. Here, we assume that Polycomb is expressed beginning at a fixed time during development. Susceptibility to Polycomb for each gene (representing the presence of cis-acting Polycomb Response Elements) is encoded by a binary indicator such that, from that time onwards, the expression of each gene is repressed by the Polycomb protein if the gene is susceptible and its expression level falls below a threshold. This behavior is modeled upon the known function of the Polycomb Repression Complex 1 (PRC1) in the Drosophila embryo, where the Hox genes (whose initial expression is determined by transiently expressed Gap genes) are permanently repressed by PRC1, thus maintaining anterior\/posterior expression patterning 44. More formally, the expression dynamics are defined by Equation (1), in which a sigmoid function maps summed regulatory input to an expression level and a Heaviside step function (equal to 0 if x<0 and 1 if x\u22650) implements the Polycomb switch. Susceptibility to Polycomb is set to the same initial value for all genes at the beginning of each simulation (generation 0) but is subject to change at a mutation rate such that genes can gain or lose susceptibility (i.e., the susceptibility indicator transitions between 0 and 1 with some probability in each offspring).
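To make the dynamics above concrete, here is a minimal Python sketch of one developmental run under a Polycomb-like gate. The names (develop, t_pc, theta) are ours, not the paper's, and the exact update rule is an assumption reconstructed from the verbal description, since the original symbols of Equation (1) were lost in extraction.

```python
import numpy as np

def sigmoid(x):
    # maps summed regulatory input to an expression level in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def develop(W, s0, p, t_pc=10, theta=0.05, max_steps=100, tol=1e-4):
    """Iterate expression dynamics with a Polycomb-like gate (hypothetical form).

    W     : (N, N) regulatory interaction matrix
    s0    : length-N binary input (initial expression) vector
    p     : length-N 0/1 vector of Polycomb susceptibility (PRE present)
    t_pc  : developmental time at which Polycomb becomes active
    theta : expression threshold below which susceptible genes are locked off
    """
    s = s0.astype(float)
    locked = np.zeros(len(s), dtype=bool)
    for t in range(max_steps):
        s_new = sigmoid(W @ s)
        if t >= t_pc:
            # Heaviside gate: susceptible, lowly expressed genes are repressed
            locked |= (p == 1) & (s_new < theta)
        s_new[locked] = 0.0                 # permanent repression once locked
        if np.max(np.abs(s_new - s)) < tol:
            return s_new                    # steady state: the phenotype
        s = s_new
    return None                             # no steady state -> zero fitness
```

Under this sketch, the same matrix W can reach different steady states from different inputs once Polycomb has pruned different gene subsets, which is the intuition the text develops.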
Here we are modeling the evolution of the Polycomb Response Element (PRE), a small canonical base sequence that is targeted by PcGs in higher metazoans 45, 46. In order to assess the impact of the Polycomb mechanism on the evolution of robustness, we measured both environmental and mutational robustness in simulations over 1000 generations. First we set the mutation rate for susceptibility to zero, thus eliminating the possibility of evolving any epigenetic function. In keeping with previous results 19, we found that under these conditions both mutational and environmental robustness are positively correlated and increase in tandem (Figure 1, blue lines). However, this relationship was inverted when we allowed the Polycomb mechanism to evolve by setting the susceptibility mutation rate equal to the per-individual mutation rate used for the matrix of regulatory interactions. Here mutational robustness increased while environmental robustness decreased (Figure 1, red lines). These results were consistent across a wide variety of parameter values (see Figure S1). In addition, we modeled the results while allowing for a changing network topology (links could be created and destroyed) and found that mutational and environmental robustness remained decoupled (see Figure S2). In summary, we have shown that introducing a Polycomb-like epigenetic mechanism into a transcriptional regulation network model is capable of decoupling environmental and mutational robustness. Cell differentiation in multicellular organisms usually begins with expression changes in a small number of key differentiation genes in response to environmental cues, often upstream genes in the pathway. Expression of an upstream gene will in turn trigger larger sets of downstream genes that distinctly define each cell type. One of the best understood examples of this is muscle differentiation, where the key gene MyoD regulates hundreds of downstream targets 47, including important differentiation factors such as muscle-specific creatine kinase (MCK) 48 and the muscle acetylcholine receptor (AChR) alpha subunit 49. In multi-celled organisms that use epigenetic regulation, cell types are further determined by chromatin changes that lock the cell fate. In terms of our model, the early differentially expressed genes can be considered alternative inputs for our system, and the transcription of genes in the differentiated cell can be considered the output. We thus treat each input\/output mapping as the equivalent of a cell type and evaluate whether an evolving network is capable of handling multiple input\/output mappings, and in particular whether the capacity to create new mappings is altered by epigenetic functionality in the model. We therefore allowed a population to evolve under stabilizing selection for a fixed number of generations (100 in the main text results; longer values were tested as well .
See Figures S1 and S2 ) and then evaluated whether a randomly selected individual from the population could accommodate a new input state and produce a novel output state ( see Methods ) ., The input for the new state was chosen by flipping ( 0\u21941 ) each binary input with probability ( in main text results , though values up to give similar results \u2013 see Table S1 ) ., The corresponding stable output , , was compared to the initial output , , and to the founders initial output , , using a normalized distance measures and respectively ( see Methods ) which had to be greater than 0 . 05 in both cases for to be considered a new unique output state ., If no such significantly different output was found , we repeated the attempt to create a new input\/output mapping ( random individual , random input state ) up to total of 100 times before considering the network unable to create a new input\/output state ., Without epigenetic functionality we found that the system was unable to create a new input\/output in 47% of 200 cases ., However , with Polycomb it was able to find a new input\/output 100% of the time ( Fig . 2 inset ) , a highly significant difference ( p\\u200a=\\u200a8 . 62\u00d710\u221222 , Fishers exact test ) suggesting that introducing the epigenetic mechanism enabled networks to evolve a strongly increased capacity for adding new input\/output states ., Multi-stability was found after testing an average of just 7 . 55 individuals compared to the case without Polycomb where we were unable to detect multistability even after testing 100 individuals ., Furthermore , the difference is highly robust to different values in the Polycomb threshold ( ) as shown in Figure 2 , since starting with values of =\\u200a0 . 05 we already have a capacity above 99% of accepting a new state across many parameter values ., These results are in accordance with the result described above showing that environmental robustness without Polycomb increases through evolutionary time , making the system less likely to produce a unique output even when inputs are altered ., However , with Polycomb the network becomes more sensitive to changes in the environment ( represented here by changes in the input vector ) and consequently acquires the capacity for producing a new output when the inputs are perturbed ., ( In addition , we tested adding multiple new input\/output mappings , see SI Table S1 ) ., The role of Polycomb Group Proteins ( PcG ) , discovered in Drosophila , include transcriptional repression of genes showing low expression during early development , a key process in cell differentiation 37 ., Homologs of the core functional proteins comprising the PRC-2 complex ( a component of PcGs ) are present in some eukaryotic unicellar ancestors but are nearly ubiquitous in the multicellular world 39 , 40 , 41 ., The phylogenetic distribution of PcG components and their role in development suggests that Polycomb has played a key role in enabling cell differentiation 40 ., In order to study the evolutionary consequences of Polycomb functionality we incorporated Polycomb functionality into a modeling framework 13 , 42 which captures key features of gene regulatory networks in an evolutionary context ., The evolution of novel mechanisms for controlling gene expression has evolved in tandem with more complex life forms ., Prokaryotes possess cis-regulatory elements , operons and some species show evidence of histone style chromatin structure 9 ., As the Eukarya evolved from simpler unicellular organisms 
to complex Metazoa , controlling specialized cell functionality became essential ., At the same time , the repertoire of gene expression control expanded to include mechanisms such as methylation , acetylation , ubiquination , and small RNA mediated transcriptional regulation ( i . e . RNAi ) , all of which sculpt gene expression for specialized function 9 ., As each of these mechanisms arose , they often functioned \u201corthogonally\u201d of the others in a mechanistic sense ., For example , repression of gene expression can be achieved independently either by cis-regulation ( recruitment of repressing TFs to regulatory region ) or by histone modifications at the relevant locus ., These methods result in the same outcome , transcriptional repression , but work through wholly independent mechanisms ., By utilizing chromatin states , Polycomb effectively modifies the architecture of the gene regulatory network in real time ( Figure 3 ) ., As such Polycomb simplifies the architecture by carving out segments of the network to respond to different environmental cues ., Polycomb-targeted genes that exhibit low expression during early development ( expression of PcGs begins as early as 3 hours post-fertilization in the Drosophila embryo ) are continuously repressed through heterochromatin formation , nullifying their associated cis-regulatory effects ., However , under a different set of environmental conditions ( i . e . , in another developmental context ) the same genes might not be enveloped in heterochromatin , allowing the cis-regulatory elements to control expression ., This method allows cells to use a single set of transcriptional regulators ( PcGs ) and yet create very different patterns of expression in distinct cell types ., For example , undifferentiated mesodermal cells require the expression of MyoD to become myoblast cells ., However , MyoD is repressed through the activity of Polycomb ( in particularPRC-2 ) unless the necessary genes ( controlled via adjacency to the PREs ) are expressed early in cell division 38 ., In this manner Polycomb inhibits MyoD in all cells except those destined to become myoblast cells ., This design pattern effectively stratifies a single network into many networks , suggesting a functional role for Polycomb in the evolution of cell differentiation , a key requirement for the evolution of multicellularity ., To explore the development of differential expression we evaluated the capacity of the model to accommodate multiple input-output mappings , as in previous studies 50 ., We found the ability to adopt multiple input\/outputs is greatly facilitated with the functionality of Polycomb ( Figure 3 ) ., This finding is consistent with the evolutionary data showing that the essential components of Polycomb function are almost ubiquitous in the multicellular world but are rarely all present simultaneously in unicellular organisms 39 , 40 , 41 again strengthening the hypothesis that Polycomb played a key role during the evolution of multicellularity 3 , 4 ., Further evidence arises from our finding that evolution under Polycomb decoupled mutational and environmental robustness , suggesting that Polycomb can increase sensitivity to environmental conditions for the purposes of cell differentiation ., Previous work has shown that mutational robustness develops in gene-regulatory networks under conditions of stabilizing selection , and that mutational robustness and robustness to environmental changes are correlated 16 , 17 , 18 , 21 , 43 ., This correlated 
robustness feature is clearly incongruent with multicellular development, where minimal (though particular) environmental cues are capable of drastically changing cellular phenotypes. For example, regulation of only four key transcription factors is needed to change a fibroblast into a cardiomyocyte 26. When Polycomb functionality is added to the developmental program in the model, it facilitates effective real-time changes to network connectivity that in turn promote environmental sensitivity. However, each effective network is still under stabilizing selection, so mutational robustness develops. With Polycomb, the switch between these effectively distinct network architectures is initiated by changing the initial environmental conditions, making the system more responsive to environmental changes. This real-time remodeling makes use of subnetworks for multiple input\/output mappings rather than the creation of separate modules within the network. Indeed, previous work by Borenstein and Krakauer 51 on the same base model we used showed that only a limited number of phenotypes out of the total phenotype space are possible. It appears that the epigenetic addition to the model makes many more of these phenotypes obtainable. Biological evidence for decoupling these types of robustness exists in developing multicellular organisms, such as the human fetus, where slight changes in the environmental conditions (for example, exposure to alcohol during the first weeks) can cause severe phenotypic changes 52, 53, indicative of high environmental sensitivity. At the same time, the approximately 70 point mutations acquired on average in each human generation 54 rarely produce catastrophic changes, thus demonstrating high mutational robustness. These findings are consistent with our modeling predictions for a system developing under Polycomb control. Epigenetic mechanisms have been suggested to evolve in numerous ways. As with the evolution of sexual reproduction, no single explanation has become definitive. Similarly, multicellularity has been suggested to evolve by different means and different mechanisms. Here we put forward an explanation that ties the evolution of multicellularity to that of epigenetic mechanisms. Additionally, we hypothesize that the capacity to respond differently to different environmental signals, as is required during the developmental program of multicellular organisms, is only one evolutionary advantage of epigenetic processes. Other advantages include the contribution of epigenetic mechanisms to the emergence of modularity. It has been argued previously that network modularity contributes to robustness 55. As we have shown, Polycomb, in response to environmental cues, carves the network into sub-networks such that beyond the critical time only a subset of the interacting elements plays a role in shaping the final gene expression pattern. Polycomb thus amplifies the effect of environmental perturbation beyond genetic perturbation, and introduces modification at the architectural level. Such a change in network architecture introduces higher sensitivity to environmental changes while maintaining robustness to genetic perturbations that have no effect on network architecture. It has been shown that under stabilizing selection, our model tends to decrease the mean number of steps needed to reach a stable output state 13.
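As an illustration of how the two robustness measures discussed here differ only in what is perturbed, the following sketch contrasts them. It assumes the develop function from the earlier sketch and a simple mean-squared phenotype distance; both are hypothetical stand-ins for the paper's exact definitions, and it further assumes both developmental runs reach a steady state.

```python
import numpy as np

def distance(a, b):
    # normalized phenotypic distance; the paper's exact normalization may differ
    return float(np.mean((a - b) ** 2))

def mutational_robustness(W, s0, p, develop, rng):
    """Perturb one existing regulatory link and compare phenotypes."""
    base = develop(W, s0, p)
    rows, cols = np.nonzero(W)
    k = rng.integers(len(rows))
    W_mut = W.copy()
    W_mut[rows[k], cols[k]] = rng.normal()   # redraw one interaction strength
    return 1.0 - distance(base, develop(W_mut, s0, p))

def environmental_robustness(W, s0, p, develop, rng):
    """Flip two input bits (the environmental cue) and compare phenotypes."""
    base = develop(W, s0, p)
    s0_env = s0.copy()
    idx = rng.choice(len(s0), size=2, replace=False)
    s0_env[idx] = 1 - s0_env[idx]            # 0->1 or 1->0 flips
    return 1.0 - distance(base, develop(W, s0_env, p))
```

The decoupling result is then simply that, with Polycomb enabled, the first quantity keeps rising under stabilizing selection while the second falls.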
Thus, further analysis of the dependency of the time to reach a stable output on the time at which Polycomb is activated (the Polycomb activation time in our model) would further elucidate the evolutionary role of epigenetic mechanisms. Metazoan evolution is characterized by specialization of cell and tissue functionality. During multicellular development, cells become specialized in function within the organism. This differentiation requires real-time analysis of the local environment to direct cellular development. Our findings, although based on the functionality of Polycomb, suggest a general design principle for the evolution of multicellularity, namely the real-time stratification of the gene network. The effect of the PcG mechanism is to elegantly limit the usable genetic information for a cell based on events during development. By effectively removing genes from the accessible gene network, the complexity of millions of potential interactions among thousands of genes is reduced. Following Siegal and Bergman 13, the model consists of a gene regulatory network in which each gene has the ability to regulate the expression of any of the others. The topology is held in the form of an interaction matrix with non-zero entries, wij, representing connections within the regulatory network (a negative value denotes an inhibitory effect). The non-zero entries in the matrix are randomly assigned at the beginning of each simulation with a probability given by the connectivity of the network. To initiate the development process, a random binary initial condition vector (i.e., containing either 0 or 1 in each entry) is selected. Gene expression dynamics are then computed according to Equation 1. Once a stable founder individual is found, a population of a given size (kept constant through the simulation) is founded by that individual. Evolution of the gene network proceeds through a standard population-genetic process. Mutations occur via changes to the non-zero entries of the matrix, with a 10% chance of a single mutation per genome. Mating is carried out by selecting two random individuals from the population and then selecting random rows from each parent's matrix to create an offspring genotype (sexual reproduction). Selection then acts in two ways: through developmental instability (an individual is removed if no equilibrium gene expression can be generated, as determined by whether all real components of the eigenvalues of the Jacobian matrix are less than or equal to 0; the Jacobian is defined in terms of the interaction matrix and the Kronecker delta, which equals 1 only when its two indices coincide and 0 otherwise) and through distance from an optimal phenotype (defined under stabilizing selection as the phenotype of the initial founder), using the fitness formula in Equation (2). Measuring the mutational robustness of our networks was done in the same manner as in multiple previous studies 13, 43, 56. For each individual in the population, we mutate exactly one random connection in the matrix. We simulate gene expression dynamics until a new steady state is reached, or until a maximum number of iterations has elapsed, and calculate the phenotypic distance between the new resulting output vector and the original using Equation (2) above. Identical steady-state vectors would be considered as having absolute mutational robustness; for the sake of clarity, we report mutational robustness as the complement of this distance. To measure our networks' robustness to environmental changes, we used a measure outlined in previous studies 43. In this measure, we vary the input vector by randomly flipping two entries (a 0\u21921 or 1\u21920), reflecting the small environmental differences needed to alter cell
fate in Metazoa ., Using the manipulated input vector we re-compute gene expression dynamics ., After altering the input conditions we calculate the divergence from the original in the same manner as for mutational robustness and report it in the same manner .","headings":"Introduction, Results, Discussion, Methods","abstract":"The evolution of ever increasing complex life forms has required innovations at the molecular level in order to overcome existing barriers ., For example , evolving processes for cell differentiation , such as epigenetic mechanisms , facilitated the transition to multicellularity ., At the same time , studies using gene regulatory network models , and corroborated in single-celled model organisms , have shown that mutational robustness and environmental robustness are correlated ., Such correlation may constitute a barrier to the evolution of multicellularity since cell differentiation requires sensitivity to cues in the internal environment during development ., To investigate how this barrier might be overcome , we used a gene regulatory network model which includes epigenetic control based on the mechanism of histone modification via Polycomb Group Proteins , which evolved in tandem with the transition to multicellularity ., Incorporating the Polycomb mechanism allowed decoupling of mutational and environmental robustness , thus allowing the system to be simultaneously robust to mutations while increasing sensitivity to the environment ., In turn , this decoupling facilitated cell differentiation which we tested by evaluating the capacity of the system for producing novel output states in response to altered initial conditions ., In the absence of the Polycomb mechanism , the system was frequently incapable of adding new states , whereas with the Polycomb mechanism successful addition of new states was nearly certain ., The Polycomb mechanism , which dynamically reshapes the network structure during development as a function of expression dynamics , decouples mutational and environmental robustness , thus providing a necessary step in the evolution of multicellularity .","summary":"Understanding the transition to multicellularity remains a key unanswered question in evolutionary biology ., The transition required three essential cellular features to evolve: adhesion , signaling and differentiation ., In particular , cell differentiation requires sensitivity to environmental cues to create distinct cell-specific transcription profiles ., Previous work with model organisms and gene network models showed that biological systems evolve robustness to both mutational and environmental perturbations under stabilizing selection and that furthermore , mutational and environmental robustness are correlated ., Increased robustness to environmental cues will therefore pose a barrier to the development of cell differentiation , and thus multicellularity ., Because several important epigenetic developmental mechanisms , particularly Polycomb-mediated histone modification , appear to have evolved with multicellularity , we hypothesized that such a mechanism facilitated sensitivity to the environment and therefore cell differentiation ., Using a computational model , we integrated Polycomb function with a regulatory model , revealing a clear decoupling between environmental and mutational robustness , allowing increased environmental sensitivity while mutational robustness remained intact ., We also found that Polycomb greatly facilitated the ability for a single gene network to 
create several distinct transcription profiles - each representing a distinct differentiated cell type ., Our work highlights the simple elegance through which the evolution of a key epigenetic mechanism can facilitate the transition to functional cell differentiation .","keywords":"systems biology, evolutionary modeling, regulatory networks, biology, computational biology, evolutionary biology, evolutionary theory","toc":null} +{"Unnamed: 0":2293,"id":"journal.pcbi.1004869","year":2016,"title":"Molecular Infectious Disease Epidemiology: Survival Analysis and Algorithms Linking Phylogenies to Transmission Trees","sections":"The earliest use of phylogenetics in infectious disease epidemiology was to confirm or rule out a suspected source of the human immunodeficiency virus ( HIV ) ., Phylogenetic analyses were used to confirm that five HIV patients were infected at a dental practice in Florida between 1987 and 1989 19 and to rule out infection of a Baltimore patient in 1985 by an HIV-positive surgeon 20 ., A more ambitious use of phylogenetics is to reconstruct a transmission tree , which is a directed graph with an edge from node i to node j if person i infected person j ., An analysis by Leitner et al . 21 , 22 of an HIV-1 transmission cluster in Sweden from the early 1980s compared reconstructed phylogenies based on HIV genetic sequences to a true phylogeny based on the known transmission tree , times of transmission , and times of sequence sampling ., The reconstructed phylogenies accurately reflected the topology of the true phylogeny , and the accuracy increased when sequences from different regions of the HIV genome were combined ., The increasing availability of whole-genome sequence data has renewed interest in combining pathogen genetic sequence data with epidemiologic data to reconstruct transmission trees ., One approach to this problem is to reconstruct the transmission tree using genetic distances ., Spada et al . 23 reconstructed the transmission tree linking five children infected with hepatitis C virus ( HCV ) by finding the spanning tree linking the HCV genetic sequences that minimized the sum of the genetic distances across its edges , excluding edges inconsistent with the epidemiologic data ., The SeqTrack algorithm of Jombart et al . 24 generalizes this approach ., It constructs a transmission tree by finding the spanning tree linking the sampled sequences that minimizes ( or maximizes ) a set of edge weights ., Snitkin et al . 25 used this algorithm to investigate a 2011 outbreak of carbapenem-resistant Klebsiella pneumoniae in the NIH Clinical Center , penalizing edges with large genetic distances , between patients who did not overlap in the same ward , or that required a long silent colonization ., Wertheim et al . 26 constructed a network among HIV patients in San Diego by linking individuals whose sequences were <1% distant ., This was used to estimate community-level effects of HIV prevention and treatment ., A second approach to transmission tree reconstruction is to weight possible infector-infectee links using a pseudolikelihood based on genetic and epidemiologic data ., Ypma et al . 
27 analyzed a 2003 influenza A ( H7N7 ) outbreak among poultry farms in the Netherlands by combining data on the times of infection and culling at each farm , the distances between the farms , and RNA consensus sequences ., The weight of each possible transmission link was the product of components based on temporal , geographic , and genetic data ., The weight of a complete transmission tree was the product of the edge weights ., The R package outbreaker implements an extension of this approach that allows multiple introductions of infection and unobserved cases 28 ., Like the spanning tree methods above , these methods model pathogen evolution as a process that occurs at the moment of transmission ., Morelli et al . 29 proposed a variation of these methods that allows within-host pathogen evolution by incorporating the times of infection and observation into the likelihood component for the genetic sequence data ., A third approach is to reconstruct the transmission tree by combining a phylogeny with epidemiologic data , which was first done by Cottam et al . 30 , 31 in an investigation of a 2001 foot-and-mouth disease virus ( FMDV ) outbreak among farms in the United Kingdom ( UK ) ., The phylogeny and the transmission tree were linked by considering possible locations of the most recent common ancestors ( MRCAs ) of viruses sampled from the farms ., The probability pij that farm i infected farm j was calculated using epidemiologic data on the oldest detected FMDV lesion and the dates of sampling and culling on each farm ., The weight of each possible transmission network was proportional to the product of the pij for all edges i \u2192 j ., Similar methods were used to track farm-to-farm spread of a 2007 FMDV outbreak 32 ., Gardy et al . 33 combined social network analysis with a phylogeny based on whole-genome sequences to construct a transmission tree for a tuberculosis outbreak in British Columbia ., Didelot et al . used the time of the most recent common ancestor ( TMRCA ) to identify possible person-to-person transmission events in studies of Clostridium difficile transmission in the UK 34 and Helicobacter pylori transmission in South Africa 35 ., In a study of Mycobacterium tuberculosis transmission in the Netherlands , Bryant et al . 36 ruled out transmission between individuals whose samples did not share a parent in the phylogeny ., Recent research has identified problems with using genetic sequence data to reconstruct transmission trees ., Simulations by Worby et al . 37 , 38 found that pairwise genetic distances cannot reliably identify sources of infection ., Methods based on phylogenies often underestimate the complexity of the relationship between the phylogenetic and transmission trees ., Branching events in a phylogeny do not necessarily correspond to transmissions , and the topology of the phylogenetic tree need not be the same as the topology of the transmission tree 39\u201341 ., These differences are especially important for diseases with significant within-host pathogen diversity and long latent or infectious periods 41 , 42 ., Ypma et al . 40 and Didelot et al . 42 have developed Bayesian methods that enforce consistency between phylogenetic and transmission trees in Markov chain Monte Carlo ( MCMC ) iterations ., More recently , Lau et al . 
43 have outlined a Bayesian integration of epidemiologic and genetic sequence data that uses likelihoods based on survival analysis , but their approach does not use pathogen phylogenies directly , assuming that a single dominant lineage within each host can be transmitted ., Here , we build a systematic understanding of the relationship between pathogen phylogenies and transmission trees under much weaker assumptions about within-host evolution , allowing the incorporation of genetic sequence data into frequentist and Bayesian survival analysis of infectious disease transmission data ., Reconstruction of transmission trees is most useful to public health if it leads to generalizable scientific insights about disease transmission ., The transmission tree from one outbreak does not generalize to future outbreaks , but a phylogeny provides partial information about who-infected-whom ., Survival analysis provides a rigorous but flexible statistical framework for infectious disease transmission data that explicitly links parameter estimation to the set of possible transmission trees 44\u201346 ., In this framework , estimates of transmission parameters such as covariate effects on infectiousness and susceptibility and evolution of infectiousness over time in infectious individuals are based on sums or averages over all possible transmission trees ., Since a phylogeny linking pathogen samples from infected individuals constrains the set of possible transmission trees , pathogen genetic sequence data can be combined with epidemiologic data to obtain more efficient estimates of transmission parameters ., At any time , each individual i \u2208 {1 , \u2026 , n} is in one of four states: susceptible ( S ) , exposed ( E ) , infectious ( I ) , or removed ( R ) ., Person i moves from S to E at his or her infection time ti , with ti = \u221e if i is never infected ., After infection , i has a latent period of length \u03b5i during which he or she is infected but not infectious ., At time ti + \u03b5i , i moves from E to I , beginning an infectious period of length \u03b9i ., At time ti + \u03b5i + \u03b9i , i moves from I to R , where he or she can no longer infect others or be infected ., The latent period \u03b5i is a nonnegative random variable , the infectious period \u03b9i is a strictly positive random variable , and both have finite mean and variance ., If person i is infected , the time elapsed since the onset of infectiousness at time ti + \u03b5i is the infectious age of, i . 
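The state timeline just described is easy to encode directly. This small helper (names ours, purely a restatement of the definitions above) returns an individual's compartment at time t from the infection time, latent period, and infectious period.

```python
import math

def state(t, t_i, eps_i, iota_i):
    """Compartment of individual i at time t under the S/E/I/R timeline.

    t_i    : infection time (math.inf if never infected)
    eps_i  : latent period length (>= 0)
    iota_i : infectious period length (> 0)
    """
    if t < t_i:
        return "S"                      # susceptible
    if t < t_i + eps_i:
        return "E"                      # infected but not yet infectious
    if t < t_i + eps_i + iota_i:
        return "I"                      # infectious age is t - t_i - eps_i
    return "R"                          # removed

# Example: never-infected individuals stay susceptible forever.
assert state(50.0, math.inf, 2.0, 5.0) == "S"
```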
After becoming infectious at time $t_i + \epsilon_i$, person i makes infectious contact with $j \neq i$ at time $t_{ij} = t_i + \epsilon_i + \tau_{ij}^*$. We define infectious contact to be sufficient to cause infection in a susceptible person, so $t_j \leq t_{ij}$. The infectious contact interval $\tau_{ij}^*$ is a strictly positive random variable with $\tau_{ij}^* = \infty$ if infectious contact never occurs. Since infectious contact must occur while i is infectious or never, $\tau_{ij}^* \in (0, \iota_i]$ or $\tau_{ij}^* = \infty$. For each ordered pair ij, let $C_{ij} = 1$ if infectious contact from i to j is possible and $C_{ij} = 0$ otherwise. For example, the $C_{ij}$ could be the entries in the adjacency matrix for a contact network. However, we do not require that $C_{ij} = C_{ji}$. We assume the infectious contact interval $\tau_{ij}^*$ is generated in the following way: a contact interval $\tau_{ij}$ is drawn from a distribution with hazard function $h_{ij}(\tau)$. If $\tau_{ij} \leq \iota_i$ and $C_{ij} = 1$, then $\tau_{ij}^* = \tau_{ij}$; otherwise, $\tau_{ij}^* = \infty$. Survival analysis of infectious disease transmission data can be viewed as a generalization of discrete-time chain binomial models 47 to continuous time, and it includes parametric methods 44, nonparametric methods 45, and semiparametric relative-risk regression models 46. For simplicity, we use parametric methods and assume that exogenous infections are known. Let the hazard of infectious contact from i to j at time $\tau$ after the onset of infectiousness in i be $$h_{ij}(\tau) = \exp[\beta_0^\top X_{ij}(\tau)]\, h_0(\tau), \qquad (1)$$ where $\beta_0$ is an unknown coefficient vector, $X_{ij}(\tau)$ is a covariate vector, and $h_0(\tau)$ is a baseline hazard function. The vector $X_{ij}(\tau)$ can include individual-level covariates affecting the infectiousness of i or the susceptibility of j as well as pairwise covariates (e.g., membership in the same household). The coefficient vector $\beta_0$ captures covariate effects on the hazard of transmission, and the baseline hazard function $h_0(\tau)$ captures the evolution of infectiousness over time in infectious individuals. We assume that $\tau_{ij}$ can be observed only if j is infected by i at time $t_i + \epsilon_i + \tau_{ij}$. The contact interval $\tau_{ij}$ will be unobserved if i recovers from infectiousness before making infectious contact with j, if j is infected by someone other than i, or if observation of j has stopped. Let $I_i(\tau) = 1_{\tau \in (0, \iota_i]}$ be a left-continuous process indicating whether i remains infectious at infectious age $\tau$. Let $S_{ij}(\tau) = 1_{t_i + \epsilon_i + \tau \leq t_j}$ be a left-continuous process indicating whether j remains susceptible when i reaches infectious age $\tau$. Assume that the population is under observation until a stopping time T and let $O_{ij}(\tau) = 1_{t_i + \epsilon_i + \tau \leq T}$ be a left-continuous process indicating whether j is under observation when i reaches infectious age $\tau$. Then $$Y_{ij}(\tau) = C_{ij} I_i(\tau) S_{ij}(\tau) O_{ij}(\tau) \qquad (2)$$ is a left-continuous process indicating whether infectious contact from i to j can be observed at infectious age $\tau$ of i.
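A direct Python transcription of Eqs (1) and (2) may help fix the notation. The signatures are hypothetical; h0 stands for any baseline hazard function, and the left-continuity technicality is ignored since a plain indicator suffices for numerical work.

```python
import numpy as np

def hazard(tau, beta, X_ij, h0):
    """Eq (1): h_ij(tau) = exp(beta' X_ij(tau)) * h0(tau)."""
    return float(np.exp(beta @ X_ij(tau)) * h0(tau))

def observable(tau, C_ij, t_i, eps_i, iota_i, t_j, T):
    """Eq (2): Y_ij(tau) = C_ij * I_i(tau) * S_ij(tau) * O_ij(tau)."""
    I_i  = 0.0 < tau <= iota_i        # i still infectious at infectious age tau
    S_ij = t_i + eps_i + tau <= t_j   # j still susceptible at that moment
    O_ij = t_i + eps_i + tau <= T     # j still under observation
    return float(C_ij and I_i and S_ij and O_ij)
```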
The assumptions above ensure that censoring of $\tau_{ij}$ is independent for all ij, and they can be relaxed as long as independent censoring is preserved. Let $\theta$ be a parameter vector for a family of hazard functions $h(\tau, \theta)$ such that $h_0(\tau) = h(\tau, \theta_0)$ for an unknown $\theta_0$. To allow maximum likelihood estimation, we assume that $h(\tau, \theta)$ has continuous second derivatives with respect to $\theta$. Let $$h_{ij}(\tau, \beta, \theta) = \exp[\beta^\top X_{ij}(\tau)]\, h(\tau, \theta). \qquad (3)$$ Let $W_j = \{i : t_i + \epsilon_i < t_j \text{ and } C_{ij} = 1\}$ denote the set of all infectious individuals to whom j was exposed while susceptible, which we call the exposure set of j. When we observe who-infected-whom (i.e., v is known), the likelihood is $$L_v(\beta, \theta) = \prod_{j=1}^{n} h_{v_j j}(t_j - t_{v_j} - \epsilon_{v_j}, \beta, \theta)^{1_{v_j \notin \{0, \infty\}}} \prod_{i \in W_j} e^{-\int_0^{\iota_i} h_{ij}(\tau, \beta, \theta) Y_{ij}(\tau)\, d\tau}. \qquad (4)$$ The hazard terms depend on v, but the survival terms do not 44. When we do not observe who-infected-whom, the likelihood is a sum over all possible transmission trees: $L(\beta, \theta) = \sum_{v \in V} L_v(\beta, \theta)$ 44. Each $v \in V$ can be generated by choosing a $v_j \in V_j$ for each endogenous infection j. Given the epidemiologic data, each $v_j$ can be chosen independently 48. This leads to the sum-product factorization $$L(\beta, \theta) = \prod_{j=1}^{n} \Big[ \sum_{i \in V_j} h_{ij}(t_j - t_i - \epsilon_i, \beta, \theta) \Big]^{1_{v_j \notin \{0, \infty\}}} \prod_{i \in W_j} e^{-\int_0^{\iota_i} h_{ij}(\tau, \beta, \theta) Y_{ij}(\tau)\, d\tau}. \qquad (5)$$ The probability of a particular transmission tree v is $$\Pr(v \mid \beta, \theta) = \frac{L_v(\beta, \theta)}{L(\beta, \theta)} = \prod_{j : v_j \notin \{0, \infty\}} \frac{h_{v_j j}(t_j - t_{v_j} - \epsilon_{v_j}, \beta, \theta)}{\sum_{i \in V_j} h_{ij}(t_j - t_i - \epsilon_i, \beta, \theta)}, \qquad (6)$$ and $L_v(\beta, \theta) = \Pr(v \mid \beta, \theta)\, L(\beta, \theta)$. In this framework, estimation of $(\beta, \theta)$ and of the probabilities of the possible transmission trees is simultaneous. An interesting special case is when $h_{ij}(\tau, \beta, \theta) = \lambda$ for all ij. Then $\Pr(v \mid \beta, \theta)$ does not depend on $\lambda$, so the transmission tree is an ancillary statistic 44. The relationship between phylogenies and transmission trees we develop here is similar to the approach taken by Cottam et al. 31, who linked phylogenetic and transmission trees via the locations of common ancestors.
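The sum-product factorization lends itself to a direct implementation. The sketch below assumes hypothetical person records carrying t, eps, iota, an endogenous flag (i.e., $v_j \notin \{0, \infty\}$), the possible-infector set V, and the exposure set W, plus the hazard and observability functions from the previous sketch; none of these names come from the paper's code.

```python
import numpy as np
from scipy.integrate import quad

def log_likelihood(people, hazard_ij, Y_ij):
    """Eq (5) on the log scale: for each endogenous infection j, a log of the
    sum over possible infectors V_j, plus escape (survival) terms over W_j."""
    ll = 0.0
    for j in people:
        if j.endogenous:
            ll += np.log(sum(hazard_ij(j.t - i.t - i.eps, i, j) for i in j.V))
        for i in j.W:
            # survival term: j escapes infectious contact from i
            ll -= quad(lambda tau: hazard_ij(tau, i, j) * Y_ij(tau, i, j),
                       0.0, i.iota)[0]
    return ll

def infector_probabilities(j, hazard_ij):
    """Eq (6), factored per person: Pr(v_j = i) for each i in V_j."""
    w = {i: hazard_ij(j.t - i.t - i.eps, i, j) for i in j.V}
    total = sum(w.values())
    return {i: wi / total for i, wi in w.items()}
```

Because the survival terms cancel in the ratio, the per-person infector probabilities depend only on the hazard terms, which is exactly why the tree is ancillary in the constant-hazard special case.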
It is logically equivalent to the approaches of Ypma et al. 40, who joined the within-host phylogenies of infectors and infectees into a single phylogeny; Didelot et al. 42, who colored lineages in the phylogeny with a unique color for each individual; and Hall and Rambaut 49, who represented transmission trees as partitions of phylogenies. We begin with these assumptions: the first two assumptions concern the biology of disease, and the last three concern the resolution of the epidemiologic data, which can be controlled through study design. Initially, we use only the topology of the pathogen phylogeny to infer the set of possible transmission trees. Later, we consider how branching times at interior nodes further restrict the set of possible transmission trees. To study the impact of a phylogeny on the efficiency of transmission parameter estimates, we conducted a series of 1,000 simulations. In each simulation, there were 100 independent households of size 6. Each household had an index case with an infection time chosen from an exponential distribution with mean one. Each individual i had a binary covariate $X_i$ that could affect infectiousness and susceptibility. Given a parameter vector $\beta = (\beta_{inf}, \beta_{sus})$, the hazard of infectious contact from i to j at infectious age $\tau$ of i is $$h_{ij}(\tau, \beta) = \exp(\beta_{inf} X_i + \beta_{sus} X_j)\, \lambda_0. \qquad (16)$$ In each simulation, $\beta_{inf}$ and $\beta_{sus}$ were independently chosen from a uniform distribution on (\u22121, 1). In all simulations, the baseline hazard was $\lambda_0 = 1$ and the infectious periods were independent exponential random variables with mean one. In each simulation, we analyzed data from the first 200 infections in three ways: using only epidemiologic data via the likelihood in Eq (5), using epidemiologic data with who-infected-whom via the likelihood in Eq (4), and using epidemiologic data with a phylogeny via the likelihood in Eq (7). In the phylogenetic analysis, we assumed a single pathogen sample from each infected individual. The within-host phylogeny for each individual who infected m > 0 individuals was chosen uniformly at random from all rooted, bifurcating phylogenies with m + 1 tips. Within-individual phylogenies were chosen independently and combined into a single phylogeny as in Ypma et al. 40. Thus, the conditional probability $\Pr(\Phi \mid v, Epi)$ for a phylogeny $\Phi$ given a transmission tree v in which each individual j infected $m_j \geq 0$ other individuals was proportional to $$\prod_{j : m_j > 0} \frac{2^{m_j - 1} (m_j - 1)!}{(2 m_j - 1)!}. \qquad (17)$$ The set of transmission trees consistent with the phylogeny was determined using Algorithms 1 and 2. We calculated the mean error, mean squared error, 95% confidence interval coverage probability, and relative efficiency of $\beta_{inf}$, $\beta_{sus}$, and $\ln \lambda_0$ estimates in all three analyses. The simulations were conducted in Python 2.7 (www.python.org) and the analysis was conducted in R 3.2 (cran.r-project.org) via RPy2 2.7 (rpy.sourceforge.net). The Python code is in S1 Text. Parameters, point estimates, and 95% confidence limits are in S1 Data. R code for the simulation data analysis is in S1 Text. To illustrate an application of these algorithms and likelihoods, we use them to analyze farm-to-farm transmission trees of foot-and-mouth disease virus (FMDV) in a cluster of 12 epidemiologically linked farms in Durham, UK in 2001. The genetic and epidemiologic data are publicly available as Data S3 and Data S4 in Morelli et al. 29. These data were previously analyzed by Cottam et al.
31 , 32 , Morelli et al . 29 , Ypma et al . 40 , and Lau et al . 43 ., FMDV is a picornavirus that causes a highly contagious disease in cattle , pigs , sheep , and goats 51 ., Upon infection , there is an incubation period of approximately 1\u201312 days in sheep , 2\u201314 days in cattle , and two or more days in pigs ., The incubation period is followed by an acute febrile illness with painful blisters on the feet , the mouth , and the mammary glands ., It is transmitted through secretions from infected animals , fomites , virus carried on skin or clothing , and aerosolized virus ., Outbreaks of foot-and-mouth disease are difficult to control and can devastate livestock ., During the FMDV outbreak , teams from the UK Department for Environment , Food , and Rural Affairs ( DEFRA ) visited each infected farm 30 , 31 ., They recorded the number and types of susceptible and infected animals , examined infected animals to determine the age of the oldest lesions , and collected epithelial samples ., Finally , they recorded the date of the cull ., We assume that infectiousness begins on the day that the first lesions appeared and ends with the cull , and we assumed a latent period ( between infection and the onset of infectiousness ) of 2\u201316 days ., Fig 2 shows the relative locations of the farms , and Fig 3 shows the timeline of the latent and infectious periods ., Analysis was conducted in R 3 . 2 ( cran . r-project . org ) , and the code is available in S3 Text ., Table 1 shows the mean error , mean squared error , 95% confidence interval coverage probability , and relative efficiency of \u03b2inf , \u03b2sus , and ln \u03bb0 estimators in the simulations ., In all cases , the point estimates were nearly unbiased ( indicated by the mean error squared being much smaller than the mean squared error ) and the 95% confidence interval coverage probabilities were near 0 . 95 ., Fig 4 shows that estimates of \u03b2inf using a phylogeny were more efficient than estimates using epidemiologic data only and less efficient than estimates using who-infected-whom ., By mean squared error , the phylogenetic estimates had a relative efficiency of 1 . 39 compared to estimates using only epidemiologic data and 0 . 80 compared to estimates using who-infected-whom ., Because knowledge of who-infected-whom does not add to our knowledge of who was infected , all three analyses were equally efficient for \u03b2sus ( similar results were obtained for estimates with and without who-infected-whom in Ref 46 ) ., Fig 5 shows that estimates of ln \u03bb0 using a phylogeny were more efficient than those using epidemiologic data only and less efficient than those using who-infected whom ., By mean squared error , the phylogenetic estimates had a relative efficiency of 1 . 17 compared to estimates using only epidemiologic data and 0 . 
90 compared to estimates using who-infected-whom ., Table 2 shows the mean error , mean squared error , 95% confidence interval coverage probability , and relative efficiency of \u03b2inf , \u03b2sus , and ln \u03bb0 estimators that excluded data on uninfected household members ., The mean squared errors were much higher than the corresponding estimators in Table 1 , so their relative efficiency was very low ., In all cases , the efficiency loss from excluding data on individuals who escape infection was much larger than the efficiency gain from incorporating a phylogeny or from knowing exactly who infected whom ., For estimators of \u03b2inf and \u03b2sus , the square of the mean error was much smaller than the mean squared error , indicating little bias ., Estimates of ln \u03bb0 were biased upward , resulting in extremely low relative efficiencies and coverage probabilities ., In 44 , similar results were seen for estimates of the basic reproduction number ( R0 ) when approximate likelihoods for mass-action models , which do not require data on uninfected individuals , were used to analyze data from network-based epidemics ., Table 3 shows results the mean error , mean squared error , 95% confidence interval coverage probability , and relative efficiency of \u03b2inf , \u03b2sus , ln \u03bb0 , and ln \u03b3 estimators from models with Weibull contact interval distributions with rate parameter \u03bb0 = 2 and shape parameter \u03b3 = . 5 ., All estimators are unbiased with 95% confidence interval coverage probabilities near 0 . 95 ., The relative efficiencies are similar to those in Table 1 , showing that the gains in efficiency for estimates of infectiousness hazard ratios and baseline hazards occur under weak assumptions about the baseline hazard ., With no phylogeny , there are 19 , 440 possible transmission trees linking the 12 farms in the Durham cluster ., A phylogeny was constructed in SeaView 52 using consensus RNA sequences from 15 farms , including three farms not epidemiologically linked to the cluster 29 ., We used a generalized time reversible ( GTR ) nucleotide substitution model with four rate classes on 8 , 196 sites ., Fig 6 shows the rooted phylogeny for the 12 farms in the cluster with branch tips scaled to reflect the time of infectiousness onset at each farm ( interior branch lengths do not indicate branching times ) ., The order of infectiousness onsets is known , so first ( x ) is the host with the earliest onset of infectiousness in clade Cx ., Fig 7 shows the postorder host set Dx for each node x in the phylogeny , and Fig 8 shows the host sets ., The host is uniquely determined by the phylogeny for all interior nodes except three ., Fig 9 shows the six possible interior node host assignments and the corresponding transmission trees ., The simulations suggest that a phylogeny can recover much of the information that would be obtained by observing who-infected-whom ., Incorporating a phylogeny generated more precise estimates of \u03b2inf and ln \u03bb0 ., This increase in efficiency remained when infectiousness varied over the course of the infectious period , as in the Weibull models ., The simulations used only phylogenetic topologies and assumed that all within-host topologies were equally likely , limiting the ability of the phylogeny to constrain the set of possible transmission trees ., Using branching times and more realistic models of within-host pathogen evolution would allow greater information about who-infected-whom to be extracted from a phylogeny ., 
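For readers reproducing the tables, the summary statistics reported here can be computed along the following lines. The relative-efficiency convention (reference MSE divided by this estimator's MSE, so values above 1 favor the estimator being summarized) is our reading of the text, and all names are ours.

```python
import numpy as np

def summarize(est, truth, lo, hi, ref_mse=None):
    """Per-parameter simulation summaries: mean error, mean squared error,
    95% CI coverage, and (optionally) relative efficiency vs. a reference.

    est, lo, hi : arrays of point estimates and CI limits across simulations
    truth       : array of true parameter values (one per simulation)
    """
    err = est - truth
    mse = float(np.mean(err ** 2))
    out = {
        "mean_error": float(np.mean(err)),
        "mse": mse,
        "coverage": float(np.mean((lo <= truth) & (truth <= hi))),
    }
    if ref_mse is not None:
        # e.g., ref = epidemiologic-data-only analysis, giving values like 1.39
        out["relative_efficiency"] = ref_mse / mse
    return out
```

A near-zero mean error alongside a larger MSE reproduces the paper's check that the squared mean error is much smaller than the MSE, i.e., that the estimators are nearly unbiased.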
The simulations that excluded data on household members who escape infection showed that this information is critical to estimating \u03b2inf , \u03b2sus , and ln \u03bb0 accurately ., These individuals do not appear anywhere on the pathogen phylogeny , so this point has escaped the attention of many researchers working on incorporating phylogenetics into the analysis of infectious disease transmission data ., Any analysis that excludes this data should have an explicit justification based on a complete-data model\u2014for example , the initial spread of a mass-action epidemic can be analyzed without data on escapees 44 ., In general , epidemiologic studies of emerging infections should be designed to capture information on individuals who were exposed to infection but not infected , which might justify greater emphasis on detailed studies of households , schools , or other settings with rapid transmission and a clearly defined population at risk ., The data analysis showed that the increased precision found in the simulations can be obtained in practice ., The incorporation of phylogenies allowed more precise estimates of the hazard of FMDV transmission from infected to susceptible farms ., For simplicity , our analysis assumed that the times of infectiousness onset were accurately estimated ., A data-augmented MCMC 53 could be used to account for uncertainty in the onset of infectiousness , showing the importance of extending our methods to account for missing data ., A more important limitation of this analysis was the lack of data on uninfected farms ., The hazard function estimates were highly sensitive to the number of uninfected farms in the area where the cluster occurred ., These data often go uncollected in outbreaks because their importance is unrecognized ., This insight has important implications for the theory and practice of molecular infectious disease epidemiology .","headings":"Introduction, Methods, Results, Discussion","abstract":"Recent work has attempted to use whole-genome sequence data from pathogens to reconstruct the transmission trees linking infectors and infectees in outbreaks ., However , transmission trees from one outbreak do not generalize to future outbreaks ., Reconstruction of transmission trees is most useful to public health if it leads to generalizable scientific insights about disease transmission ., In a survival analysis framework , estimation of transmission parameters is based on sums or averages over the possible transmission trees ., A phylogeny can increase the precision of these estimates by providing partial information about who infected whom ., The leaves of the phylogeny represent sampled pathogens , which have known hosts ., The interior nodes represent common ancestors of sampled pathogens , which have unknown hosts ., Starting from assumptions about disease biology and epidemiologic study design , we prove that there is a one-to-one correspondence between the possible assignments of interior node hosts and the transmission trees simultaneously consistent with the phylogeny and the epidemiologic data on person , place , and time ., We develop algorithms to enumerate these transmission trees and show these can be used to calculate likelihoods that incorporate both epidemiologic data and a phylogeny ., A simulation study confirms that this leads to more efficient estimates of hazard ratios for infectiousness and baseline hazards of infectious contact , and we use these methods to analyze data from a foot-and-mouth disease virus outbreak 
in the United Kingdom in 2001 ., These results demonstrate the importance of data on individuals who escape infection , which is often overlooked ., The combination of survival analysis and algorithms linking phylogenies to transmission trees is a rigorous but flexible statistical foundation for molecular infectious disease epidemiology .","summary":"Recent work has attempted to use whole-genome sequence data from pathogens to reconstruct the transmission trees linking infectors and infectees in outbreaks ., However , transmission trees from one outbreak do not generalize to future outbreaks ., Reconstruction of transmission trees is most useful to public health if it leads to generalizable scientific insights about disease transmission ., Accurate estimates of transmission parameters can help identify risk factors for transmission and aid the design and evaluation of public health interventions for emerging infections ., Using statistical methods for time-to-event data ( survival analysis ) , estimation of transmission parameters is based on sums or averages over the possible transmission trees ., By providing partial information about who infected whom , a pathogen phylogeny can reduce the set of possible transmission trees and increase the precision of transmission parameter estimates ., We derive algorithms that enumerate the transmission trees consistent with a pathogen phylogeny and epidemiologic data , show how to calculate likelihoods for transmission data with a phylogeny , and apply these methods to a foot and mouth disease outbreak in the United Kingdom in 2001 ., These methods will allow pathogen genetic sequences to be incorporated into the analysis of outbreak investigations , vaccine trials , and other studies of infectious disease transmission .","keywords":"taxonomy, plant anatomy, medicine and health sciences, pathology and laboratory medicine, infectious disease epidemiology, pathogens, spatial epidemiology, phylogenetics, data management, farms, plant science, phylogenetic analysis, molecular biology techniques, genetic epidemiology, research and analysis methods, infectious diseases, computer and information sciences, epidemiology, leaves, evolutionary systematics, molecular biology, agriculture, molecular biology assays and analysis techniques, biology and life sciences, evolutionary biology","toc":null} +{"Unnamed: 0":236,"id":"journal.pcbi.0030154","year":2007,"title":"PERIOD\u2013TIMELESS Interval Timer May Require an Additional Feedback Loop","sections":"Circadian rhythmicity is the product of a robust 1 , free-running , temperature-compensated 2 , and adaptable 3 , 4 biological clock found in diverse organisms ranging from bacteria to humans ., The model organism Drosophila is commonly used to study this phenomenon due to the relative ease of experimentation and the similarities to the mammalian circadian clock ( reviewed in 5 , 6 ) ., The Drosophila circadian clock is composed of two interlocking feedback loops , shown in Figure 1 ., The first loop is composed of the negative feedback of period ( per ) and timeless ( tim ) , shown in red , which down-regulate their own expression by inhibiting the CLOCK\u2013CYCLE ( CLK\u2013CYC ) transcription factor ., DOUBLE-TIME ( DBT ) binds to and phosphorylates PER , which dimerizes with TIM before localizing to the nucleus via an uncharacterized mechanism ., Circadian rhythms are entrained by light through an increased degradation of TIM protein , shown in yellow ., In the second loop , shown in blue , the expression of 
clk is regulated by vrille ( vri ) and PAR domain protein 1 isoform \u025b ( pdp ) ., Both vri and pdp expression are activated by CLK\u2013CYC ., VRI represses the expression of clk , creating a negative feedback loop , whereas PDP creates a positive feedback loop through activating clk expression ., Incorporating detail on interlocked feedback loops , recently shown to increase the stability and robustness of oscillations 7 , 8 , may be important to accurately capture the network behavior ., Several mathematical models have been created to better characterize the network underlying circadian rhythmicity in Drosophila ( e . g . , 9\u201314 ) ., These initial studies provided important insights into the molecular mechanisms of circadian rhythms and the ability to produce robust 24-hour oscillations ., However , recent experimental observations have created a more detailed view of network interactions , including new critical aspects that are not described by previous models ., It is thus necessary to establish whether a mathematical model of the current consensus network would provide robust oscillations ., The nuclear localization of PER and TIM and the subsequent repression of CLK activity have been two particularly active areas of experimental research ., The necessity of TIM for PER nuclear localization has been long established 15 and was assumed to occur through the nuclear transport of PER\u2013TIM dimers ., In contrast to this mechanism , recent experimental observations now suggest that PER and TIM localize to the nucleus in a primarily independent mechanism 16\u201321 ., Additionally , while TIM is required for PER nuclear localization via a cytoplasmic \u201cinterval timer , \u201d the mechanism controlling this timer is independent of both TIM and PER concentration 21 ., Thus , the way in which TIM affects PER nuclear localization is an open question ., Once in the nucleus , PER ( and to a much lesser extent TIM ) repress CLK activity , recently observed to occur via PER-mediated phosphorylation of CLK by DBT 22 , 23 ., These studies also provided evidence that total levels of CLK remain nearly constant 22 , 23 , in contrast to previously observed CLK oscillations 24\u201326 ., It remains unclear whether constant total CLK concentration can coincide with stable oscillations in this new network ., To address these questions , we first study the possible mechanisms underlying the PER\/TIM cytoplasmic interaction in Drosophila S2 cell culture using mathematical models of this isolated ( arrhythmic ) network ., Using the most likely candidate mechanism , one based on positive feedback , we created a detailed mathematical model of the wild-type Drosophila circadian network ., This model incorporates post-translational modifications to the PER and CLK proteins in addition to including both interlocked feedback loops , without the use of explicit time delays ., The results of this model are consistent with wild-type and mutant experimental observations , provide insight into recent network revisions , and suggest possible experimental directions to explore ., To investigate the six-hour delay created by the cytoplasmic interval timer observed in S2 cell culture by Meyer et al . 
21 , the dynamics of the per\/tim loop were isolated and studied independently of the remaining circadian gene network to mimic the environment within Drosophila S2 cells ., The interactions constituting the three mathematical models studied are shown in Figure 2 ., All models of the isolated per\/tim loop include PER\u2013TIM dimers in the cytoplasm that dissociate immediately prior to nuclear localization and re-association , but differ in the mechanism controlling the timing of this dissociation ., The mass action model is the simplest isolated model and is based on the commonly accepted per\/tim interactions shown in Figure 2A ., In this model , PER\u2013TIM dimers simply dissociate prior to independent nuclear transport ., The dynamics of this model , shown as dotted lines in Figure 3A , were able to produce nuclear localization of PER six hours after inducing expression , but did not accurately capture the switch-like dissociation of PER\u2013TIM observed experimentally 21 ., Next , a model based on the sequential modification of PER\u2013TIM dimers , termed the serial model , was created as shown in Figure 2B ., The serial mechanism may represent the sequential phosphorylation of PER and\/or TIM ., To simplify the mathematics of this model ( see Materials and Methods ) , PER\u2013TIM dimers were assumed to be initially associated before the proceeding series of modifications after which nuclear localization occurs ., Interestingly , this model required hundreds of intermediates to produce a stable five-hour association followed by a precipitous dissociation , as shown in Figure 3B ., Finally , a model based on positive feedback ( previously suggested to increase clock accuracy via the PDP loop of the full circadian network 27 , 28 ) was created as shown in Figure 2C ., Consistent with experimental observations 21 , this model explicitly represents the cytoplasmic association of PER\u2013TIM dimers and the subsequent localization of these dimers into discrete foci ., Within the foci , a background level of activity creates a low amount of dissociation and PER nuclear localization ., A nuclear-generated signaling molecule ( SM ) is then created in response to PER and is used to complete the positive feedback on the dissociation of PER\u2013TIM in foci ., This network is conceptually consistent with the observation that blocking nuclear export ( and thus preventing the SM in this model from exiting the nucleus and exerting the positive feedback ) delays nuclear localization 21 ., The timing of PER nuclear localization in this model , shown as solid lines in Figure 3A , is consistent with experimental observations 21 ., In addition to the feedback SM , this model incorporates another unknown component: the focus-binding mediator ( FBM ) molecule ., The presence of this molecule at limiting concentrations creates a nuclear localization timer that is largely independent of the maximum PER and TIM concentration , as shown in Figure 4A ., A model of the full circadian network was created based on a simplification of the positive feedback isolated per\/tim loop model , the interactions of which are shown in Figures 1 and 5 ., The expression of per , tim , clk , vri , and pdp mRNA and total protein are in excellent agreement with experimental observations , as shown in Figure 6 ( see references therein ) ., The model results show a period of 24 . 0 hours under a light\u2013dark cycle ( Figure, 6 ) and 23 . 
8 hours in constant darkness , also consistent with experimental observations ., Our results show a per dosage dependence of the period length that is consistent with experimental observations 29 , 30 ., A continuation analysis of the maximum transcriptional activation of per in the model demonstrates an inverse relation between per dosage and the period of circadian oscillation ( black lines and points in Figure 7 ) ., In contrast , a continuation analysis of the maximum transcriptional activation of tim ( gray lines and points in Figure 7 ) revealed a profile similar to that of per dosage , and thus inconsistent with experimental observations 17 , 31 ., The results from the model are consistent with numerous homozygous mutant phenotypes , as shown in Table 1 ( see references therein ) ., These results show that arrhythmic null mutants in the per\/tim feedback loop ( i . e . , per01 and tim01 ) are unable to repress the activity of CLK\u2013CYC , resulting in constitutively high expression of unaltered per 32 , 33 , tim 24 , 26 , 33\u201335 , vri 36 , 37 , and pdp 36 ., The decreased PER degradation in dbtP and dbtar mutants resulted in the stable repression of CLK\u2013CYC activity and the constitutively low expression of per , tim , vri , and pdp mRNA and protein 34 , 38 ., Similarly , when the level of active CLK\u2013CYC is reduced by a knockout of CLK or CYC ( clkJrk and cyc0 ) or by eliminating the activator of clk expression ( pdpP205 ) , the resulting levels of per , tim , vri , and pdp mRNA and protein are constitutively low 36 , 37 , 39\u201341 ., Understanding the effects of these mutants provides key insights into the roles of specific genes in the network , and reproducing their behavior provides support for the model representation ., The model accurately captures a majority of the published experimental observations ., However , a number of mutant flies display behavior that is not completely consistent with the model results ., For example , the low levels of tim mRNA in per01 , per mRNA in tim01 , and tim mRNA in tim01 from some publications 32 , 39 conflict with model results; however , experimental results from other publications on these same mutants do agree with our model results 32 , 33 , 37 , 39 ., The low levels of per mRNA in per01 32 , 33 , 39 , 42 , low levels of PER in tim01 17 , 20 , 24 , 26 , 34 , and high levels of per mRNA and protein in dbtP\/dbtar 34 , 38 observed experimentally conflict with the model results and with experimental observations of other E-box mRNA and protein levels ., The mathematical model lacks the ability to produce nuclear CLK\u2013CYC in clkJrk and cyc0 mutants , breaking the activation of E-box genes and producing no clk mRNA , in contrast to experimental observations 27 ., Also , the non\u2013PER-mediated CLK phosphorylation in the model is not able to produce low CLK levels 24 , 26 without nuclear PER in the per01 and tim01 mutants ., The isolated mass action model results ( dotted lines in Figure 3A ) are not consistent with the experimental observation of stably associated PER\u2013TIM dimers and precipitous nuclear localization 21 ., The serial model results ( Figure 3B ) show that hundreds of intermediates may be required to produce this behavior ., This number of intermediates is larger than the potential phosphorylation sites on PER and TIM predicted by ScanProsite ( 22 Casein Kinase II sites on PER , 32 sites on TIM ) 43 ., The progressive phosphorylation of PER and\/or TIM may be observed as a change in electrophoretic 
mobility prior to nuclear localization in S2 cells ., The positive feedback mechanism ( solid lines in Figure 3A ) is able to produce the correct delay and rapid dissociation , making it an attractive alternative to the serial model ., The FBM in the positive feedback model , for which no direct experimental evidence currently exists , is responsible for controlling the onset of nuclear localization independent of PER and TIM concentrations ., Without this molecule , the onset would be well correlated to experiment-to-experiment variability in the limiting concentration of PER and\/or TIM ( unpublished data ) ., The shaggy ( sgg ) kinase is a potential candidate because it has been previously shown to phosphorylate TIM , affecting the nuclear localization of PER 20 , 44 , and also bind to cytoskeletal elements 45 , a possible location of the cytoplasmic foci ., A sgg knockout in S2 cells could be used to observe the possible disruption of PER\/TIM accumulation in cytoplasmic foci , which would be consistent with this hypothesized role for sgg ., No obvious candidate for the SM exists in the literature ., Because small molecules have been shown to cause significant structural changes in PAS domains 46 , one possibility may be a small molecule binding to and elucidating a temporary conformational change in PER , allowing it to dissociate from TIM and localize to the nucleus ., The feedback model is not consistent with all the data presented by Meyer et al . 21 ., The rates of nuclear localization of PER and TIM are not completely independent ( unpublished data ) , and the conflict between rapid nuclear transport and well-controlled timing of nuclear localization results in a timing error that is double the observed seventy minutes 21 ., These differences may be the result of additional regulatory structures not already identified ., The full network results demonstrate that while total CLK levels do not change significantly during the course of a day , the oscillating phosphorylation of CLK can lead to significant and stable oscillations in mRNA ( see Figure 6 ) ., These near-constant total CLK levels are generated by synchronized translation and degradation ( see Figure S1 ) ., This result differs from previous mathematical models 11 , 12 , 14 which show a significant oscillation in CLK level ( consistent with prior experimental observations 24\u201326 ) , and suggest that the oscillation of CLK activity , not concentration , is necessary for circadian rhythmicity ., We find that independent transfer of PER and TIM by simple mass action kinetics is inconsistent with experimental observations , but that an additional feedback loop ( or alternatively a large number of intermediate phosphorylated states ) is able to produce the switch-like dissociation of cytoplasmic PER\u2013TIM underlying the interval timer 21 ., This positive feedback was introduced into a mechanistic mathematical framework for Drosophila circadian rhythms which demonstrates excellent agreement with experimentally observed expression profiles of circadian genes and many circadian mutants ., The framework is consistent with observations of the relationship between per dosage and circadian periodicity ., Post-translational regulation is addressed , including the effect of phosphorylation on the transcriptional activation activity of CLK ., Our results also show that the nuclear translocation of the PER and TIM can occur independently while producing stable oscillations when positive feedback is employed ., The simple mass 
action model of the isolated per\/tim loop is represented by Equations 1\u20137 below ., The serial model is represented by Equations 8\u201313 below , where N is the number of intermediates in the reaction series ., To simplify the solution of the serial model , initial concentrations of monomeric PER and TIM in the cytoplasm were eliminated by assuming that their dimerization occurred quickly ., This assumption allowed the analytic solution of the last PER\u2013TIM dimer in the series of N reactions , and greatly reduced the number of equations for large N . The positive feedback model is represented by Equations 14\u201323 below ., To represent the concentration effect of foci localization , a second-order term is used for slow dissociation of PER\u2013TIM from the foci ( see Equations 15 , 17 , and 18 ) ., Additionally , SM is assumed to catalyze the release of PER\u2013TIM from the foci , and thus is not depleted by this reaction ., The initial concentrations of cytoplasmic PER and TIM in the mass action and positive feedback models and the first PER\u2013TIM dimer in the serial model were set to the maximum concentration of PER and TIM ( 10 , 000 molecules or approximately 104 nM ) ., The initial concentration of FBM in the positive feedback model was set to 5 , 000 molecules ., All other initial concentrations in the isolated models were set to zero ., Additionally , the initial conditions of PER and TIM in the positive feedback model were varied in magnitude based on a log normal distribution fit to the data presented in Figure 1C of 21 ., See Table S1 for a full list of reaction rate constants for the isolated models ., A detailed mathematical framework of Drosophila circadian rhythms using the differential equations below was created based on the interactions shown in Figures 1 and 5 ., This description does not use time delays and explicitly represents the post-translational modifications of PER and CLK ., Illumination in light\u2013dark cycles is modeled via Light , defined as a square wave in Equation 47 ., Light acts upon the degradation of cytoplasmic and nuclear TIM ., The transcriptional activation kinetics are borrowed from 47 , and described in Text S1 ., FBM is not explicitly represented because the inclusion of this molecule at limiting concentrations did not significantly alter the presented results ( unpublished data ) ., As shown in Figure 1 ( and based on the observations of 22 , 23 ) , the presence of nuclear PER and PER\u2013TIM dimers causes the phosphorylation of CLK ., Once phosphorylated , CLK cannot bind to DNA and is either degraded or exported into the cytoplasm where it can be degraded or dephosphorylated ., Chemical reaction rate constants are the only adjustable parameters for which a set of biologically meaningful values was found ( see Parameter Estimation below ) ., See Table S2 for a full list of reaction rate constants for the full circadian model ., With the exception of the positive feedback model of the isolated per\/tim loop , the mathematical models presented in this paper were solved using the LSODAR integrator as part of the SloppyCell package 48 ., Periodic orbits were found through sequential integration cycles until a stable limit cycle was approached ., For the continuation analysis of model parameters , AUTO 2000 was used ., Since a small number of molecules may initiate positive feedback , the isolated per\/tim loop positive feedback model was solved stochastically using Gillespies algorithm ., The model results are an ensemble of 
trajectories for a given parameter set ( a randomly selected subset which is shown in Figure 4B ) , with the trajectory closest to the experimentally observed mean nuclear onset time used in Figure 3A ., The standard deviation of nuclear onset time was determined from this ensemble of trajectories ., Several Drosophila mutant phenotypes were represented by the detailed mathematical model ., The parameter changes used to represent the mutants described in the paper are shown in Table S3 ., A typical result is shown for the arrhythmic dbtp\/dbtar mutants in Figure S2 ., The transient trajectory from a point on the wild-type constant darkness limit cycle illustrates the approach to a stable fixed point solution ., Similarly , all arrhythmic mutants presented in Table 1 were found to approach stable fixed points ( unpublished data ) ., The points and error bars presented in Figure 3 were the result of averaging the five trajectories for cytoplasmic PER\u2013TIM dimers and nuclear PER shown in Figure 1B of Meyer et al . 21 ., These trajectories were normalized to a minimum of zero and maximum of one prior to aligning the paired PER\u2013TIM and nuclear PER trajectories by minimizing the root mean-squared distance ., The mean onset time of the average of the aligned trajectories was then set to 340 min ., The points and error bars in Figure 6 were adapted from the publications listed in Table S4 ., With the exception of pdp mRNA and protein , multiple references were available ., These datasets were interpolated and averaged to produce the means and standard deviations presented in Figure 6 ., pdp mRNA and protein means and error bars were taken directly from 36 ., The points and error bars in Figure 7 are the average and standard deviation of experimental observations of the period of oscillation in response to changes in per dosage 29 , 30 and tim dosage 17 , 31 ., A Monte Carlo random walk , guided by importance sampling , adjusted model parameters to optimize a chi-squared value quantifying the consistency of the model results with available experimental observations ( discussed in the previous section and presented in Figures 3 , 6 , and 7 ) ., Model parameters were manually adjusted to biologically meaningful values where necessary .","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"In this study we present a detailed , mechanism-based mathematical framework of Drosophila circadian rhythms ., This framework facilitates a more systematic approach to understanding circadian rhythms using a comprehensive representation of the network underlying this phenomenon ., The possible mechanisms underlying the cytoplasmic \u201cinterval timer\u201d created by PERIOD\u2013TIMELESS association are investigated , suggesting a novel positive feedback regulatory structure ., Incorporation of this additional feedback into a full circadian model produced results that are consistent with previous experimental observations of wild-type protein profiles and numerous mutant phenotypes .","summary":"The ability of an organism to adapt to daily changes in the environment , via a circadian clock , is an inherently interesting phenomenon recently connected to several human health issues ., Decades of experiments on one of the smallest model animals , the fruit fly Drosophila , has illustrated significant similarities with the mammal circadian system ., Within Drosophila , the PERIOD and TIMELESS proteins are central to controlling this rhythmicity and were recently shown to have a 
rapid and stable association creating an \u201cinterval\u201d timer in the cells cytoplasm ., This interval timer creates the necessary delay between the expression and activity of these genes , and is directly opposed to the previous hypothesis of a delay created by slow association ., We use several mathematical models to investigate the unknown factors controlling this timer ., Using a novel positive feedback loop , we construct a circadian model consistent with the interval timer and many wild-type and mutant experimental observations ., Our results suggest several novel genes and interactions to be tested experimentally .","keywords":"drosophila, biochemistry, computational biology, cell biology","toc":null} +{"Unnamed: 0":2301,"id":"journal.pcbi.1004692","year":2016,"title":"Ensemble Tractography","sections":"Tractography uses diffusion-weighted magnetic resonance imaging ( diffusion MRI ) data to identify specific white matter fascicles as well as the connections these fascicles make between cortical regions 1\u20136 ., Specifying the pattern of connections between brain regions ( \u201cconnectome\u201d ) is a fundamental goal of neuroscience 7\u20139 ., One of the major goals of tractography is to establish a model of the complete collections of white matter tracts and connections ( \u201cstructural connectome\u201d , also referred as \u201ctractogram\u201d ) in the human brain ., Hereafter , we refer to structural connectomes estimated using tractography as \u201cconnectomes\u201d or \u201cconnectome models\u201d ., A variety of tractography algorithms are in wide use 10\u201318 ( see \u201cRelated literature\u201d in Discussion ) ., These algorithms calculate streamlines ( also called \u201cestimated fascicles\u201d ) through the white matter using somewhat different principles ., Some tractography methods ( local tractography ) calculate streamlines by tracking the orientation of diffusion signal locally and step-wise based on deterministic 10 , 19\u201321 or probabilistic selection methods 11 , 12 ., Other tractography methods ( global tractography ) reconstruct the trajectory of streamlines based on goodness-of-fit to diffusion signals 16 , 22\u201331 ., Each algorithm offers some advantages and disadvantages ., For any tractography method , investigators must set parameter values ., Key tractography parameters include maximum and minimum streamline length , seed selection , and stopping criteria for terminating a streamline , and the minimum radius of curvature allowed for building each streamline ., Differences in parameter values yield differences in streamlines 32\u201339 ., The parameter dependency of tractography has been observed in both local and global tractography algorithms 34 ., In common practice , investigators choose an algorithm and set fixed parameter values in the hope of optimizing streamlines for general use ., However , recent studies 40 , 41 demonstrated that no algorithm or parameter values are optimal across all conditions ., Specifically , Chamberland and colleagues 41 show that the best choice depends on a variety of factors such as the specific region of white matter or the specific tract being studied ., For example , Fig 1 compares two tracts and shows how the best parameter value differs ., Tracts between nearby regions on the cortical surface have short association fibers with relatively high curvature ( U-fiber; left panels in Fig 1 ) ., To identify U-fibers investigators must set parameters that allow tracts with high curvature ( top panels in 
Fig 1 ) ., In contrast , the major fascicles of the brain , such as the Inferior Longitudinal Fasciculus ( ILF ) or the Superior Longitudinal Fasciculus ( SLF ) , have relatively long and straight cores ., Better estimates of the core of these tracts are obtained by sampling streamlines with relatively low curvature ( middle panels in Fig 1 ) ., Additional factors affecting the optimal parameter choice for streamline generation may include diffusion MRI acquisition parameters ( e . g . , b-value , voxel size and angular resolution ) ., In general , no single parameter value may capture the full range of streamlines globally in every brain ., In the machine learning and statistical classification literature , it has been shown that for large and heterogeneous data sets combining multiple types of classifiers improves performance over single classifier methods ( Ensemble methods 42\u201344 , see 45 for a review ) ., The human white matter provides similar challenges , because it contains large sets of heterogeneous fascicles different in length , volume and curvature ., Given the complexity of human white matter , ensemble methods incorporating a range of tractography algorithms and parameters may be a valuable approach for improving tractography performance ., The idea of incorporating tracts from multiple sources in the initial construction of a connectome has been suggested in earlier publications 27 , 31 ., We describe an ensemble method , which we call Ensemble Tractography ( ET ) , to reduce problems arising from single algorithm and parameter selection ., We illustrate the method with an example that addresses the parameter selection problem ., First , we create a set of connectomes , each generated using a different parameter setting ., These are called single parameter connectomes ( SPCs ) ., We then combine streamlines from multiple SPCs into a new candidate connectome , and we use Linear Fascicle Evaluation ( LiFE 46 ) to optimize this connectome and eliminate redundant streamlines ., We call the result the Ensemble Tractography Connectome ( ETC ) ., Fig 2 shows a flow diagram of the ET algorithm ., We report two key findings ., ETCs ( 1 ) include streamlines that span a wider range of curvatures as compared to any of the SPCs , including both short- and long-range fibers ( bottom panel in Fig 1 ) , and ( 2 ) ETCs predict the diffusion signal more accurately than any SPC ., To support reproducible research , the algorithm implementation and example data sets are made available at an open website ( http:\/\/purl . stanford . edu\/qw092zb0881 ) ., We now return to the example in Fig 1 ., All connectomes in Fig 1 were optimized using LiFE ., The left-panels show U-fibers connecting two adjacent cortical regions , V3A\/B and V3d ( see Materials and Methods and S1 Fig ) ., The SPCs with high ( 1\/0 . 
25 mm ) and low ( 1\/2 mm ) curvature parameters return very different results ., The high curvature parameter SPC includes many streamlines , and the low curvature SPC has very few streamlines ., The right-panels show estimates of the relatively long-range projections that make up the inferior longitudinal fasciculus ( ILF ) ., In this case the situation is reversed: the high curvature SPC has many fewer streamlines than the low curvature SPC ., Moreover , the terminations of these streamlines do not show the same branching pattern and do not extend into the occipital lobe ., The images in the bottom panels of Fig 1 show the streamlines in the optimized ETC ., The ETC model includes many U-fiber streamlines , similar to the 0 . 25 mm SPC ., The estimated ILF contains the same branching pattern that extends into the occipital lobe as the 2 mm SPC ., The color of the individual ETC streamlines indicates its SPC origin ., The ETC estimates of the U-fibers include streamlines mainly from SPC that permit high curvature ( 0 . 25 mm ) ., The optimized ILF includes streamlines mainly from SPCs with lower curvature ( 1 to 4 mm ) ., The ETC includes streamlines from all of the SPCs ., Nominally , the curvature parameter is a bound\u2014one should not have higher curvature than the specified level 18 ., In practice , however , we find that the bound impacts many properties of the candidate connectome ., We illustrate the effect of the curvature threshold on each SPC in the occipital white matter of the 10 hemispheres in STN96 dataset ( Fig 3; see Materials and Methods; S2 Fig depicts white matter regions used for the analysis ) ., For each of the bounds we tested , the candidate and optimized connectome curvatures form compact , single-peaked distributions; the peak increases monotonically as the minimum radius of curvature increases ( see S3 Fig for distribution in candidate connectomes ) ., When the curvature bound is high ( small radius of curvature ) , the candidate connectome streamlines tend to have a relatively high mean curvature ., When the curvature bound is low ( high radius of curvature ) , the candidate connectome tends to have a relatively low mean curvature ., Thus , the curvature parameter is not simply a threshold; it influences the distribution of streamline curvatures in the optimized and candidate connectomes ., For this reason , setting a lenient bound on the curvature ( i . e . , a low value of the minimum radius of curvature ) does not yield a good representation of long-straight fascicles ( Fig 1 ) ., Conversely , setting a strict bound on the curvature ( i . e . 
, a high value of minimum radius of curvature ) eliminates U-fibers from the candidate connectome ., We confirmed that the lenient bound on the curvature does not produce many straight streamlines using another tractography algorithm implemented in a different software package ( PICo 11; S4 Fig , S1 Text , Section 1 ) ., To reduce the curvature bias present in each SPC , the candidate connectome for the ETC combines samples from multiple SPCs whose parameters span a significant curvature range ( thick orange line; Fig 3 ) ., Hence , the ETC strategy is effective in the sense that ETCs include streamlines with a broader range of curvatures ., The optimized ETC includes more streamlines than any of the optimized SPCs ( Fig 4a ) ., Importantly , nearly twice as many streamlines from the candidate ETC survive the LiFE process and contribute to the diffusion signal predictions ., Typically , streamlines generated using whole brain tractography do not pass through all of the voxels in the white matter ., For very simple algorithms , such as deterministic tracking based on diffusion tensors 10 , as many as 17% of the white matter voxels contain no streamlines ( see S8c Fig ) ., We show that ETC streamlines pass through a larger percentage of white matter voxels than any of the individual SPCs ( Fig 4b ) ., The streamlines in SPCs ( based on CSD and probabilistic tractography methods 18 ) cover up to 95% of the white matter , whereas streamlines in the ETC cover up to 98% of the white matter ., Because in reality the entire white matter volume contains streamlines , this result suggests that ET recovers more information from the diffusion data ., The failure to find streamlines in about 2% of the voxels shows that we continue to miss some fascicles ., While the number of ETC streamlines is nearly twice the number in any SPC , the white matter coverage is only about 3 percentage points greater ., It follows that the number of streamlines per white matter voxel in the ETC is larger than the number in any of the SPCs ., Whereas the mean number of streamlines per voxel in the SPCs is around 13 , the mean in the ETC is nearly 18 ., Fig 4c shows a histogram that counts the number of streamlines in each voxel , comparing the 2 mm SPC and the ETC ., Notice that many of the voxels ( 77 . 9% of voxels on average ) have more streamlines in the ETC ., The larger number of streamlines within each voxel implies that the ETC streamlines can predict more complex diffusion orientation distribution functions ., S5 Fig describes an example crossing-fascicle voxel in which the ETC predicts the diffusion signal significantly better than the SPC ., This is because each streamline can point in a slightly different direction and thus potentially predict diffusion in more directions ., Coupled with the greater coverage across white matter voxels , the ETC should be able to provide a better prediction of the diffusion signal ., Next , we compare SPC and ETC connectome accuracy ( Fig 5 ) ., Accuracy is evaluated as the ratio of root mean square error between model and data to the test-retest reliability ( Rrmse 46\u201348; see Eq 3 in Materials and Methods ) ., Fig 5a shows a two-dimensional histogram comparing the accuracy of the ETC and the 2 mm SPC in a single , typical subject ., For large portions of the white matter ( 62 . 4% of voxels in Fig 5a ) , accuracy is higher ( Rrmse lower ) for the ETC than the SPC ., Fig 5b describes the median Rrmse of the 6 connectome models ( SPC; 0 . 25 , 0 . 
5 , 1 , 2 and 4 mm and ETC ) across all ten occipital lobes ., The median ETC accuracy is significantly higher than that of any of the SPCs ., S6 Fig compares the prediction accuracy of the ETC and the SPC ( 2 mm ) and tests whether increasing the size of the candidate SPC reduces the advantage of the ETC over the SPC ( see S1 Text , Section 2 ) ., In this comparison , we matched the size of the candidate SPC to that of the ETC ( 800 , 000 streamlines; BigSPC model; see S1 Text , Section 2 ) ., The optimized BigSPC supports as many streamlines as the ETC ( S6b Fig ) but the ETC covers a larger portion of the total white matter volume ( S6c Fig ) ., Importantly , the prediction accuracy of the ETC is consistently higher than that of BigSPC ( S6d Fig; see S6e Fig for comparison in individual hemispheres ) ., Fig 6 compares connectome model accuracy between different white matter pathways ( the U-fiber and the ILF , as shown in Fig 1 ) ., We compared the accuracy of six connectome models in the voxels defined by the best U-fiber ( Fig 6a , left , ETC U-fiber ) and ILF ( Fig 6b , left , ETC ILF ) within the same hemisphere of the same subject ., Among the SPC models , the 0 . 25 mm curvature threshold produces the best performance in the U-fiber voxels , whereas the 4 mm SPC performs better than the others in the ILF voxels ( Fig 6b ) ., This shows that the best SPC differs between white matter pathways and brain volumes ., In both the U-fiber and the ILF , the ETC model performs similarly to or better than the best SPC model ( Fig 6 ) ., Testing the ETC performance in the total white matter volume is computationally demanding , because the matrix size in LiFE increases with ET ( see the recent paper 49 for the computational load of LiFE ) ., For example , if we combine five whole-brain SPCs of 2 million streamlines each , the candidate ETC size is 10 million streamlines ., In order to generate a whole-brain ETC model , we used the ETC-preselection method ( see S1 Text , Section 5 ) ., Briefly , we selected the streamlines with the highest weight from each SPC ( those contributing most to predicting the diffusion signal ) to build the candidate ETC ., This ETC-preselection method reduces the size of the candidate ETC , but produces better prediction accuracy as compared with any SPC ( S10 Fig ) ., Using the ETC-preselection method , we optimized the whole-brain ETCs in five brains ( Fig 7 ) ., We compared the properties of the preselected ETC with those of the SPCs ., Consistent with results in occipital white matter ( Figs 4 and 5 ) , the whole-brain ETC supports a larger number of streamlines ( Fig 7a ) , covers a larger portion of the white matter ( Fig 7b ) , and predicts the diffusion signal better than any of the SPCs ( Fig 7c ) ., Fig 7d shows maps of measured ( Data 1 and 2 ) and predicted diffusion signal for a single diffusion direction using two connectome models ( SPC 0 . 
25 mm and ETC with preselection ) ., The result suggests that the ET approach is also effective for whole-brain connectome analysis ., We also evaluated ET using data from the Human Connectome Project ( HCP90 50; see Materials and Methods ) ., Consistent with results obtained on the STN96 data set , ET included a wider range of curvatures ( S7b Fig ) , increased the streamline count and white matter coverage ( S7a and S7c Fig ) , and improved the accuracy of diffusion signal prediction ( S7d Fig ) ., The ETC on the HCP90 dataset also supports the example short- and long-range fascicles , the U-fiber and the ILF ( S7e Fig ) , as identified on the STN96 data ., Thus , the properties of ET are consistent between these different datasets ., In addition to the parameter ensemble described above , we also used the ET method to create candidate connectomes that include streamlines from different algorithms ( Tensor deterministic , CSD deterministic and CSD probabilistic in MRtrix 18; see S1 Text , Section 3 ) ., The optimized connectomes from the ensemble of these algorithms had better prediction accuracy , and both increased streamline count and white matter coverage ( S8 Fig ) ., We also observed that the ETC generated using an ensemble of Fiber Orientation Distribution ( FOD ) amplitude cutoff parameters had better prediction accuracy as compared with SPCs ( S9 Fig; S1 Text , Section 4 ) ., Hence , we find substantial evidence across different diffusion datasets , tractography methods , and parameter sets that ET improves the connectome model ., There is an enormous space of possible methods for creating candidate ETCs ., The method for creating ensembles will need to evolve over many experiments from different laboratories ., This paper presents one simple ET architecture that we found to be effective and efficient: adding all streamlines from each parameter setting and optimizing the ETC ., One of the disadvantages of the ETC method presented in this paper is the computational demand required to build large candidate sets ., In the following we discuss alternative architectures that we considered ., S1 Text ( Section 5 ) proposes one alternative ET method: ETC-preselection ., In this method , we chose the 20% of streamlines that contribute most to predicting the diffusion signal from each of the individually optimized SPCs to build a new candidate ETC ., The advantage of this method is that the size of the new candidate ETC equals that of the original candidate SPCs ., The disadvantage of this method is that we must individually evaluate ( using LiFE ) each SPC as well as the ETC ., Our results show that ETC-preselection performs significantly better than SPCs , and only slightly worse than ETC without preselection ( S10 Fig ) ., Preselection is particularly useful for whole-brain models including large streamline sets ( Fig 7 ) , but not necessarily best for smaller connectome models ., As it is impossible to evaluate all possible ET algorithms in an initial paper , we describe the method and provide an open-source implementation ( francopestilli . github . io\/life\/; github . 
com\/brain-life\/life\/ ) to the community for exploration of the many possible options ., Several groups have analyzed tractography limitations , including parameter and algorithm dependence 32\u201340 ., Bastiani and colleagues 34 analyzed how parameter choices and tractography algorithms influence connectomes and network properties ., Their paper and others motivate the need for a means of deciding which solutions are best supported by the data 46 , 51\u201355 ( see also 56 ) ., Several other groups also noted that the best parameter differs between different white matter pathways 40 , 41 ., BlueMatter 27 used streamlines generated by three different algorithms ( STT 20 , TEND 21 , ConTrack 16 ) to create a candidate connectome ., An important difference is that the BlueMatter algorithm could only be run on a supercomputer ( BlueGene\/L , a 2048-processor supercomputer with 0 . 5 TB of memory ) , while the current ET algorithm using LiFE runs on a personal computer 49 ., This advance enables investigators to systematically combine streamlines from many different parameters and algorithms and adopt ensemble tractography into their daily workflow ., This paper is the first systematic exploration to sweep out several key parameters ( curvature , stopping criterion ) in tractography and demonstrate the advantage of ensemble methods in terms of anatomy ( Fig 1 ) and prediction accuracy for the diffusion signal ( Figs 5 and 7 ) ., A number of groups compared tractography with an independent measurement , such as invasive tract tracing or manganese-enhanced MRI in macaques or mice 39 , 40 , 57\u201360 ., For example , Thomas et al . 22 collected a diffusion data set in one macaque and compared the results of several single parameter connectomes with tracer measurements from a different macaque ., This comparison has several limitations ., First , the tracer measurements depend upon factors including the tracer type ( e . g . , anterograde or retrograde ) and the selection of planes and injection sites; hence , they can differ substantially ( e . g . 61 , 62 ) ., When the methods disagree , it is often best to assemble a conclusion from multiple studies ., Second , comparisons in a particular data set do not guarantee validation in a different experiment ., For example , we cannot use high-resolution human adult brain fMRI data acquired in a 7T scanner to support conclusions made from lower-resolution fMRI data in children acquired using a 1 . 
5 T scanner ., Each methodology requires a means of stating both the conclusions and the strength of the support for those conclusions ., It is best to integrate fully justified findings derived from a variety of methods rather than discarding one method or another ., Others have proposed evaluating tractography by defining ground truth using synthetic phantoms 31 , 63\u201366 ., Some investigators have pointed out the logical limitations of this approach 5 ., We agree that there are limitations to using phantoms for testing tractography but that in some cases synthetic phantoms can be valuable for analyzing computational methods ., Unfortunately , for our current work none of the currently available phantoms can be used ., This is because most phantoms have been generated using either single tractography parameters 67 or simple fiber configurations 63 ., Close and colleagues 68 provide software for generating numerical phantoms that can simulate complex fiber organization ., However , their method was not proposed to evaluate tractography performance by comparison with ground truth ., This fact makes it impossible for us to use the current phantoms to test the superiority of multiple-tractography approaches such as ET in resolving multiple types of fiber configurations simultaneously ., The potential value of creating connectomes from a collection of tractography methods was mentioned by both Sherbondy et al . 27 and Lemkaddem et al . 31 ., Here , we provide a specific , open-source implementation , and we begin a systematic analysis of this methodology ., The analyses show that ET based on sweeping out the curvature parameter has the specific benefit of creating connectomes with both short- and long-range fascicles ., In addition , the ET method produces more fascicles , larger coverage , and a better cross-validated prediction error ., In this paper , we described the advantage of combining multiple tractography parameters and algorithms in order to improve the accuracy of connectome models ., We use several example parameters and algorithms as a target for ET applications , and there are likely to be other beneficial combinations of algorithms and parameters which will be tested in future work ., For example , we could combine connectomes by sweeping out two different parameters , or combine connectomes generated by different software packages that implement different algorithms , or combine connectomes generated using the different seeding strategies tested in the literature 38 , 65 , 69 ., Although it is impossible to test every pattern of combinations in this paper , we made the LiFE software open ( http:\/\/francopestilli . github . io\/life\/; https:\/\/github . 
com\/brain-life\/life\/ ) to help other researchers test different ET architectures ., Future studies by multiple research groups will clarify the optimal ET architecture in terms of both model accuracy and computational efficiency ., ET will be generally applicable to many different proposed tractography algorithms ., Several groups proposed generating streamlines based on goodness-of-fit to the local diffusion signals ( global tractography; 16 , 22\u201331 , 53 ) ., While global tractography has advantages compared with local tractography 70 , it too requires the user to set parameters , and this produces a parameter-selection dependency 34 ., The ET approach will be effective in supporting both local and global tractography ., Current tractography uses a fixed set of parameters to generate each streamline ., However , several fascicles , such as many within the optic radiation , include both curving and straight sections 71\u201374 ., When this is known a priori , it may be more accurate to vary the tractography parameters along a single fascicle , allowing high and low curvature in the relevant portions of the tract ., LiFE and ET will provide the opportunity to evaluate the model accuracy of new tractography tools in terms of the prediction accuracy on the diffusion signal ., It is widely agreed that diffusion MRI contributes useful information about the large and long-range fasciculi in the human brain 75\u201378 ., Meanwhile , the existence of the U-fiber system has been supported 79 , 80 , but not extensively studied in the literature , presumably because of the limitations in tractography parameter selection ., The optimized ETCs extend tractography to include both long- and short-range fascicles in a single connectome , improving on the optimized SPCs , which include one or the other ., The higher model accuracy and the inclusion of both short- and long-range fibers are evidence that the optimized ETC improves on any SPC ., The preliminary ET results are encouraging , but they will surely benefit from further optimization ., Tracer studies are not well-suited to identifying long-range pathways in the human brain ., Even in animal models , with more than a century of history , recent tracer measurements challenge conventional thinking about long-range pathways ., Reports describing many newly found projections demonstrate that the field is active and evolving 62 , 81 , 82 ., The progress in human tractography complements the strengths of tracer studies in animal models ., Ultimately , combining insights from these technologies will provide a more complete view of human brain anatomy and function ., We used two magnetic resonance diffusion imaging datasets ., The STN96 dataset was acquired at the Stanford Center for Neurobiological Imaging ( CNI ) ; the HCP90 dataset was acquired by the Human Connectome Consortium 50 ., We identified several tracts within each optimized connectome to compare how different connectomes represent anatomical features of the white-matter fascicles ., All figures of brain anatomy and fascicles were made using Matlab Brain Anatomy ( www . github . com\/francopestilli\/mba ) ., We evaluated model accuracy for whole-brain connectomes ., To do so , we generated five 2-million-streamline candidate SPCs by using different curvature thresholds ( from 0 . 
25 mm to 4 mm ) ., We then used LiFE to assign a weight to each streamline ., Next , we selected the top 400 , 000 streamlines with the highest weight from each SPC ( preselection method; see S1 Text , Section 5 ) ., This resulted in an ETC connectome containing 2 million streamlines ., Finally , we optimized this ETC using LiFE ., The processing of one whole-brain connectome model with 2 million streamlines requires 28 . 4 hours on a computer with 16 processing cores and 32 GB of random access memory ., The ILF extends outside the occipital white matter region used for the main analysis ( S2 Fig ) ., In order to evaluate the connectome model along these fascicles , we selectively fitted LiFE to white matter voxels containing these tracts ., To do so , we ( 1 ) identified the ILF from the candidate connectome in all connectome models using the identification method described above , ( 2 ) concatenated all streamlines identified as ILF across multiple connectome models , and ( 3 ) extracted the voxels through which any of these streamlines pass ., Finally , we obtained a white matter region covering the ILF ., LiFE analysis on the ILF is limited to these portions of white matter in all connectome models tested .","headings":"Introduction, Results, Discussion, Materials and Methods","abstract":"Tractography uses diffusion MRI to estimate the trajectory and cortical projection zones of white matter fascicles in the living human brain ., There are many different tractography algorithms and each requires the user to set several parameters , such as curvature threshold ., Choosing a single algorithm with specific parameters poses two challenges ., First , different algorithms and parameter values produce different results ., Second , the optimal choice of algorithm and parameter value may differ between different white matter regions or different fascicles , subjects , and acquisition parameters ., We propose using ensemble methods to reduce algorithm and parameter dependencies ., To do so , we separate the processes of fascicle generation and evaluation ., Specifically , we analyze the value of creating optimized connectomes by systematically combining candidate streamlines from an ensemble of algorithms ( deterministic and probabilistic ) and systematically varying parameters ( curvature and stopping criterion ) ., The ensemble approach leads to optimized connectomes that provide better cross-validated prediction error of the diffusion MRI data than optimized connectomes generated using a single algorithm or parameter set ., Furthermore , the ensemble approach produces connectomes that contain both short- and long-range fascicles , whereas single-parameter connectomes are biased towards one or the other ., In summary , a systematic ensemble tractography approach can produce connectomes that are superior to standard single-parameter estimates both for predicting the diffusion measurements and estimating white matter fascicles .","summary":"Diffusion MRI and tractography opened a new avenue for studying white matter fascicles and their tissue properties in the living human brain ., There are many different tractography methods , and each requires the user to set several parameters ., A limitation of tractography is that the results depend on the selection of algorithms and parameters ., Here , we analyze an ensemble method , Ensemble Tractography ( ET ) , that reduces the effect of algorithm and parameter selection ., ET creates a large set of candidate streamlines using an ensemble of algorithms and parameter 
values and then selects the streamlines with strong support from the data using a global fascicle evaluation method ., Compared to single parameter connectomes , ET connectomes predict diffusion MRI signals better and cover a wider range of white matter volume ., Importantly , ET connectomes include both short- and long-association fascicles , which are not typically found together in single-parameter connectomes .","keywords":"medicine and health sciences, diagnostic radiology, tractography, nervous system, applied mathematics, brain, neuroscience, cerebral hemispheres, occipital lobe, magnetic resonance imaging, algorithms, simulation and modeling, left hemisphere, brain morphometry, mathematics, brain mapping, neuroimaging, research and analysis methods, diffusion magnetic resonance imaging, imaging techniques, connectomics, radiology and imaging, diagnostic medicine, neuroanatomy, diffusion tensor imaging, anatomy, central nervous system, biology and life sciences, physical sciences, cerebral cortex","toc":null}
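To make the ETC-preselection step described above concrete, here is a minimal Python sketch. It is not the LiFE implementation (which is published at the URLs above); the function and variable names are hypothetical, and it expresses only the selection logic: keep the top-weighted streamlines from each optimized single-parameter connectome and pool them into one candidate ensemble connectome, which would then be re-optimized with LiFE.

```python
import numpy as np

def preselect_candidate_etc(spc_streamlines, spc_weights, n_keep=400_000):
    """Pool the top-weighted streamlines from several single-parameter
    connectomes (SPCs) into one candidate Ensemble Tractography Connectome.

    spc_streamlines: list over SPCs; each element is a list of streamlines.
    spc_weights: list over SPCs; each element is a 1-D array of LiFE
        weights, one weight per streamline of the matching SPC.
    n_keep: number of highest-weight streamlines retained per SPC.
    """
    candidate = []
    for streamlines, weights in zip(spc_streamlines, spc_weights):
        order = np.argsort(np.asarray(weights))[::-1]  # descending weight
        candidate.extend(streamlines[i] for i in order[:n_keep])
    return candidate  # pooled set, to be re-optimized (e.g., with LiFE)
```

With five 2-million-streamline SPCs and n_keep = 400,000, the pooled candidate contains 2 million streamlines, matching the size of each original candidate SPC, as described in the Materials and Methods above.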